diff --git a/MANUAL.html b/MANUAL.html
index 29c5bfac1..8b1bf934d 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -81,8 +81,75 @@
+
NAME
+rclone - manage files on cloud storage
+SYNOPSIS
+Usage:
+ rclone [flags]
+ rclone [command]
+
+Available commands:
+ about Get quota information from the remote.
+ authorize Remote authorization.
+ backend Run a backend-specific command.
+ bisync Perform bidirectional synchronization between two paths.
+ cat Concatenates any files and sends them to stdout.
+ check Checks the files in the source and destination match.
+ checksum Checks the files in the destination against a SUM file.
+ cleanup Clean up the remote if possible.
+ completion Output completion script for a given shell.
+ config Enter an interactive configuration session.
+ convmv Convert file and directory names in place.
+ copy Copy files from source to dest, skipping identical files.
+ copyto Copy files from source to dest, skipping identical files.
+ copyurl Copy the contents of the URL supplied to dest:path.
+ cryptcheck Cryptcheck checks the integrity of an encrypted remote.
+ cryptdecode Cryptdecode returns unencrypted file names.
+ dedupe Interactively find duplicate filenames and delete/rename them.
+ delete Remove the files in path.
+ deletefile Remove a single file from remote.
+ gendocs Output markdown docs for rclone to the directory supplied.
+ gitannex Speaks with git-annex over stdin/stdout.
+ hashsum Produces a hashsum file for all the objects in the path.
+ help Show help for rclone commands, flags and backends.
+ link Generate public link to file/folder.
+ listremotes List all the remotes in the config file and defined in environment variables.
+ ls List the objects in the path with size and path.
+ lsd List all directories/containers/buckets in the path.
+ lsf List directories and objects in remote:path formatted for parsing.
+ lsjson List directories and objects in the path in JSON format.
+ lsl List the objects in path with modification time, size and path.
+ md5sum Produces an md5sum file for all the objects in the path.
+ mkdir Make the path if it doesn't already exist.
+ mount Mount the remote as file system on a mountpoint.
+ move Move files from source to dest.
+ moveto Move file or directory from source to dest.
+ ncdu Explore a remote with a text based user interface.
+ nfsmount Mount the remote as file system on a mountpoint.
+ obscure Obscure password for use in the rclone config file.
+ purge Remove the path and all of its contents.
+ rc Run a command against a running rclone.
+ rcat Copies standard input to file on remote.
+ rcd Run rclone listening to remote control commands only.
+ rmdir Remove the empty directory at path.
+ rmdirs Remove empty directories under the path.
+ selfupdate Update the rclone binary.
+ serve Serve a remote over a protocol.
+ settier Changes storage class/tier of objects in remote.
+ sha1sum Produces an sha1sum file for all the objects in the path.
+ size Prints the total size and number of objects in remote:path.
+ sync Make source and dest identical, modifying destination only.
+ test Run a test command
+ touch Create new file or change file modification time.
+ tree List the contents of the remote in a tree like fashion.
+ version Show the version number.
+
+Use "rclone [command] --help" for more information about a command.
+Use "rclone help flags" to see the global flags.
+Use "rclone help backends" for a list of supported services.
+
Rclone syncs your files to cloud storage

@@ -156,6 +223,8 @@
- Enterprise File Fabric
- Fastmail Files
- Files.com
+- FileLu Cloud Storage
+- FlashBlade
- FTP
- Gofile
- Google Cloud Storage
@@ -180,7 +249,8 @@
- Magalu
- Mail.ru Cloud
- Memset Memstore
-- Mega
+- MEGA
+- MEGA S4
- Memory
- Microsoft Azure Blob Storage
- Microsoft Azure Files Storage
@@ -411,7 +481,7 @@ kill %1
Snap installation

Make sure you have Snapd installed
-$ sudo snap install rclone
+$ sudo snap install rclone
Due to the strict confinement of Snap, the rclone snap cannot access the real /home/$USER/.config/rclone directory; the default config path is as below.
- Default config directory:
@@ -424,7 +494,7 @@ kill %1
Note that this is controlled by a community maintainer, not the rclone developers, so it may be out of date. Its current version is as below.

Source installation
-Make sure you have git and Go installed. Go version 1.18 or newer is required, the latest release is recommended. You can get it from your package manager, or download it from golang.org/dl. Then you can run the following:
+Make sure you have git and Go installed. Go version 1.22 or newer is required, the latest release is recommended. You can get it from your package manager, or download it from golang.org/dl. Then you can run the following:
git clone https://github.com/rclone/rclone.git
cd rclone
go build
@@ -519,6 +589,7 @@ go build
- Digi Storage
- Dropbox
- Enterprise File Fabric
+- FileLu Cloud Storage
- Files.com
- FTP
- Gofile
@@ -585,7 +656,7 @@ rclone copy /local/path remote:path # copies /local/path to the remote
rclone sync --interactive /local/path remote:path # syncs /local/path to the remote
rclone config
Enter an interactive configuration session.
-Synopsis
+Synopsis
Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
rclone config [flags]
Options
@@ -613,7 +684,7 @@ rclone sync --interactive /local/path remote:path # syncs /local/path to the rem
rclone copy
Copy files from source to dest, skipping identical files.
-Synopsis
+Synopsis
Copy the source to the destination. Does not transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Doesn't delete files from the destination. If you want to also delete files from destination, to make it match source, use the sync command instead.
Note that it is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.
To copy single files, use the copyto command instead.
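For example (the paths here are illustrative):
rclone copy remote:data/photos /tmp/photos
copies the files inside remote:data/photos directly into /tmp/photos, not into /tmp/photos/photos.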
@@ -666,6 +737,7 @@ destpath/sourcepath/two.txt
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -692,6 +764,7 @@ destpath/sourcepath/two.txt
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -716,7 +789,7 @@ destpath/sourcepath/two.txt
rclone sync
Make source and dest identical, modifying destination only.
-Synopsis
+Synopsis
Sync the source to the destination, changing the destination only. Doesn't transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary (except duplicate objects, see below). If you don't want to delete files from destination, use the copy command instead.
Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.
rclone sync --interactive SOURCE remote:DESTINATION
@@ -793,6 +866,7 @@ destpath/sourcepath/two.txt
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -812,6 +886,7 @@ destpath/sourcepath/two.txt
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
+ --list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
@@ -833,6 +908,7 @@ destpath/sourcepath/two.txt
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -857,7 +933,7 @@ destpath/sourcepath/two.txt
rclone move
Move files from source to dest.
-Synopsis
+Synopsis
Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server-side directory move operation.
To move single files, use the moveto command instead.
If no filters are in use and if possible this will server-side move source:path into dest:path. After this source:path will no longer exist.
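For example (illustrative paths):
rclone move source:dir dest:dir --delete-empty-src-dirs
moves the contents of source:dir into dest:dir and then removes the emptied source directories.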
@@ -898,6 +974,7 @@ destpath/sourcepath/two.txt
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -924,6 +1001,7 @@ destpath/sourcepath/two.txt
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -948,7 +1026,7 @@ destpath/sourcepath/two.txt
rclone delete
Remove the files in path.
-Synopsis
+Synopsis
Remove the files in path. Unlike purge it obeys include/exclude filters so can be used to selectively delete files.
rclone delete only deletes files but leaves the directory structure alone. If you want to delete a directory and all of its contents use the purge command.
If you supply the --rmdirs flag, it will remove all empty directories along with it. You can also use the separate command rmdir or rmdirs to delete empty directories only.
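For example, to delete matching files and then prune the empty directories left behind (illustrative path):
rclone delete --rmdirs remote:dir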
@@ -979,6 +1057,7 @@ rclone --dry-run --min-size 100M delete remote:path
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1003,8 +1082,9 @@ rclone --dry-run --min-size 100M delete remote:path
rclone purge
Remove the path and all of its contents.
-Synopsis
+Synopsis
Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use the delete command if you want to selectively delete files. To delete empty directories only, use command rmdir or rmdirs.
+The concurrency of this operation is controlled by the --checkers global flag. However, some backends will implement this command directly, in which case --checkers will be ignored.
Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.
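For example, to preview what would be removed (illustrative path):
rclone purge --dry-run remote:old-backups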
rclone purge remote:path [flags]
Options
@@ -1036,7 +1116,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone rmdir
Remove the empty directory at path.
-Synopsis
+Synopsis
This removes the empty directory given by path. It will not remove the path if it has any objects in it, not even empty subdirectories. Use command rmdirs (or delete with option --rmdirs) to do that.
To delete a path and any objects in it, use the purge command.
rclone rmdir remote:path [flags]
@@ -1054,7 +1134,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone check
Checks the files in the source and destination match.
-Synopsis
+Synopsis
Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files that don't match. It doesn't alter the source or destination.
For the crypt remote there is a dedicated command, cryptcheck, that is able to check the checksums of the encrypted files.
If you supply the --size-only flag, it will only compare the sizes, not the hashes. Use this for a quick check.
@@ -1097,6 +1177,7 @@ rclone --dry-run --min-size 100M delete remote:path
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1121,7 +1202,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone ls
List the objects in the path with size and path.
-Synopsis
+Synopsis
Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default.
Eg
$ rclone ls swift:bucket
@@ -1156,6 +1237,7 @@ rclone --dry-run --min-size 100M delete remote:path
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1180,7 +1262,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone lsd
List all directories/containers/buckets in the path.
-Synopsis
+Synopsis
Lists the directories in the source path to standard output. Does not recurse by default. Use the -R flag to recurse.
This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of the directory. Eg
$ rclone lsd swift:
@@ -1220,6 +1302,7 @@ rclone --dry-run --min-size 100M delete remote:path
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1244,7 +1327,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone lsl
List the objects in path with modification time, size and path.
-Synopsis
+Synopsis
Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default.
Eg
$ rclone lsl swift:bucket
@@ -1279,6 +1362,7 @@ rclone --dry-run --min-size 100M delete remote:path
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1303,7 +1387,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone md5sum
Produces an md5sum file for all the objects in the path.
-Synopsis
+Synopsis
Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.
By default, the hash is requested from the remote. If MD5 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling MD5 for any remote.
For other algorithms, see the hashsum command. Running rclone md5sum remote:path is equivalent to running rclone hashsum MD5 remote:path.
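For example, to force local hashing on a remote without MD5 support (illustrative path):
rclone md5sum remote:path --download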
@@ -1326,6 +1410,7 @@ rclone --dry-run --min-size 100M delete remote:path
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1350,7 +1435,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone sha1sum
Produces an sha1sum file for all the objects in the path.
-Synopsis
+Synopsis
Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.
By default, the hash is requested from the remote. If SHA-1 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling SHA-1 for any remote.
For other algorithms, see the hashsum command. Running rclone sha1sum remote:path is equivalent to running rclone hashsum SHA1 remote:path.
@@ -1374,6 +1459,7 @@ rclone --dry-run --min-size 100M delete remote:path
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1398,7 +1484,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone size
Prints the total size and number of objects in remote:path.
-Synopsis
+Synopsis
Counts objects in the path and calculates the total size. Prints the result to standard output.
By default the output is in human-readable format, but shows values in both human-readable format as well as the raw numbers (global option --human-readable is not considered). Use option --json to format output as JSON instead.
Recurses by default, use --max-depth 1 to stop the recursion.
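For example (illustrative path):
rclone size remote:path --json
prints the object count and total size as a single JSON object.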
@@ -1418,6 +1504,7 @@ rclone --dry-run --min-size 100M delete remote:path
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1442,7 +1529,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone version
Show the version number.
-Synopsis
+Synopsis
Show the rclone version number, the go version, the build target OS and architecture, the runtime OS and kernel version and bitness, build tags and the type of executable (static or dynamic).
For example:
$ rclone version
@@ -1467,9 +1554,11 @@ latest: 1.42 (released 2018-06-16)
upgrade: https://downloads.rclone.org/v1.42
beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
+If you supply the --deps flag then rclone will print a list of all the packages it depends on and their versions along with some other information about the build.
rclone version [flags]
Options
--check Check for new version
+ --deps Show the Go dependencies
-h, --help help for version
See the global flags page for global options not listed here.
See Also
@@ -1478,7 +1567,7 @@ beta: 1.42.0.5 (released 2018-06-17)
rclone cleanup
Clean up the remote if possible.
-Synopsis
+Synopsis
Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes.
rclone cleanup remote:path [flags]
Options
@@ -1495,7 +1584,7 @@ beta: 1.42.0.5 (released 2018-06-17)
rclone dedupe
Interactively find duplicate filenames and delete/rename them.
-Synopsis
+Synopsis
By default dedupe interactively finds files with duplicate names and offers to delete all but one or rename them to be different. This is known as deduping by name.
Deduping by name is only useful with a small group of backends (e.g. Google Drive, Opendrive) that can have duplicate file names. It can be run on wrapping backends (e.g. crypt) if they wrap a backend which supports duplicate file names.
However if --by-hash is passed in then dedupe will find files with duplicate hashes instead which will work on any backend which supports at least one hash. This can be used to find files with duplicate content. This is known as deduping by hash.
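For example, to find files with duplicate content rather than duplicate names (illustrative remote):
rclone dedupe --by-hash remote:path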
@@ -1579,7 +1668,7 @@ two-3.txt: renamed from: two.txt
rclone about
Get quota information from the remote.
-Synopsis
+Synopsis
Prints quota information about a remote to standard output. The output is typically used, free, quota and trash contents.
E.g. Typical output from rclone about remote: is:
Total: 17 GiB
@@ -1625,11 +1714,12 @@ Other: 8849156022
rclone authorize
Remote authorization.
-Synopsis
+Synopsis
Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.
+The command requires 1-3 arguments:
+- fs name (e.g., "drive", "s3", etc.)
+- Either a base64 encoded JSON blob obtained from a previous rclone config session
+- Or a client_id and client_secret pair obtained from the remote service
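+For example (the remote type and credentials are placeholders):
+rclone authorize "drive"
+rclone authorize "drive" "client_id" "client_secret"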
Use --auth-no-open-browser to prevent rclone from automatically opening the auth link in the default browser.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
-rclone authorize [flags]
+rclone authorize <fs name> [base64_json_blob | client_id client_secret] [flags]
Options
--auth-no-open-browser Do not automatically open auth link in default browser
-h, --help help for authorize
@@ -1641,7 +1731,7 @@ Other: 8849156022
rclone backend
Run a backend-specific command.
-Synopsis
+Synopsis
This runs a backend-specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions.
You can discover what commands a backend implements by using
rclone backend help remote:
@@ -1670,7 +1760,7 @@ rclone backend help <backendname>
rclone bisync
Perform bidirectional synchronization between two paths.
-Synopsis
+Synopsis
Perform bidirectional synchronization between two paths.
Bisync provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it will:
- list files on Path1 and Path2, and check for changes on each side. Changes include New, Newer, Older, and Deleted files.
- Propagate changes on Path1 to Path2, and vice-versa.
Bisync is in beta and is considered an advanced command, so use with care. Make sure you have read and understood the entire manual (especially the Limitations section) before using, or data loss can result. Questions can be asked in the Rclone Forum.
@@ -1727,6 +1817,7 @@ rclone backend help <backendname>
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -1753,6 +1844,7 @@ rclone backend help <backendname>
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1773,7 +1865,7 @@ rclone backend help <backendname>
rclone cat
Concatenates any files and sends them to stdout.
-Synopsis
+Synopsis
Sends any files to standard output.
You can use it like this to output a single file
rclone cat remote:path/to/file
@@ -1809,6 +1901,7 @@ rclone backend help <backendname>
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1833,7 +1926,7 @@ rclone backend help <backendname>
rclone checksum
Checks the files in the destination against a SUM file.
-Synopsis
+Synopsis
Checks that hashsums of destination files match the SUM file. It compares hashes (MD5, SHA1, etc) and logs a report of files which don't match. It doesn't alter the file system.
The sumfile is treated as the source and the dst:path is treated as the destination for the purposes of the output.
If you supply the --download flag, it will download the data from the remote and calculate the content hash on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
@@ -1871,6 +1964,7 @@ rclone backend help <backendname>
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1895,7 +1989,7 @@ rclone backend help <backendname>
rclone completion
Output completion script for a given shell.
-Synopsis
+Synopsis
Generates a shell completion script for rclone. Run with --help to list the supported shells.
Options
-h, --help help for completion
@@ -1910,7 +2004,7 @@ rclone backend help <backendname>
rclone completion bash
Output bash completion script for rclone.
-Synopsis
+Synopsis
Generates a bash shell autocompletion script for rclone.
By default, when run without any arguments,
rclone completion bash
@@ -1933,7 +2027,7 @@ rclone backend help <backendname>
rclone completion fish
Output fish completion script for rclone.
-Synopsis
+Synopsis
Generates a fish autocompletion script for rclone.
This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.
sudo rclone completion fish
@@ -1951,7 +2045,7 @@ rclone backend help <backendname>
rclone completion powershell
Output powershell completion script for rclone.
-Synopsis
+Synopsis
Generate the autocompletion script for powershell.
To load completions in your current shell session:
rclone completion powershell | Out-String | Invoke-Expression
@@ -1967,7 +2061,7 @@ rclone backend help <backendname>
rclone completion zsh
Output zsh completion script for rclone.
-Synopsis
+Synopsis
Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.
sudo rclone completion zsh
@@ -1985,7 +2079,7 @@ rclone backend help <backendname>
rclone config create
Create a new remote with name, type and options.
-Synopsis
+Synopsis
Create a new remote of name with type and options. The options should be passed in pairs of key value or as key=value.
For example, to make a swift remote of name myremote using auto config you would do:
rclone config create myremote swift env_auth true
@@ -2045,6 +2139,7 @@ rclone config create myremote swift env_auth=true
--continue Continue the configuration process with an answer
-h, --help help for create
--no-obscure Force any passwords not to be obscured
+ --no-output Don't provide any output
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
@@ -2066,7 +2161,7 @@ rclone config create myremote swift env_auth=true
rclone config disconnect
Disconnects user from remote
-Synopsis
+Synopsis
This disconnects the remote: passed in to the cloud storage system.
This normally means revoking the oauth token.
To reconnect use "rclone config reconnect".
@@ -2090,7 +2185,7 @@ rclone config create myremote swift env_auth=true
rclone config edit
Enter an interactive configuration session.
-Synopsis
+Synopsis
Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
rclone config edit [flags]
Options
@@ -2102,7 +2197,7 @@ rclone config create myremote swift env_auth=true
rclone config encryption
set, remove and check the encryption for the config file
-Synopsis
+Synopsis
This command sets, clears and checks the encryption for the config file using the subcommands below.
Options
-h, --help help for encryption
@@ -2116,7 +2211,7 @@ rclone config create myremote swift env_auth=true
rclone config encryption check
Check that the config file is encrypted
-Synopsis
+Synopsis
This checks the config file is encrypted and that you can decrypt it.
It will attempt to decrypt the config using the password you supply.
If decryption fails it will return a non-zero exit code if using --password-command, otherwise it will prompt again for the password.
@@ -2131,7 +2226,7 @@ rclone config create myremote swift env_auth=true
rclone config encryption remove
Remove the config file encryption password
-Synopsis
+Synopsis
Remove the config file encryption password
This removes the config file encryption, returning it to unencrypted.
If --password-command is in use, this will be called to supply the old config password.
@@ -2146,12 +2241,12 @@ rclone config create myremote swift env_auth=true
rclone config encryption set
Set or change the config file encryption password
-Synopsis
+Synopsis
This command sets or changes the config file encryption password.
If there was no config password set then it sets a new one, otherwise it changes the existing config password.
Note that if you are changing an encryption password using --password-command then this will be called once to decrypt the config using the old password and then again to read the new password to re-encrypt the config.
-When --password-command is called to change the password then the environment variable RCLONE_PASSWORD_CHANGE=1 will be set. So if changing passwords programatically you can use the environment variable to distinguish which password you must supply.
-Alternatively you can remove the password first (with rclone config encryption remove), then set it again with this command which may be easier if you don't mind the unecrypted config file being on the disk briefly.
+When --password-command is called to change the password then the environment variable RCLONE_PASSWORD_CHANGE=1 will be set. So if changing passwords programmatically you can use the environment variable to distinguish which password you must supply.
+Alternatively you can remove the password first (with rclone config encryption remove), then set it again with this command which may be easier if you don't mind the unencrypted config file being on the disk briefly.
Options
-h, --help help for set
@@ -2172,7 +2267,7 @@ rclone config create myremote swift env_auth=true
rclone config password
Update password in an existing remote.
-Synopsis
+Synopsis
Update an existing remote's password. The password should be passed in pairs of key password or as key=password. The password should be passed in clear text (unobscured).
For example, to set the password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
@@ -2208,7 +2303,7 @@ rclone config password myremote fieldname=mypassword
rclone config reconnect
Re-authenticates user with remote.
-Synopsis
+Synopsis
This reconnects remote: passed in to the cloud storage system.
To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
@@ -2222,7 +2317,7 @@ rclone config password myremote fieldname=mypassword
rclone config redacted
Print redacted (decrypted) config file, or the redacted config for a single remote.
-Synopsis
+Synopsis
This prints a redacted copy of the config file, either the whole config file or for a given remote.
The config file will be redacted by replacing all passwords and other sensitive info with XXX.
This makes the config file suitable for posting online for support.
@@ -2257,7 +2352,7 @@ rclone config password myremote fieldname=mypassword
rclone config update
Update options in an existing remote.
-Synopsis
+Synopsis
Update an existing remote's options. The options should be passed in pairs of key value or as key=value.
For example, to update the env_auth field of a remote of name myremote you would do:
rclone config update myremote env_auth true
@@ -2317,6 +2412,7 @@ rclone config update myremote env_auth=true
--continue Continue the configuration process with an answer
-h, --help help for update
--no-obscure Force any passwords not to be obscured
+ --no-output Don't provide any output
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
@@ -2328,7 +2424,7 @@ rclone config update myremote env_auth=true
rclone config userinfo
Prints info about logged in user of remote.
-Synopsis
+Synopsis
This prints the details of the person logged in to the cloud storage system.
rclone config userinfo remote: [flags]
Options
@@ -2339,25 +2435,325 @@ rclone config update myremote env_auth=true
-rclone copyto
-Copy files from source to dest, skipping identical files.
-Synopsis
-If source:path is a file or directory then it copies it to a file or directory named dest:path.
-This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.
-So
-rclone copyto src dst
-where src and dst are rclone paths, either remote:path or /path/to/local or C:.
-This will:
-if src is file
- copy it to dst, overwriting an existing file if it exists
-if src is directory
- copy it to dst, overwriting existing files if they exist
- see copy command for full details
-This doesn't transfer files that are identical on src and dst, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
-Note: Use the -P/--progress flag to view real-time transfer statistics
-rclone copyto source:path dest:path [flags]
+rclone convmv
+Convert file and directory names in place.
+Synopsis
+convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations.
+--name-transform prefix=XXXX | Prepends XXXX to the file name.
+--name-transform suffix=XXXX | Appends XXXX to the file name after the extension.
+--name-transform suffix_keep_extension=XXXX | Appends XXXX to the file name while preserving the original file extension.
+--name-transform trimprefix=XXXX | Removes XXXX if it appears at the start of the file name.
+--name-transform trimsuffix=XXXX | Removes XXXX if it appears at the end of the file name.
+--name-transform regex=/pattern/replacement/ | Applies a regex-based transformation.
+--name-transform replace=old:new | Replaces occurrences of old with new in the file name.
+--name-transform date={YYYYMMDD} | Appends or prefixes the specified date format.
+--name-transform truncate=N | Truncates the file name to a maximum of N characters.
+--name-transform base64encode | Encodes the file name in Base64.
+--name-transform base64decode | Decodes a Base64-encoded file name.
+--name-transform encoder=ENCODING | Converts the file name to the specified encoding (e.g., ISO-8859-1, Windows-1252, Macintosh).
+--name-transform decoder=ENCODING | Decodes the file name from the specified encoding.
+--name-transform charmap=MAP | Applies a character mapping transformation.
+--name-transform lowercase | Converts the file name to lowercase.
+--name-transform uppercase | Converts the file name to UPPERCASE.
+--name-transform titlecase | Converts the file name to Title Case.
+--name-transform ascii | Strips non-ASCII characters.
+--name-transform url | URL-encodes the file name.
+--name-transform nfc | Converts the file name to NFC Unicode normalization form.
+--name-transform nfd | Converts the file name to NFD Unicode normalization form.
+--name-transform nfkc | Converts the file name to NFKC Unicode normalization form.
+--name-transform nfkd | Converts the file name to NFKD Unicode normalization form.
+--name-transform command=/path/to/my/program | Executes an external program to transform file names.
+
+Conversion modes:
+none
+nfc
+nfd
+nfkc
+nfkd
+replace
+prefix
+suffix
+suffix_keep_extension
+trimprefix
+trimsuffix
+index
+date
+truncate
+base64encode
+base64decode
+encoder
+decoder
+ISO-8859-1
+Windows-1252
+Macintosh
+charmap
+lowercase
+uppercase
+titlecase
+ascii
+url
+regex
+command
+Char maps:
+
+IBM-Code-Page-037
+IBM-Code-Page-437
+IBM-Code-Page-850
+IBM-Code-Page-852
+IBM-Code-Page-855
+Windows-Code-Page-858
+IBM-Code-Page-860
+IBM-Code-Page-862
+IBM-Code-Page-863
+IBM-Code-Page-865
+IBM-Code-Page-866
+IBM-Code-Page-1047
+IBM-Code-Page-1140
+ISO-8859-1
+ISO-8859-2
+ISO-8859-3
+ISO-8859-4
+ISO-8859-5
+ISO-8859-6
+ISO-8859-7
+ISO-8859-8
+ISO-8859-9
+ISO-8859-10
+ISO-8859-13
+ISO-8859-14
+ISO-8859-15
+ISO-8859-16
+KOI8-R
+KOI8-U
+Macintosh
+Macintosh-Cyrillic
+Windows-874
+Windows-1250
+Windows-1251
+Windows-1252
+Windows-1253
+Windows-1254
+Windows-1255
+Windows-1256
+Windows-1257
+Windows-1258
+X-User-Defined
+Encoding masks:
+Asterisk
+ BackQuote
+ BackSlash
+ Colon
+ CrLf
+ Ctl
+ Del
+ Dollar
+ Dot
+ DoubleQuote
+ Exclamation
+ Hash
+ InvalidUtf8
+ LeftCrLfHtVt
+ LeftPeriod
+ LeftSpace
+ LeftTilde
+ LtGt
+ None
+ Percent
+ Pipe
+ Question
+ Raw
+ RightCrLfHtVt
+ RightPeriod
+ RightSpace
+ Semicolon
+ SingleQuote
+ Slash
+ SquareBracket
+Examples:
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
+// Output: STORIES/THE QUICK BROWN FOX!.TXT
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow"
+// Output: stories/The Slow Brown Turtle!.txt
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode"
+// Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0
+rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode"
+// Output: stories/The Quick Brown Fox!.txt
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc"
+// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd"
+// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
+rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii"
+// Output: stories/The Quick Brown Fox!.txt
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt"
+// Output: stories/The Quick Brown Fox!
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_"
+// Output: OLD_stories/OLD_The Quick Brown Fox!.txt
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7"
+// Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt
+rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket"
+// Output: stories/The Quick Brown Fox: A Memoir [draft].txt
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21"
+// Output: stories/The Quick Brown 🦊 Fox
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
+// Output: stories/The Quick Brown Fox!.txt
+rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
+// Output: stories/The Quick Brown Fox!-20250617
+rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
+// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
+// Output: ababababababab/ababab ababababab ababababab ababab!abababab
+Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
+The --name-transform flag is also available in sync, copy, and move.
+Files vs Directories
+By default --name-transform will only apply to file names. This means only the leaf file name will be transformed. However some of the transforms would be better applied to the whole path or just directories. To choose which part of the file path is affected, some tags can be added to the --name-transform:
+
+file | Only transform the leaf name of files (DEFAULT)
+dir | Only transform name of directories - these may appear anywhere in the path
+all | Transform the entire path for files and directories
+
+This is used by adding the tag into the transform name like this: --name-transform file,prefix=ABC or --name-transform dir,prefix=DEF.
+For some conversions using all is more likely to be useful, for example --name-transform all,nfc
+Note that --name-transform may not add path separators / to the name. This will cause an error.
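+For example, to lowercase leaf file names while normalizing directory names to NFC (an illustrative combination):
+rclone convmv remote:path --name-transform "file,lowercase" --name-transform "dir,nfc"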
+Ordering and Conflicts
+
+- Transformations will be applied in the order specified by the user.
+  - If the file tag is in use (the default) then only the leaf name of files will be transformed.
+  - If the dir tag is in use then directories anywhere in the path will be transformed.
+  - If the all tag is in use then directories and files anywhere in the path will be transformed.
+  - Each transformation will be run one path segment at a time.
+  - If a transformation adds a / or ends up with an empty path segment then that will be an error.
+- It is up to the user to put the transformations in a sensible order.
+  - Conflicting transformations, such as prefix followed by trimprefix or nfc followed by nfd, are possible.
+  - Instead of enforcing mutual exclusivity, transformations are applied in sequence as specified by the user, allowing for intentional use cases (e.g., trimming one prefix before adding another).
+  - Users should be aware that certain combinations may lead to unexpected results and should verify transformations using --dry-run before execution.
+
+Race Conditions and Non-Deterministic Behavior
+Some transformations, such as replace=old:new, may introduce conflicts where multiple source files map to the same destination name. This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these.
+
+- If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic.
+- Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.
+
+To minimize risks, users should:
+
+- Carefully review transformations that may introduce conflicts.
+- Use --dry-run to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
+- Avoid transformations that cause multiple distinct source files to map to the same destination name.
+- Consider disabling concurrency with --transfers=1 if necessary.
+- Certain transformations (e.g. prefix) will have a multiplying effect every time they are used. Avoid these when using bisync.
+
+
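+For example, a rename that could produce conflicts can be previewed first (illustrative transform):
+rclone convmv remote:path --name-transform "file,replace=draft:final" --dry-run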
+rclone convmv dest:path --name-transform XXX [flags]
Options
- -h, --help help for copyto
+ --create-empty-src-dirs Create empty source dirs on destination after move
+ --delete-empty-src-dirs Delete empty source dirs after move
+ -h, --help help for convmv
Options shared with other commands are described next. See the global flags page for global options not listed here.
Copy Options
Flags for anything which can copy a file
@@ -2383,6 +2779,7 @@ if src is directory
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -2409,6 +2806,7 @@ if src is directory
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2431,15 +2829,110 @@ if src is directory
- rclone - Show help for rclone commands, flags and backends.
+rclone copyto
+Copy files from source to dest, skipping identical files.
+Synopsis
+If source:path is a file or directory then it copies it to a file or directory named dest:path.
+This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.
+So
+rclone copyto src dst
+where src and dst are rclone paths, either remote:path or /path/to/local or C:.
+This will:
+if src is file
+ copy it to dst, overwriting an existing file if it exists
+if src is directory
+ copy it to dst, overwriting existing files if they exist
+ see copy command for full details
+This doesn't transfer files that are identical on src and dst, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
+If you are looking to copy just a byte range of a file, please see 'rclone cat --offset X --count Y'
+Note: Use the -P/--progress flag to view real-time transfer statistics
+rclone copyto source:path dest:path [flags]
+Options
+ -h, --help help for copyto
+Options shared with other commands are described next. See the global flags page for global options not listed here.
+Copy Options
+Flags for anything which can copy a file
+ --check-first Do all the checks before starting transfers
+ -c, --checksum Check for changes with size & checksum (if available, or fallback to size only)
+ --compare-dest stringArray Include additional server-side paths during comparison
+ --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
+ --ignore-case-sync Ignore case when synchronizing
+ --ignore-checksum Skip post copy check of checksums
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use modtime or checksum
+ -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
+ --immutable Do not modify files, fail if existing files have been modified
+ --inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --max-backlog int Maximum number of objects in sync or check backlog (default 10000)
+ --max-duration Duration Maximum duration rclone will transfer data for (default 0s)
+ --max-transfer SizeSuffix Maximum size of data to transfer (default off)
+ -M, --metadata If set, preserve metadata when copying objects
+ --modify-window Duration Max time diff to be considered the same (default 1ns)
+ --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
+ --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
+ --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
+ --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
+ --no-check-dest Don't check the destination, copy regardless
+ --no-traverse Don't traverse destination file system on copy
+ --no-update-dir-modtime Don't update directory modification times
+ --no-update-modtime Don't update destination modtime if files identical
+ --order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
+ --refresh-times Refresh the modtime of remote files
+ --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
+ --size-only Skip based on size only, not modtime or checksum
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
+ -u, --update Skip files that are newer on the destination
+Important Options
+Important flags useful for most commands
+ -n, --dry-run Do a trial run with no permanent changes
+ -i, --interactive Enable interactive mode
+ -v, --verbose count Print lots more stuff (repeat for more)
+Filter Options
+Flags for filtering directory listings
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+Listing Options
+Flags for listing directories
+ --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+ --fast-list Use recursive list if available; uses more memory but fewer transactions
+See Also
+
+- rclone - Show help for rclone commands, flags and backends.
+
rclone copyurl
Copy the contents of the URL supplied to dest:path.
-Synopsis
+Synopsis
Download a URL's content and copy it to the destination without saving it in temporary storage.
Setting --auto-filename
will attempt to automatically determine the filename from the URL (after any redirections) and used in the destination path.
-With --auto-filename-header
in addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. With --print-filename
in addition, the resulting file name will be printed.
+With --header-filename
in addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. With --print-filename
in addition, the resulting file name will be printed.
Setting --no-clobber
will prevent overwriting a file on the destination if there is one with the same name.
Setting --stdout
or making the output file name -
will cause the output to be written to standard output.
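For example, to download to a name taken from the URL while printing the chosen name, or to stream the content to another program (the URL and destination here are illustrative):
rclone copyurl --auto-filename --print-filename https://example.com/releases/latest.zip dest:path
rclone copyurl --stdout https://example.com/robots.txt | head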
-Troublshooting
+Troubleshooting
If you can't get rclone copyurl
to work then here are some things you can try:
--disable-http2
rclone will use HTTP2 if available - try disabling it
@@ -2449,7 +2942,7 @@ if src is directory
- Make sure the site works with
curl
directly
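For instance, a failing transfer can be retried with HTTP2 disabled and verbose logging to see what the server returns (URL and destination illustrative):
rclone copyurl --disable-http2 -vv https://example.com/file dest:path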
rclone copyurl https://example.com dest:path [flags]
-Options
+Options
-a, --auto-filename Get the file name from the URL and use it for destination file path
--header-filename Get the file name from the Content-Disposition header
-h, --help help for copyurl
@@ -2457,18 +2950,18 @@ if src is directory
-p, --print-filename Print the resulting name from --auto-filename
--stdout Write the output to stdout rather than a file
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Important Options
+Important Options
Important flags useful for most commands
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone cryptcheck
Cryptcheck checks the integrity of an encrypted remote.
-Synopsis
+Synopsis
Checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the encrypted remote.
For it to work the underlying remote of the cryptedremote must support some kind of checksum.
It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.
@@ -2489,7 +2982,7 @@ if src is directory
The default number of parallel checks is 8. See the --checkers=N option for more information.
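For example, to check a local directory against an encrypted remote and write a combined report of the results (the path, the crypt remote name secret: and the report file are illustrative):
rclone cryptcheck /path/to/files secret: --combined report.txt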
rclone cryptcheck remote:path cryptedremote:path [flags]
-Options
+Options
--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--error string Report all files with errors (hashing or reading) to this file
@@ -2502,154 +2995,6 @@ if src is directory
Check Options
Flags used for check commands
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
-Filter Options
-Flags for filtering directory listings
- --delete-excluded Delete files on dest excluded from sync
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
- --exclude-if-present stringArray Exclude directories if filename is present
- --files-from stringArray Read list of source-file names from file (use - to read from stdin)
- --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
- -f, --filter stringArray Add a file filtering rule
- --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
- --ignore-case Ignore case in filters (case insensitive)
- --include stringArray Include files matching pattern
- --include-from stringArray Read file include patterns from file (use - to read from stdin)
- --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-depth int If set limits the recursion depth to this (default -1)
- --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
- --metadata-exclude stringArray Exclude metadatas matching pattern
- --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
- --metadata-filter stringArray Add a metadata filtering rule
- --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
- --metadata-include stringArray Include metadatas matching pattern
- --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
- --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-Listing Options
-Flags for listing directories
- --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
- --fast-list Use recursive list if available; uses more memory but fewer transactions
-See Also
-
-- rclone - Show help for rclone commands, flags and backends.
-
-rclone cryptdecode
-Cryptdecode returns unencrypted file names.
-Synopsis
-Returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
-If you supply the --reverse
flag, it will return encrypted file names.
-use it like this
-rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
-
-rclone cryptdecode --reverse encryptedremote: filename1 filename2
-Another way to accomplish this is by using the rclone backend encode
(or decode
) command. See the documentation on the crypt overlay for more info.
-rclone cryptdecode encryptedremote: encryptedfilename [flags]
-Options
- -h, --help help for cryptdecode
- --reverse Reverse cryptdecode, encrypts filenames
-See the global flags page for global options not listed here.
-See Also
-
-- rclone - Show help for rclone commands, flags and backends.
-
-rclone deletefile
-Remove a single file from remote.
-Synopsis
-Remove a single file from remote. Unlike delete
it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.
-rclone deletefile remote:path [flags]
-Options
- -h, --help help for deletefile
-Options shared with other commands are described next. See the global flags page for global options not listed here.
-Important Options
-Important flags useful for most commands
- -n, --dry-run Do a trial run with no permanent changes
- -i, --interactive Enable interactive mode
- -v, --verbose count Print lots more stuff (repeat for more)
-See Also
-
-- rclone - Show help for rclone commands, flags and backends.
-
-rclone gendocs
-Output markdown docs for rclone to the directory supplied.
-Synopsis
-This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
-rclone gendocs output_directory [flags]
-Options
- -h, --help help for gendocs
-See the global flags page for global options not listed here.
-See Also
-
-- rclone - Show help for rclone commands, flags and backends.
-
-rclone gitannex
-Speaks with git-annex over stdin/stdout.
-Synopsis
-Rclone's gitannex
subcommand enables git-annex to store and retrieve content from an rclone remote. It is meant to be run by git-annex, not directly by users.
-Installation on Linux
-
-Skip this step if your version of git-annex is 10.20240430 or newer. Otherwise, you must create a symlink somewhere on your PATH with a particular name. This symlink helps git-annex tell rclone it wants to run the "gitannex" subcommand.
-# Create the helper symlink in "$HOME/bin".
-ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin"
-
-# Verify the new symlink is on your PATH.
-which git-annex-remote-rclone-builtin
-Add a new remote to your git-annex repo. This new remote will connect git-annex with the rclone gitannex
subcommand.
-Start by asking git-annex to describe the remote's available configuration parameters.
-# If you skipped step 1:
-git annex initremote MyRemote type=rclone --whatelse
-
-# If you created a symlink in step 1:
-git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse
-
-NOTE: If you're porting an existing git-annex-remote-rclone remote to use rclone gitannex
, you can probably reuse the configuration parameters verbatim without renaming them. Check parameter synonyms with --whatelse
as shown above.
-
-The following example creates a new git-annex remote named "MyRemote" that will use the rclone remote named "SomeRcloneRemote". That rclone remote must be one configured in your rclone.conf file, which can be located with rclone config file
.
-git annex initremote MyRemote \
- type=external \
- externaltype=rclone-builtin \
- encryption=none \
- rcloneremotename=SomeRcloneRemote \
- rcloneprefix=git-annex-content \
- rclonelayout=nodir
-Before you trust this command with your precious data, be sure to test the remote. This command is very new and has not been tested on many rclone backends. Caveat emptor!
-git annex testremote MyRemote
-
-Happy annexing!
-rclone gitannex [flags]
-Options
- -h, --help help for gitannex
-See the global flags page for global options not listed here.
-See Also
-
-- rclone - Show help for rclone commands, flags and backends.
-
-rclone hashsum
-Produces a hashsum file for all the objects in the path.
-Synopsis
-Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.
-By default, the hash is requested from the remote. If the hash is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote.
-For the MD5 and SHA1 algorithms there are also dedicated commands, md5sum and sha1sum.
-This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
-Run without a hash to see the list of all supported hashes, e.g.
-$ rclone hashsum
-Supported hashes are:
- * md5
- * sha1
- * whirlpool
- * crc32
- * sha256
-Then
-$ rclone hashsum MD5 remote:path
-Note that hash names are case insensitive and values are output in lower case.
-rclone hashsum [<hash> remote:path] [flags]
-Options
- --base64 Output base64 encoded hashsum
- -C, --checkfile string Validate hashes against a given SUM file instead of printing them
- --download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
- -h, --help help for hashsum
- --output-file string Output hashsums to a file rather than the terminal
-Options shared with other commands are described next. See the global flags page for global options not listed here.
Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
@@ -2660,6 +3005,7 @@ Supported hashes are:
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2678,13 +3024,163 @@ Supported hashes are:
Flags for listing directories
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
+See Also
+
+- rclone - Show help for rclone commands, flags and backends.
+
+rclone cryptdecode
+Cryptdecode returns unencrypted file names.
+Synopsis
+Returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
+If you supply the --reverse
flag, it will return encrypted file names.
+Use it like this:
+rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
+
+rclone cryptdecode --reverse encryptedremote: filename1 filename2
+Another way to accomplish this is by using the rclone backend encode
(or decode
) command. See the documentation on the crypt overlay for more info.
+rclone cryptdecode encryptedremote: encryptedfilename [flags]
+Options
+ -h, --help help for cryptdecode
+ --reverse Reverse cryptdecode, encrypts filenames
+See the global flags page for global options not listed here.
+See Also
+
+- rclone - Show help for rclone commands, flags and backends.
+
+rclone deletefile
+Remove a single file from remote.
+Synopsis
+Remove a single file from remote. Unlike delete
it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.
+rclone deletefile remote:path [flags]
+Options
+ -h, --help help for deletefile
+Options shared with other commands are described next. See the global flags page for global options not listed here.
+Important Options
+Important flags useful for most commands
+ -n, --dry-run Do a trial run with no permanent changes
+ -i, --interactive Enable interactive mode
+ -v, --verbose count Print lots more stuff (repeat for more)
+See Also
+
+- rclone - Show help for rclone commands, flags and backends.
+
+rclone gendocs
+Output markdown docs for rclone to the directory supplied.
+Synopsis
+This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
+rclone gendocs output_directory [flags]
+Options
+ -h, --help help for gendocs
+See the global flags page for global options not listed here.
+See Also
+
+- rclone - Show help for rclone commands, flags and backends.
+
+rclone gitannex
+Speaks with git-annex over stdin/stdout.
+Synopsis
+Rclone's gitannex
subcommand enables git-annex to store and retrieve content from an rclone remote. It is meant to be run by git-annex, not directly by users.
+Installation on Linux
+
+Skip this step if your version of git-annex is 10.20240430 or newer. Otherwise, you must create a symlink somewhere on your PATH with a particular name. This symlink helps git-annex tell rclone it wants to run the "gitannex" subcommand.
+# Create the helper symlink in "$HOME/bin".
+ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin"
+
+# Verify the new symlink is on your PATH.
+which git-annex-remote-rclone-builtin
+Add a new remote to your git-annex repo. This new remote will connect git-annex with the rclone gitannex
subcommand.
+Start by asking git-annex to describe the remote's available configuration parameters.
+# If you skipped step 1:
+git annex initremote MyRemote type=rclone --whatelse
+
+# If you created a symlink in step 1:
+git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse
+
+NOTE: If you're porting an existing git-annex-remote-rclone remote to use rclone gitannex
, you can probably reuse the configuration parameters verbatim without renaming them. Check parameter synonyms with --whatelse
as shown above.
+
+The following example creates a new git-annex remote named "MyRemote" that will use the rclone remote named "SomeRcloneRemote". That rclone remote must be one configured in your rclone.conf file, which can be located with rclone config file
.
+git annex initremote MyRemote \
+ type=external \
+ externaltype=rclone-builtin \
+ encryption=none \
+ rcloneremotename=SomeRcloneRemote \
+ rcloneprefix=git-annex-content \
+ rclonelayout=nodir
+Before you trust this command with your precious data, be sure to test the remote. This command is very new and has not been tested on many rclone backends. Caveat emptor!
+git annex testremote MyRemote
+
+Happy annexing!
+rclone gitannex [flags]
+Options
+ -h, --help help for gitannex
+See the global flags page for global options not listed here.
See Also
- rclone - Show help for rclone commands, flags and backends.
+rclone hashsum
+Produces a hashsum file for all the objects in the path.
+Synopsis
+Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.
+By default, the hash is requested from the remote. If the hash is not supported by the remote, no hash will be returned. With the --download flag, the file will be downloaded from the remote and hashed locally, enabling any hash for any remote.
+For the MD5 and SHA1 algorithms there are also dedicated commands, md5sum and sha1sum.
+This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
+Run without a hash to see the list of all supported hashes, e.g.
+$ rclone hashsum
+Supported hashes are:
+ * md5
+ * sha1
+ * whirlpool
+ * crc32
+ * sha256
+ * sha512
+Then
+$ rclone hashsum MD5 remote:path
+Note that hash names are case insensitive and values are output in lower case.
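+For example, to write SHA-256 sums to a file and later verify the objects against it (the SUM file name is illustrative):
+rclone hashsum sha256 remote:path --output-file SHA256SUMS
+rclone hashsum sha256 remote:path --checkfile SHA256SUMS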
+rclone hashsum [<hash> remote:path] [flags]
+Options
+ --base64 Output base64 encoded hashsum
+ -C, --checkfile string Validate hashes against a given SUM file instead of printing them
+ --download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
+ -h, --help help for hashsum
+ --output-file string Output hashsums to a file rather than the terminal
+Options shared with other commands are described next. See the global flags page for global options not listed here.
+Filter Options
+Flags for filtering directory listings
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+Listing Options
+Flags for listing directories
+ --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+ --fast-list Use recursive list if available; uses more memory but fewer transactions
+See Also
+
+- rclone - Show help for rclone commands, flags and backends.
+
rclone link
Generate public link to file/folder.
-Synopsis
+Synopsis
Create, retrieve or remove a public link to the given file or folder.
rclone link remote:path/to/file
rclone link remote:path/to/folder/
@@ -2694,23 +3190,23 @@ rclone link --expire 1d remote:path/to/file
Use the --unlink flag to remove existing public links to the file or folder. Note that not all backends support the "--unlink" flag - those that don't will just ignore it.
If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by default be created with the least constraints – e.g. no expiry, no password protection, accessible without account.
rclone link remote:path [flags]
-Options
+Options
--expire Duration The amount of time that the link will be valid (default off)
-h, --help help for link
--unlink Remove existing public link to file/folder
See the global flags page for global options not listed here.
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone listremotes
List all the remotes in the config file and defined in environment variables.
-Synopsis
+Synopsis
Lists all the available remotes from the config file, or the remotes matching an optional filter.
Prints the result in human-readable format by default, and as a simple list of remote names, or if used with flag --long
a tabular format including the remote names, types and descriptions. Using flag --json
produces machine-readable output instead, which always includes all attributes - including the source (file or environment).
The result can be filtered by a filter argument which applies to all attributes, and/or by filter flags specific to each attribute. The values must be specified according to regular rclone filtering pattern syntax.
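For example, to show all remotes in tabular form, or only those of a given type as JSON (the type here is illustrative):
rclone listremotes --long
rclone listremotes --json --type s3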
rclone listremotes [<filter>] [flags]
-Options
+Options
--description string Filter remotes by description
-h, --help help for listremotes
--json Format output as JSON
@@ -2720,13 +3216,13 @@ rclone link --expire 1d remote:path/to/file
--source string Filter remotes by source, e.g. 'file' or 'environment'
--type string Filter remotes by type
See the global flags page for global options not listed here.
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone lsf
List directories and objects in remote:path formatted for parsing.
-Synopsis
+Synopsis
List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.
Eg
$ rclone lsf swift:bucket
@@ -2805,7 +3301,7 @@ rclone lsf remote:path --format pt --time-format max
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use -R
to make them recurse.
Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone lsf remote:path [flags]
-Options
+Options
--absolute Put a leading / in front of path names
--csv Output in CSV format
-d, --dir-slash Append a slash to directory names (default true)
@@ -2818,7 +3314,7 @@ rclone lsf remote:path --format pt --time-format max
-s, --separator string Separator for the items in the format (default ";")
-t, --time-format string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -2828,6 +3324,7 @@ rclone lsf remote:path --format pt --time-format max
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2842,17 +3339,17 @@ rclone lsf remote:path --format pt --time-format max
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-Listing Options
+Listing Options
Flags for listing directories
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone lsjson
List directories and objects in the path in JSON format.
-Synopsis
+Synopsis
List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this:
{
@@ -2910,7 +3407,7 @@ rclone lsf remote:path --format pt --time-format max
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use -R
to make them recurse.
Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone lsjson remote:path [flags]
-Options
+Options
--dirs-only Show only directories in the listing
--encrypted Show the encrypted names
--files-only Show only files in the listing
@@ -2924,7 +3421,7 @@ rclone lsf remote:path --format pt --time-format max
-R, --recursive Recurse into the listing
--stat Just return the info for the pointed to file
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -2934,6 +3431,7 @@ rclone lsf remote:path --format pt --time-format max
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2948,17 +3446,17 @@ rclone lsf remote:path --format pt --time-format max
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-Listing Options
+Listing Options
Flags for listing directories
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone mount
Mount the remote as file system on a mountpoint.
-Synopsis
+Synopsis
Rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the --daemon
flag to force background mode. On Windows you can run mount in foreground only; the flag is ignored.
@@ -3131,7 +3629,7 @@ WantedBy=multi-user.target
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size
or --vfs-cache-min-free-size
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-size
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+If using --vfs-cache-max-size
or --vfs-cache-min-free-space
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-space
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
The --vfs-cache-max-age
will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
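As a sketch, a mount using the full cache mode together with the quota and location flags discussed above might look like this (the mountpoint, sizes and cache directory are illustrative):
rclone mount remote: /mnt/remote --vfs-cache-mode full --vfs-cache-max-size 10G --vfs-cache-max-age 24h --cache-dir /var/cache/rclone-vfs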
--vfs-cache-mode off
@@ -3242,8 +3740,31 @@ WantedBy=multi-user.target
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+
+If you use the --vfs-metadata-extension
flag you can get the VFS to expose files which contain the metadata as a JSON blob. These files will not appear in the directory listing, but can be stat
+-ed and opened, and once they have been, they will appear in directory listings until the directory cache expires.
+Note that some backends won't create metadata unless you pass in the --metadata
flag.
+For example, using rclone mount
with --metadata --vfs-metadata-extension .metadata
we get
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+If the file has no metadata it will be returned as {}
and if there is an error reading the metadata the error will be returned as {"error":"error string"}
.
rclone mount remote:path /path/to/mountpoint [flags]
-Options
+Options
--allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
--allow-other Allow access to other users (not supported on Windows)
--allow-root Allow access to root user (not supported on Windows)
@@ -3286,6 +3807,7 @@ WantedBy=multi-user.target
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -3298,7 +3820,7 @@ WantedBy=multi-user.target
--volname string Set the volume name (supported on Windows and OSX only)
--write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -3308,6 +3830,7 @@ WantedBy=multi-user.target
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3322,13 +3845,13 @@ WantedBy=multi-user.target
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone moveto
Move file or directory from source to dest.
-Synopsis
+Synopsis
If source:path is a file or directory then it moves it to a file or directory named dest:path.
This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
So
@@ -3344,10 +3867,10 @@ if src is directory
Important: Since this can cause data loss, test first with the --dry-run
or the --interactive
/-i
flag.
Note: Use the -P
/--progress
flag to view real-time transfer statistics.
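For example, to rename a single file on the remote while watching progress (names illustrative):
rclone moveto remote:path/old-name.txt remote:path/new-name.txt -P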
rclone moveto source:path dest:path [flags]
-Options
+Options
-h, --help help for moveto
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Copy Options
+Copy Options
Flags for anything which can copy a file
--check-first Do all the checks before starting transfers
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only)
@@ -3371,6 +3894,7 @@ if src is directory
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -3382,12 +3906,12 @@ if src is directory
--size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
-Important Options
+Important Options
Important flags useful for most commands
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -3397,6 +3921,7 @@ if src is directory
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3411,17 +3936,17 @@ if src is directory
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-Listing Options
+Listing Options
Flags for listing directories
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone ncdu
Explore a remote with a text based user interface.
-Synopsis
+Synopsis
This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".
To make the user interface it first scans the entire remote given and builds an in-memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
You can interact with the user interface using key presses, press '?' to toggle the help on and off. The supported keys are:
@@ -3460,10 +3985,10 @@ if src is directory
Note that it might take some time to delete big files/directories. The UI won't respond in the meantime since the deletion is done synchronously.
For a non-interactive listing of the remote, see the tree command. To just get the total size of the remote you can also use the size command.
rclone ncdu remote:path [flags]
-Options
+Options
-h, --help help for ncdu
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -3473,6 +3998,7 @@ if src is directory
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3487,17 +4013,17 @@ if src is directory
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-Listing Options
+Listing Options
Flags for listing directories
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone nfsmount
Mount the remote as file system on a mountpoint.
-Synopsis
+Synopsis
Rclone nfsmount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the --daemon
flag to force background mode. On Windows you can run mount in foreground only; the flag is ignored.
@@ -3670,7 +4196,7 @@ WantedBy=multi-user.target
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size
or --vfs-cache-min-free-size
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-size
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+If using --vfs-cache-max-size
or --vfs-cache-min-free-space
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-space
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
The --vfs-cache-max-age
will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
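As an illustrative sketch, an nfsmount running in background mode with write caching (the mountpoint is an assumption):
rclone nfsmount remote: /mnt/remote --vfs-cache-mode writes --daemon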
--vfs-cache-mode off
@@ -3781,8 +4307,31 @@ WantedBy=multi-user.target
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+
+If you use the --vfs-metadata-extension
flag you can get the VFS to expose files which contain the metadata as a JSON blob. These files will not appear in the directory listing, but can be stat
+-ed and opened, and once they have been, they will appear in directory listings until the directory cache expires.
+Note that some backends won't create metadata unless you pass in the --metadata
flag.
+For example, using rclone mount
with --metadata --vfs-metadata-extension .metadata
we get
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+If the file has no metadata it will be returned as {}
and if there is an error reading the metadata the error will be returned as {"error":"error string"}
.
rclone nfsmount remote:path /path/to/mountpoint [flags]
-Options
+Options
--addr string IPaddress:Port or :Port to bind server to
--allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
--allow-other Allow access to other users (not supported on Windows)
@@ -3830,6 +4379,7 @@ WantedBy=multi-user.target
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -3842,7 +4392,7 @@ WantedBy=multi-user.target
--volname string Set the volume name (supported on Windows and OSX only)
--write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -3852,6 +4402,7 @@ WantedBy=multi-user.target
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3866,13 +4417,13 @@ WantedBy=multi-user.target
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone obscure
Obscure password for use in the rclone config file.
-Synopsis
+Synopsis
In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident.
Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.
This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline.
@@ -3880,16 +4431,16 @@ WantedBy=multi-user.target
If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.
If you want to encrypt the config file then please use config file encryption - see rclone config for more info.
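For example, to obscure a password read from STDIN rather than from the command line:
echo "secretpassword" | rclone obscure -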
rclone obscure password [flags]
-Options
+Options
-h, --help help for obscure
See the global flags page for global options not listed here.
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone rc
Run a command against a running rclone.
-Synopsis
+Synopsis
This runs a command against a running rclone. Use the --url
flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port".
A username and password can be passed in with --user
and --pass
.
Note that --rc-addr
, --rc-user
, --rc-pass
will be read also for --url
, --user
, --pass
.
@@ -3913,7 +4464,7 @@ rclone rc --unix-socket /tmp/my.socket core/stats
rclone rc --loopback operations/about fs=/
Use rclone rc
to see a list of all possible commands.
rclone rc commands parameter [flags]
-Options
+Options
-a, --arg stringArray Argument placed in the "arg" array
-h, --help help for rc
--json string Input JSON - use instead of key=value args
@@ -3925,13 +4476,13 @@ rclone rc --unix-socket /tmp/my.socket core/stats
--url string URL to connect to rclone remote control (default "http://localhost:5572/")
--user string Username to use to rclone remote control
See the global flags page for global options not listed here.
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone rcat
Copies standard input to file on remote.
-Synopsis
+Synopsis
Reads from standard input (stdin) and copies it to a single remote file.
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file
@@ -3941,22 +4492,22 @@ ffmpeg - | rclone rcat remote:path/to/file
--size
should be the exact size of the input stream in bytes. If the size of the stream differs from the --size
passed in, the transfer will likely fail.
Note that the upload cannot be retried because the data is not stored. If the backend supports multipart uploading then individual chunks can be retried. If you need to transfer a lot of data, you may be better off caching it locally and then rclone move
it to the destination which can use retries.
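For example, to stream a file of known size with a size hint so the backend can preallocate (the file name is illustrative; 104857600 bytes is 100 MiB):
cat 100M.bin | rclone rcat --size 104857600 remote:path/to/100M.bin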
rclone rcat remote:path [flags]
-Options
+Options
-h, --help help for rcat
--size int File size hint to preallocate (default -1)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Important Options
+Important Options
Important flags useful for most commands
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone rcd
Run rclone listening to remote control commands only.
-Synopsis
+Synopsis
This runs rclone so that it only listens to remote control commands.
This is useful if you are controlling rclone via the rc API.
If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.
@@ -4092,7 +4643,8 @@ ffmpeg - | rclone rcat remote:path/to/file
Authentication
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --rc-user
and --rc-pass
flags.
-If no static users are configured by either of the above methods, and client certificates are required by the --client-ca
flag passed to the server, the client certificate common name will be considered as the username.
+Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with --rc-user-from-header
(e.g., --rc-user-from-header=x-remote-user
). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+If either of the above authentication methods is not configured and client certificates are required by the --client-ca
flag passed to the server, the client certificate common name will be considered as the username.
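+For example, when running behind a trusted reverse proxy that sets the x-remote-user header, bind rclone to localhost so only the proxy can reach it (the address is illustrative):
+rclone rcd --rc-user-from-header=x-remote-user --rc-addr 127.0.0.1:5572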
Use --rc-htpasswd /path/to/htpasswd
to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
@@ -4102,7 +4654,7 @@ htpasswd -B htpasswd anotherUser
Use --rc-realm
to set the authentication realm.
Use --rc-salt
to change the password hashing salt from the default.
rclone rcd <path to files to serve>* [flags]
-Options
+Options
-h, --help help for rcd
Options shared with other commands are described next. See the global flags page for global options not listed here.
RC Options
@@ -4131,40 +4683,41 @@ htpasswd -B htpasswd anotherUser
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
+ --rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
--rc-web-gui-no-open-browser Don't open the browser automatically
--rc-web-gui-update Check and update to latest version of web gui
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone rmdirs
Remove empty directories under the path.
-Synopsis
+Synopsis
+This recursively removes any empty directories (including directories that only contain empty directories) that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root
flag.
Use the rmdir command to delete just the empty directory given by path, without recursing.
This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete command will delete files but leave the directory structure (unless used with option --rmdirs
).
This will delete --checkers
directories concurrently so if you have thousands of empty directories consider increasing this number.
To delete a path and any objects in it, use the purge command.
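For example, to preview which empty directories would be removed while keeping the root:
rclone rmdirs remote:path --leave-root --dry-run -v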
rclone rmdirs remote:path [flags]
-Options
+Options
-h, --help help for rmdirs
--leave-root Do not remove root directory if empty
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Important Options
+Important Options
Important flags useful for most commands
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone selfupdate
Update the rclone binary.
-Synopsis
+Synopsis
This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and cryptographically signed signature; see the release signing docs for details.
If used without flags (or with implied --stable
flag), this command will install the latest stable release. However, some issues may be fixed (or features added) only in the latest beta release. In such cases you should run the command with the --beta
flag, i.e. rclone selfupdate --beta
. You can check in advance what version would be installed by adding the --check
flag, then repeat the command without it when you are satisfied.
Sometimes the rclone team may recommend a specific beta or stable rclone release to troubleshoot your issue or add a bleeding edge feature. The --version VER
flag, if given, will update to that specific version instead of the latest one. If you omit the micro version from VER
(for example 1.53
), the latest matching micro version will be used.
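For example, to see what would be installed and then pin to a specific version (1.53 is an illustrative version number):
rclone selfupdate --check
rclone selfupdate --version 1.53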
@@ -4174,7 +4727,7 @@ htpasswd -B htpasswd anotherUser
Note: Windows forbids deletion of a currently running executable so this command will rename the old executable to 'rclone.old.exe' upon success.
Please note that this command was not available before rclone version 1.55. If it fails for you with the message unknown command "selfupdate"
then you will need to update manually following the install instructions located at https://rclone.org/install/
rclone selfupdate [flags]
-Options
+Options
--beta Install beta release
--check Check for latest release, do not download
-h, --help help for selfupdate
@@ -4183,21 +4736,21 @@ htpasswd -B htpasswd anotherUser
--stable Install stable release (this is the default)
--version string Install the given rclone version (default: latest)
See the global flags page for global options not listed here.
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone serve
Serve a remote over a protocol.
-Synopsis
+Synopsis
Serve a remote over a given protocol. Requires the use of a subcommand to specify the protocol, e.g.
rclone serve http remote:
Each subcommand has its own options which you can see in their help.
rclone serve <protocol> [opts] <remote> [flags]
-Options
+Options
-h, --help help for serve
See the global flags page for global options not listed here.
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
- rclone serve dlna - Serve remote:path over DLNA
@@ -4212,7 +4765,7 @@ htpasswd -B htpasswd anotherUser
rclone serve dlna
Serve remote:path over DLNA
-Synopsis
+Synopsis
Run a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.
Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.
Rclone will add external subtitle files (.srt) to videos if they have the same filename as the video file itself (except the extension), either in the same directory as the video, or in a "Subs" subdirectory.
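For example (remote:media is a placeholder), to serve a media library on the default port with verbose logging:
rclone serve dlna remote:media --addr :7879 -v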
@@ -4254,7 +4807,7 @@ htpasswd -B htpasswd anotherUser
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size
or --vfs-cache-min-free-size
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-size
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+If using --vfs-cache-max-size
or --vfs-cache-min-free-space
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-space
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
The --vfs-cache-max-age
will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed, the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
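For example, a sketch of giving two overlapping mounts their own cache hierarchies (all paths are illustrative):
rclone mount remote: /mnt/a --vfs-cache-mode full --cache-dir ~/.cache/rclone-a
rclone mount remote: /mnt/b --vfs-cache-mode full --cache-dir ~/.cache/rclone-b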
--vfs-cache-mode off
@@ -4365,8 +4918,31 @@ htpasswd -B htpasswd anotherUser
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+
+If you use the --vfs-metadata-extension
flag you can get the VFS to expose files which contain the metadata as a JSON blob. These files will not appear in the directory listing, but can be stat
-ed and opened, and once they have been, they will appear in directory listings until the directory cache expires.
+Note that some backends won't create metadata unless you pass in the --metadata
flag.
+For example, using rclone mount
with --metadata --vfs-metadata-extension .metadata
we get
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+If the file has no metadata it will be returned as {}
and if there is an error reading the metadata the error will be returned as {"error":"error string"}
.
rclone serve dlna remote:path [flags]
-Options
+Options
--addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
--announce-interval Duration The interval between SSDP announcements (default 12m0s)
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
@@ -4395,6 +4971,7 @@ htpasswd -B htpasswd anotherUser
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -4405,7 +4982,7 @@ htpasswd -B htpasswd anotherUser
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -4415,6 +4992,7 @@ htpasswd -B htpasswd anotherUser
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -4429,13 +5007,13 @@ htpasswd -B htpasswd anotherUser
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-See Also
+See Also
rclone serve docker
Serve any remote on docker's volume plugin API.
-Synopsis
+Synopsis
This command implements the Docker volume plugin API, allowing docker to use rclone as a data storage mechanism for various cloud providers. rclone provides a docker volume plugin based on it.
To create a docker plugin, you must create a Unix or TCP socket that Docker will look for when you use the plugin; rclone then listens for commands from the docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example:
sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
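To check that the socket started above is responding, you can perform Docker's standard plugin activation handshake by hand (the port matches the --socket-addr flag above; a working server should reply with a capability list containing VolumeDriver):
curl -s -H "Content-Type: application/json" -X POST -d '{}' http://localhost:8787/Plugin.Activate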
@@ -4477,7 +5055,7 @@ htpasswd -B htpasswd anotherUser
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size
or --vfs-cache-min-free-size
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-size
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+If using --vfs-cache-max-size
or --vfs-cache-min-free-space
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-space
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
The --vfs-cache-max-age
will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed, the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
--vfs-cache-mode off
@@ -4588,8 +5166,31 @@ htpasswd -B htpasswd anotherUser
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+
+If you use the --vfs-metadata-extension
flag you can get the VFS to expose files which contain the metadata as a JSON blob. These files will not appear in the directory listing, but can be stat
-ed and opened, and once they have been, they will appear in directory listings until the directory cache expires.
+Note that some backends won't create metadata unless you pass in the --metadata
flag.
+For example, using rclone mount
with --metadata --vfs-metadata-extension .metadata
we get
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+If the file has no metadata it will be returned as {}
and if there is an error reading the metadata the error will be returned as {"error":"error string"}
.
rclone serve docker [flags]
-Options
+Options
--allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
--allow-other Allow access to other users (not supported on Windows)
--allow-root Allow access to root user (not supported on Windows)
@@ -4637,6 +5238,7 @@ htpasswd -B htpasswd anotherUser
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -4649,7 +5251,7 @@ htpasswd -B htpasswd anotherUser
--volname string Set the volume name (supported on Windows and OSX only)
--write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -4659,6 +5261,7 @@ htpasswd -B htpasswd anotherUser
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -4673,13 +5276,13 @@ htpasswd -B htpasswd anotherUser
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-See Also
+See Also
rclone serve ftp
Serve remote:path over FTP.
-Synopsis
+Synopsis
Run a basic FTP server to serve a remote over the FTP protocol. This can be viewed with an FTP client or you can make a remote of type FTP to read and write to it.
Server options
Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
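For example (the username and password shown are placeholders), to listen on all interfaces on the default port with a single static user:
rclone serve ftp remote:path --addr :2121 --user alice --pass secret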
@@ -4721,7 +5324,7 @@ htpasswd -B htpasswd anotherUser
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size
or --vfs-cache-min-free-size
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-size
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+If using --vfs-cache-max-size
or --vfs-cache-min-free-space
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-space
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
The --vfs-cache-max-age
will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed, the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
--vfs-cache-mode off
@@ -4832,6 +5435,29 @@ htpasswd -B htpasswd anotherUser
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+
+If you use the --vfs-metadata-extension
flag you can get the VFS to expose files which contain the metadata as a JSON blob. These files will not appear in the directory listing, but can be stat
-ed and opened, and once they have been, they will appear in directory listings until the directory cache expires.
+Note that some backends won't create metadata unless you pass in the --metadata
flag.
+For example, using rclone mount
with --metadata --vfs-metadata-extension .metadata
we get
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+If the file has no metadata it will be returned as {}
and if there is an error reading the metadata the error will be returned as {"error":"error string"}
.
Auth Proxy
If you supply the parameter --auth-proxy /path/to/program
then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
PLEASE NOTE: --auth-proxy
and --authorized-keys
cannot be used together, if --auth-proxy
is set the authorized keys option will be ignored.
@@ -4863,7 +5489,7 @@ htpasswd -B htpasswd anotherUser
Note that an internal cache is keyed on user
so only use that for configuration; don't use pass
or public_key
. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
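As a minimal sketch of this protocol (the /srv/ftp layout and the jq dependency are assumptions, not part of rclone), an auth proxy program could look like this:
#!/bin/sh
# Read the JSON request rclone sends on stdin, e.g. {"user":"alice","pass":"secret"}.
input=$(cat)
user=$(printf '%s' "$input" | jq -r .user)
# A real proxy must verify the credentials and exit non-zero to reject the login.
# Answer with the backend config to use: here, each user gets their own local directory.
printf '{"type": "local", "_root": "/srv/ftp/%s"}\n' "$user"
Save it somewhere, make it executable, and pass its path with --auth-proxy.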
rclone serve ftp remote:path [flags]
-Options
+Options
--addr string IPaddress:Port or :Port to bind server to (default "localhost:2121")
--auth-proxy string A program to use to create the backend from the auth
--cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -4895,6 +5521,7 @@ htpasswd -B htpasswd anotherUser
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -4905,7 +5532,7 @@ htpasswd -B htpasswd anotherUser
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -4915,6 +5542,7 @@ htpasswd -B htpasswd anotherUser
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -4929,13 +5557,13 @@ htpasswd -B htpasswd anotherUser
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-See Also
+See Also
rclone serve http
Serve the remote over HTTP.
-Synopsis
+Synopsis
Run a basic web server to serve a remote over HTTP. This can be viewed in a web browser or you can make a remote of type http to read from it.
You can use the filter flags (e.g. --include
, --exclude
) to control what is served.
The server will log errors. Use -v
to see access logs.
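For example (the address and prefix are illustrative), to serve the remote under a URL prefix so files appear beneath http://localhost:8080/files/:
rclone serve http remote:path --addr :8080 --baseurl /files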
@@ -5071,7 +5699,8 @@ htpasswd -B htpasswd anotherUser
Authentication
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user
and --pass
flags.
-If no static users are configured by either of the above methods, and client certificates are required by the --client-ca
flag passed to the server, the client certificate common name will be considered as the username.
+Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with --user-from-header
(e.g., --user-from-header=x-remote-user
). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+If either of the above authentication methods is not configured and client certificates are required by the --client-ca
flag passed to the server, the client certificate common name will be considered as the username.
Use --htpasswd /path/to/htpasswd
to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
@@ -5114,7 +5743,7 @@ htpasswd -B htpasswd anotherUser
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size
or --vfs-cache-min-free-size
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-size
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+If using --vfs-cache-max-size
or --vfs-cache-min-free-space
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-space
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
The --vfs-cache-max-age
will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed, the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
--vfs-cache-mode off
@@ -5225,6 +5854,29 @@ htpasswd -B htpasswd anotherUser
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+
+If you use the --vfs-metadata-extension
flag you can get the VFS to expose files which contain the metadata as a JSON blob. These files will not appear in the directory listing, but can be stat
-ed and opened, and once they have been, they will appear in directory listings until the directory cache expires.
+Note that some backends won't create metadata unless you pass in the --metadata
flag.
+For example, using rclone mount
with --metadata --vfs-metadata-extension .metadata
we get
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+If the file has no metadata it will be returned as {}
and if there is an error reading the metadata the error will be returned as {"error":"error string"}
.
Auth Proxy
If you supply the parameter --auth-proxy /path/to/program
then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
PLEASE NOTE: --auth-proxy
and --authorized-keys
cannot be used together, if --auth-proxy
is set the authorized keys option will be ignored.
@@ -5256,20 +5908,20 @@ htpasswd -B htpasswd anotherUser
Note that an internal cache is keyed on user
so only use that for configuration; don't use pass
or public_key
. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve http remote:path [flags]
-Options
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+Options
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -5287,6 +5939,7 @@ htpasswd -B htpasswd anotherUser
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -5297,6 +5950,7 @@ htpasswd -B htpasswd anotherUser
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5307,7 +5961,7 @@ htpasswd -B htpasswd anotherUser
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -5317,6 +5971,7 @@ htpasswd -B htpasswd anotherUser
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -5331,13 +5986,13 @@ htpasswd -B htpasswd anotherUser
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-See Also
+See Also
rclone serve nfs
Serve the remote as an NFS mount
-Synopsis
+Synopsis
Create an NFS server that serves the given remote over the network.
This implements an NFSv3 server to serve any rclone remote via NFS.
The primary purpose for this command is to enable the mount command on recent macOS versions where installing FUSE is very cumbersome.
@@ -5346,13 +6001,14 @@ htpasswd -B htpasswd anotherUser
Modifying files through the NFS protocol requires VFS caching. Usually you will need to specify --vfs-cache-mode
in order to be able to write to the mountpoint (full
is recommended). If you don't specify VFS cache mode, the mount will be read-only.
--nfs-cache-type
controls the type of the NFS handle cache. By default this is memory
where new handles will be randomly allocated when needed. These are stored in memory. If the server is restarted the handle cache will be lost and connected NFS clients will get stale handle errors.
--nfs-cache-type disk
uses an on disk NFS handle cache. Rclone hashes the path of the object and stores it in a file named after the hash. These hashes are stored on disk in the directory controlled by --cache-dir
or the exact directory may be specified with --nfs-cache-dir
. Using this means that the NFS server can be restarted at will without affecting the connected clients.
---nfs-cache-type symlink
is similar to --nfs-cache-type disk
in that it uses an on disk cache, but the cache entries are held as symlinks. Rclone will use the handle of the underlying file as the NFS handle which improves performance. This sort of cache can't be backed up and restored as the underlying handles will change. This is Linux only. It requres running rclone as root or with CAP_DAC_READ_SEARCH
. You can run rclone with this extra permission by doing this to the rclone binary sudo setcap cap_dac_read_search+ep /path/to/rclone
.
+--nfs-cache-type symlink
is similar to --nfs-cache-type disk
in that it uses an on disk cache, but the cache entries are held as symlinks. Rclone will use the handle of the underlying file as the NFS handle which improves performance. This sort of cache can't be backed up and restored as the underlying handles will change. This is Linux only. It requires running rclone as root or with CAP_DAC_READ_SEARCH
. You can run rclone with this extra permission by doing this to the rclone binary sudo setcap cap_dac_read_search+ep /path/to/rclone
.
--nfs-cache-handle-limit
controls the maximum number of cached NFS handles stored by the caching handler. This should not be set too low or you may experience errors when trying to access files. The default is 1000000
, but consider lowering this limit if the server's system resource usage causes problems. This is only used by the memory
type cache.
To serve NFS over the network, use the following command:
rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
This specifies a port that can be used in the mount command. To mount the server under Linux/macOS, use the following command:
mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint
Where $PORT
is the same port number used in the serve nfs
command and $HOSTNAME
is the network address of the machine that serve nfs
was run on.
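Substituting concrete values into the two commands above (port 2049 and hostname myserver are illustrative):
rclone serve nfs remote: --addr 0.0.0.0:2049 --vfs-cache-mode=full
mount -t nfs -o port=2049,mountport=2049,tcp myserver:/ /mnt/rclone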
+If --vfs-metadata-extension
is in use then for the --nfs-cache-type disk
and --nfs-cache-type symlink
the metadata files will have the file handle of their parent file suffixed with 0x00, 0x00, 0x00, 0x01
. This means they can be looked up directly from the parent file handle if desired.
This command is only available on Unix platforms.
VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
@@ -5388,7 +6044,7 @@ htpasswd -B htpasswd anotherUser
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size
or --vfs-cache-min-free-size
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-size
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+If using --vfs-cache-max-size
or --vfs-cache-min-free-space
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-space
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
The --vfs-cache-max-age
will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed, the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
--vfs-cache-mode off
@@ -5499,8 +6155,31 @@ htpasswd -B htpasswd anotherUser
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+
+If you use the --vfs-metadata-extension
flag you can get the VFS to expose files which contain the metadata as a JSON blob. These files will not appear in the directory listing, but can be stat
-ed and opened, and once they have been, they will appear in directory listings until the directory cache expires.
+Note that some backends won't create metadata unless you pass in the --metadata
flag.
+For example, using rclone mount
with --metadata --vfs-metadata-extension .metadata
we get
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+If the file has no metadata it will be returned as {}
and if there is an error reading the metadata the error will be returned as {"error":"error string"}
.
rclone serve nfs remote:path [flags]
-Options
+Options
--addr string IPaddress:Port or :Port to bind server to
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
@@ -5528,6 +6207,7 @@ htpasswd -B htpasswd anotherUser
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5538,7 +6218,7 @@ htpasswd -B htpasswd anotherUser
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -5548,6 +6228,7 @@ htpasswd -B htpasswd anotherUser
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -5562,13 +6243,13 @@ htpasswd -B htpasswd anotherUser
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-See Also
+See Also
rclone serve restic
Serve the remote for restic's REST API.
-Synopsis
+Synopsis
Run a basic web server to serve a remote over restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
Restic is a command-line program for doing backups.
The server will log errors. Use -v to see access logs.
@@ -5629,7 +6310,8 @@ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
This will socket-activate rclone on the first connection to port 8000 over TCP.
Authentication
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user
and --pass
flags.
-If no static users are configured by either of the above methods, and client certificates are required by the --client-ca
flag passed to the server, the client certificate common name will be considered as the username.
+Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with --user-from-header
(e.g., --user-from-header=x-remote-user
). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+If either of the above authentication methods is not configured and client certificates are required by the --client-ca
flag passed to the server, the client certificate common name will be considered as the username.
Use --htpasswd /path/to/htpasswd
to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
@@ -5639,17 +6321,17 @@ htpasswd -B htpasswd anotherUser
Use --realm
to set the authentication realm.
Use --salt
to change the password hashing salt from the default.
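For example (user, password and repository path are placeholders), start the server and point restic at it using HTTP basic auth credentials in the repository URL:
rclone serve restic remote:backups --addr :8080 --user restic --pass secret
export RESTIC_REPOSITORY=rest:http://restic:secret@localhost:8080/
restic init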
rclone serve restic remote:path [flags]
-Options
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+Options
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
@@ -5659,20 +6341,21 @@ htpasswd -B htpasswd anotherUser
--server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--stdio Run an HTTP2 server on stdin/stdout
- --user string User name for authentication
+ --user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
See the global flags page for global options not listed here.
-See Also
+See Also
rclone serve s3
Serve remote:path over s3.
-Synopsis
+Synopsis
serve s3
implements a basic s3 server that serves a remote via s3. This can be viewed with an s3 client, or you can make an s3 type remote to read and write to it with rclone.
serve s3
is considered Experimental so use with care.
The S3 server supports Signature Version 4 authentication. Just use --auth-key accessKey,secretKey
and set the Authorization
header correctly in the request. (See the AWS docs).
--auth-key
can be repeated for multiple auth pairs. If --auth-key
is not provided then serve s3
will allow anonymous access.
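For example (the key pair shown is a placeholder), to require Signature Version 4 credentials:
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path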
-Please note that some clients may require HTTPS endpoints. See the SSL docs for more information.
+Please note that some clients may require HTTPS endpoints. See the SSL docs for more information.
This command uses the VFS directory cache. All the functionality will work with --vfs-cache-mode off
. --vfs-cache-mode full
(or writes
) can be used to cache objects locally to improve performance.
Use --force-path-style=false
if you want to use the bucket name as a part of the hostname (such as mybucket.local).
Use --etag-hash
if you want to change the hash used for the ETag
. Note that using anything other than MD5
(the default) is likely to cause problems for S3 clients which rely on the Etag being the MD5.
@@ -5693,7 +6376,7 @@ endpoint = http://127.0.0.1:8080/
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
-Note that setting disable_multipart_uploads = true
is to work around a bug which will be fixed in due course.
+Note that setting use_multipart_uploads = false
is to work around a bug which will be fixed in due course.
Bugs
When uploading multipart files serve s3
holds all the parts in memory (see #7453). This is a limitation of the library rclone uses for serving S3 and will hopefully be fixed at some point.
Multipart server side copies do not work (see #7454). These take a very long time and eventually fail. The default threshold for multipart server side copies is 5G, which is the maximum it can be, so files above this size will fail to be server side copied.
@@ -5732,7 +6415,8 @@ use_multipart_uploads = false
Authentication
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user
and --pass
flags.
-If no static users are configured by either of the above methods, and client certificates are required by the --client-ca
flag passed to the server, the client certificate common name will be considered as the username.
+Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with --user-from-header
(e.g., --user-from-header=x-remote-user
). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+If either of the above authentication methods is not configured and client certificates are required by the --client-ca
flag passed to the server, the client certificate common name will be considered as the username.
Use --htpasswd /path/to/htpasswd
to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
@@ -5792,7 +6476,7 @@ htpasswd -B htpasswd anotherUser
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size
or --vfs-cache-min-free-size
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-size
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+If using --vfs-cache-max-size
or --vfs-cache-min-free-space
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-space
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
The --vfs-cache-max-age
will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w .
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
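As a sketch, a mount combining the cache limits described above (the sizes and paths are illustrative):
rclone mount remote: /mnt/remote --vfs-cache-mode full --vfs-cache-max-size 10G --vfs-cache-min-free-space 5G --vfs-cache-max-age 24h --cache-dir /var/cache/rclone-a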
--vfs-cache-mode off
@@ -5903,24 +6587,47 @@ htpasswd -B htpasswd anotherUser
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+
+If you use the --vfs-metadata-extension
flag you can get the VFS to expose files which contain the metadata as a JSON blob. These files will not appear in the directory listing, but can be stat
-ed and opened, and once they have been, they will appear in directory listings until the directory cache expires.
+Note that some backends won't create metadata unless you pass in the --metadata
flag.
+For example, using rclone mount
with --metadata --vfs-metadata-extension .metadata
we get
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+If the file has no metadata it will be returned as {}
and if there is an error reading the metadata the error will be returned as {"error":"error string"}
.
rclone serve s3 remote:path [flags]
-Options
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+Options
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
--file-perms FileMode File permissions (default 666)
- --force-path-style If true use path style access if false use virtual hosted style (default true) (default true)
+ --force-path-style If true use path style access if false use virtual hosted style (default true)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for s3
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -5938,6 +6645,7 @@ htpasswd -B htpasswd anotherUser
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -5948,6 +6656,7 @@ htpasswd -B htpasswd anotherUser
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5958,7 +6667,7 @@ htpasswd -B htpasswd anotherUser
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -5968,6 +6677,7 @@ htpasswd -B htpasswd anotherUser
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -5982,13 +6692,13 @@ htpasswd -B htpasswd anotherUser
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-See Also
+See Also
rclone serve sftp
Serve the remote over SFTP.
-Synopsis
+Synopsis
Run an SFTP server to serve a remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.
You can use the filter flags (e.g. --include
, --exclude
) to control what is served.
The server will respond to a small number of shell commands, mainly md5sum, sha1sum and df, which enable it to provide support for checksums and the about feature when accessed from an sftp remote.
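For example, a minimal sketch serving a remote over SFTP on port 2022 (the user name and password are illustrative):
rclone serve sftp remote:path --addr :2022 --user demo --pass secret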
@@ -6041,7 +6751,7 @@ htpasswd -B htpasswd anotherUser
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size
or --vfs-cache-min-free-size
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-size
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+If using --vfs-cache-max-size
or --vfs-cache-min-free-space
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-space
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
The --vfs-cache-max-age
will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w .
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
--vfs-cache-mode off
@@ -6152,6 +6862,29 @@ htpasswd -B htpasswd anotherUser
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+
+If you use the --vfs-metadata-extension
flag you can get the VFS to expose files which contain the metadata as a JSON blob. These files will not appear in the directory listing, but can be stat
-ed and opened, and once they have been, they will appear in directory listings until the directory cache expires.
+Note that some backends won't create metadata unless you pass in the --metadata
flag.
+For example, using rclone mount
with --metadata --vfs-metadata-extension .metadata
we get
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+If the file has no metadata it will be returned as {}
and if there is an error reading the metadata the error will be returned as {"error":"error string"}
.
Auth Proxy
If you supply the parameter --auth-proxy /path/to/program
then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
PLEASE NOTE: --auth-proxy
and --authorized-keys
cannot be used together, if --auth-proxy
is set the authorized keys option will be ignored.
@@ -6183,7 +6916,7 @@ htpasswd -B htpasswd anotherUser
Note that an internal cache is keyed on user
so only use that for configuration, don't use pass
or public_key
. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve sftp remote:path [flags]
-Options
+Options
--addr string IPaddress:Port or :Port to bind server to (default "localhost:2022")
--auth-proxy string A program to use to create the backend from the auth
--authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys")
@@ -6215,6 +6948,7 @@ htpasswd -B htpasswd anotherUser
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -6225,7 +6959,7 @@ htpasswd -B htpasswd anotherUser
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -6235,6 +6969,7 @@ htpasswd -B htpasswd anotherUser
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -6249,13 +6984,13 @@ htpasswd -B htpasswd anotherUser
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-See Also
+See Also
rclone serve webdav
Serve remote:path over WebDAV.
-Synopsis
+Synopsis
Run a basic WebDAV server to serve a remote over HTTP via the WebDAV protocol. This can be viewed with a WebDAV client, through a web browser, or you can make a remote of type WebDAV to read and write it.
WebDAV options
--etag-hash
@@ -6403,7 +7138,8 @@ htpasswd -B htpasswd anotherUser
Authentication
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user
and --pass
flags.
-If no static users are configured by either of the above methods, and client certificates are required by the --client-ca
flag passed to the server, the client certificate common name will be considered as the username.
+Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with --user-from-header
(e.g., --user-from-header=x-remote-user
). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+If either of the above authentication methods is not configured and client certificates are required by the --client-ca
flag passed to the server, the client certificate common name will be considered as the username.
Use --htpasswd /path/to/htpasswd
to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
@@ -6446,7 +7182,7 @@ htpasswd -B htpasswd anotherUser
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size
or --vfs-cache-min-free-size
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-size
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+If using --vfs-cache-max-size
or --vfs-cache-min-free-space
note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size
or --vfs-cache-min-free-space
is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
The --vfs-cache-max-age
will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w .
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
--vfs-cache-mode off
@@ -6557,6 +7293,29 @@ htpasswd -B htpasswd anotherUser
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+
+If you use the --vfs-metadata-extension
flag you can get the VFS to expose files which contain the metadata as a JSON blob. These files will not appear in the directory listing, but can be stat
-ed and opened, and once they have been, they will appear in directory listings until the directory cache expires.
+Note that some backends won't create metadata unless you pass in the --metadata
flag.
+For example, using rclone mount
with --metadata --vfs-metadata-extension .metadata
we get
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+If the file has no metadata it will be returned as {}
and if there is an error reading the metadata the error will be returned as {"error":"error string"}
.
Auth Proxy
If you supply the parameter --auth-proxy /path/to/program
then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
PLEASE NOTE: --auth-proxy
and --authorized-keys
cannot be used together, if --auth-proxy
is set the authorized keys option will be ignored.
@@ -6588,13 +7347,13 @@ htpasswd -B htpasswd anotherUser
Note that an internal cache is keyed on user
so only use that for configuration, don't use pass
or public_key
. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve webdav remote:path [flags]
-Options
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+Options
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
@@ -6603,7 +7362,7 @@ htpasswd -B htpasswd anotherUser
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -6621,6 +7380,7 @@ htpasswd -B htpasswd anotherUser
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -6631,6 +7391,7 @@ htpasswd -B htpasswd anotherUser
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -6641,7 +7402,7 @@ htpasswd -B htpasswd anotherUser
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -6651,6 +7412,7 @@ htpasswd -B htpasswd anotherUser
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -6665,13 +7427,13 @@ htpasswd -B htpasswd anotherUser
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-See Also
+See Also
rclone settier
Changes storage class/tier of objects in remote.
-Synopsis
+Synopsis
Changes the storage tier or class at the remote if supported. A few cloud storage services provide different storage classes on objects, for example AWS S3 and Glacier; Azure Blob storage - Hot, Cool and Archive; Google Cloud Storage - Regional Storage, Nearline, Coldline etc.
Note that certain tier changes make objects unavailable for access immediately. For example, tiering to archive in Azure Blob storage puts objects in a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, moving S3 objects to Glacier makes them inaccessible.
You can use it to tier single object
@@ -6681,25 +7443,25 @@ htpasswd -B htpasswd anotherUser
Or just provide remote directory and all files in directory will be tiered
rclone settier tier remote:path/dir
rclone settier tier remote:path [flags]
-Options
+Options
-h, --help help for settier
See the global flags page for global options not listed here.
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone test
Run a test command
-Synopsis
+Synopsis
Rclone test is used to run test commands.
Select which test command you want with the subcommand, eg
rclone test memory remote:
Each subcommand has its own options which you can see in their help.
NB Be careful running these commands, they may do strange things so reading their documentation first is recommended.
-Options
+Options
-h, --help help for test
See the global flags page for global options not listed here.
-See Also
+See Also
rclone test changenotify remote: [flags]
-Options
+Options
-h, --help help for changenotify
--poll-interval Duration Time to wait between polling for changes (default 10s)
See the global flags page for global options not listed here.
-See Also
-
-rclone test histogram
-Makes a histogram of file name characters.
-Synopsis
-This command outputs JSON which shows the histogram of characters used in filenames in the remote:path specified.
-The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression.
-rclone test histogram [remote:path] [flags]
-Options
- -h, --help help for histogram
-See the global flags page for global options not listed here.
See Also
+rclone test histogram
+Makes a histogram of file name characters.
+Synopsis
+This command outputs JSON which shows the histogram of characters used in filenames in the remote:path specified.
+The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression.
+rclone test histogram [remote:path] [flags]
+Options
+ -h, --help help for histogram
+See the global flags page for global options not listed here.
+See Also
+
rclone test info
Discovers file name or other limitations for paths.
-Synopsis
+Synopsis
Discovers what filenames and upload methods are possible to write to the paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of go code for each one.
NB this can create undeletable files and other hazards - use with care
rclone test info [remote:path]+ [flags]
-Options
+Options
--all Run all tests
--check-base32768 Check can store all possible base32768 characters
--check-control Check control characters
@@ -6751,14 +7513,14 @@ htpasswd -B htpasswd anotherUser
--upload-wait Duration Wait after writing a file (default 0s)
--write-json string Write results to file
See the global flags page for global options not listed here.
-See Also
+See Also
rclone test makefile
Make files with random contents of the size given
rclone test makefile <size> [<file>]+ [flags]
-Options
+Options
--ascii Fill files with random ASCII printable bytes only
    --chargen Fill files with an ASCII chargen pattern
-h, --help help for makefile
@@ -6767,14 +7529,14 @@ htpasswd -B htpasswd anotherUser
--sparse Make the files sparse (appear to be filled with ASCII 0x00)
--zero Fill files with ASCII 0x00
See the global flags page for global options not listed here.
-See Also
+See Also
rclone test makefiles
Make a random file hierarchy in a directory
rclone test makefiles <dir> [flags]
-Options
+Options
--ascii Fill files with random ASCII printable bytes only
    --chargen Fill files with an ASCII chargen pattern
--files int Number of files to create (default 1000)
@@ -6791,26 +7553,26 @@ htpasswd -B htpasswd anotherUser
--sparse Make the files sparse (appear to be filled with ASCII 0x00)
--zero Fill files with ASCII 0x00
See the global flags page for global options not listed here.
-See Also
+See Also
rclone test memory
Load all the objects at remote:path into memory and report memory stats.
rclone test memory remote:path [flags]
-Options
+Options
-h, --help help for memory
See the global flags page for global options not listed here.
-See Also
+See Also
rclone touch
Create new file or change file modification time.
-Synopsis
+Synopsis
Set the modification time on file(s) as specified by remote:path to have the current time.
If remote:path does not exist then a zero sized file will be created, unless --no-create
or --recursive
is provided.
-If --recursive
is used then recursively sets the modification time on all existing files that is found under the path. Filters are supported, and you can test with the --dry-run
or the --interactive
/-i
flag.
+If --recursive
is used then rclone recursively sets the modification time on all existing files found under the path. Filters are supported, and you can test with the --dry-run
or the --interactive
/-i
flag. This will touch --transfers
files concurrently.
If --timestamp
is used then sets the modification time to that time instead of the current time. Times may be specified as one of:
- 'YYMMDD' - e.g. 17.10.30
@@ -6819,19 +7581,19 @@ htpasswd -B htpasswd anotherUser
Note that value of --timestamp
is in UTC. If you want local time then add the --localtime
flag.
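For example, a sketch using the 'YYMMDD' format shown above with local time (the path is illustrative):
rclone touch remote:dir/file.txt --timestamp 17.10.30 --localtime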
rclone touch remote:path [flags]
-Options
+Options
-h, --help help for touch
--localtime Use localtime for timestamp, not UTC
-C, --no-create Do not create the file if it does not exist (implied with --recursive)
-R, --recursive Recursively touch all files
-t, --timestamp string Use specified time instead of the current time of day
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Important Options
+Important Options
Important flags useful for most commands
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -6841,6 +7603,7 @@ htpasswd -B htpasswd anotherUser
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -6855,17 +7618,17 @@ htpasswd -B htpasswd anotherUser
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-Listing Options
+Listing Options
Flags for listing directories
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
rclone tree
List the contents of the remote in a tree like fashion.
-Synopsis
+Synopsis
Lists the contents of a remote in a similar way to the unix tree command.
For example
$ rclone tree remote:path
@@ -6882,7 +7645,7 @@ htpasswd -B htpasswd anotherUser
The tree command has many options for controlling the listing which are compatible with the tree command, for example you can include file sizes with --size
. Note that not all of them have short options as they conflict with rclone's short options.
For a more interactive navigation of the remote see the ncdu command.
rclone tree remote:path [flags]
-Options
+Options
-a, --all All files are listed (list . files too)
-d, --dirs-only List directories only
--dirsfirst List directories before files (-U disables)
@@ -6903,7 +7666,7 @@ htpasswd -B htpasswd anotherUser
-U, --unsorted Leave files unsorted
--version Sort files alphanumerically by version
Options shared with other commands are described next. See the global flags page for global options not listed here.
-Filter Options
+Filter Options
Flags for filtering directory listings
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -6913,6 +7676,7 @@ htpasswd -B htpasswd anotherUser
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -6927,11 +7691,11 @@ htpasswd -B htpasswd anotherUser
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-Listing Options
+Listing Options
Flags for listing directories
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
-See Also
+See Also
- rclone - Show help for rclone commands, flags and backends.
@@ -7165,9 +7929,10 @@ rclone sync --interactive /path/to/files remote:current-backup
The metadata keys mtime
and content-type
will take precedence if supplied in the metadata over reading the Content-Type
or modification time of the source object.
Hashes are not included in system metadata as there is a well defined way of reading those already.
-Options
+Options
Rclone has a number of options to control its behaviour.
Options that take parameters can have the values passed in two ways, --option=value
or --option value
. However boolean (true/false) options behave slightly differently to the other options in that --boolean
sets the option to true
and the absence of the flag sets it to false
. It is also possible to specify --boolean=false
or --boolean=true
. Note that --boolean false
is not valid - this is parsed as --boolean
and the false
is parsed as an extra command line argument for rclone.
+Options documented to take a stringArray
parameter accept multiple values. To pass more than one value, repeat the option; for example: --include value1 --include value2
.
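As a concrete sketch combining both behaviours (the patterns are illustrative):
rclone copy source:path dest:path --checksum=true --include "*.jpg" --include "*.png"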
Time or duration options
TIME or DURATION options can be specified as a duration string or a time string.
A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Default units are seconds or the following abbreviations are valid:
@@ -7305,9 +8070,10 @@ rclone sync --interactive /path/to/files remote:current-backup
On Windows: %HOME%
if defined, else %USERPROFILE%
, or else %HOMEDRIVE%\%HOMEPATH%
.
On Unix: $HOME
if defined, else by looking up current user in OS-specific user database (e.g. passwd file), or else use the result from shell command cd && pwd
.
-If you run rclone config file
you will see where the default location is for you.
+If you run rclone config file
you will see where the default location is for you. Running rclone config touch
will ensure a configuration file exists, creating an empty one in the default location if there is none.
The fact that an existing file rclone.conf
in the same directory as the rclone executable is always preferred means that it is easy to run in "portable" mode by downloading the rclone executable to a writable directory and then creating an empty file rclone.conf
in the same directory.
-If the location is set to empty string ""
or path to a file with name notfound
, or the os null device represented by value NUL
on Windows and /dev/null
on Unix systems, then rclone will keep the config file in memory only.
+If the location is set to empty string ""
or path to a file with name notfound
, or the os null device represented by value NUL
on Windows and /dev/null
on Unix systems, then rclone will keep the configuration file in memory only.
+You may see a log message "Config file not found - using defaults" if there is no configuration file. This can be suppressed, e.g. if you are using rclone entirely with on-the-fly remotes, by using a memory-only configuration file or by creating an empty configuration file, as described above.
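For example (a minimal sketch; the :local: on-the-fly remote is purely illustrative):
rclone config touch
rclone --config "" lsf :local:/tmp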
The file format is basic INI: Sections of text, led by a [section]
header and followed by key=value
entries on separate lines. In rclone each remote is represented by its own section, where the section name defines the name of the remote. Options are specified as the key=value
entries, where the key is the option name without the --backend-
prefix, in lowercase and with _
instead of -
. E.g. option --mega-hard-delete
corresponds to key hard_delete
. Only backend options can be specified. A special, and required, key type
identifies the storage system, where the value is the internal lowercase name as returned by command rclone help backends
. Comments are indicated by ;
or #
at the beginning of a line.
Example:
[megaremote]
@@ -7459,20 +8225,84 @@ y/n/s/!/q> n
If you supply this flag then rclone will copy symbolic links from any supported backend, and store them as text files, with a .rclonelink
suffix in the destination.
The text file will contain the target of the symbolic link.
The --links
/ -l
flag enables this feature for all supported backends and the VFS. There are individual flags for just enabling it for the VFS --vfs-links
and the local backend --local-links
if required.
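For example, a sketch copying a local directory while storing symlinks as .rclonelink text files:
rclone copy --links /home/user/files remote:files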
+--list-cutoff N
+When syncing, rclone needs to sort directory entries before comparing them. Below this threshold (1,000,000 by default) rclone will store the directory entries in memory; 1,000,000 entries will take approx 1GB of RAM to store. Above this threshold rclone will store directory entries on disk and sort them without using a lot of memory.
+Doing this is slightly less efficient than sorting them in memory and will only work well for the bucket based backends (eg s3, b2, azureblob, swift), but these are the only backends likely to have millions of entries in a directory.
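As a sketch, lowering the threshold so huge bucket listings are sorted on disk (the value is illustrative):
rclone sync --list-cutoff 100000 s3:bigbucket remote:backup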
--log-file=FILE
Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v
flag. See the Logging section for more info.
If FILE exists then rclone will append to it.
Note that if you are using the logrotate
program to manage rclone's logs, then you should use the copytruncate
option as rclone doesn't have a signal to rotate logs.
-Comma separated list of log format options. Accepted options are date
, time
, microseconds
, pid
, longfile
, shortfile
, UTC
. Any other keywords will be silently ignored. pid
will tag log messages with process identifier which useful with rclone mount --daemon
. Other accepted options are explained in the go documentation. The default log format is "date
,time
".
+Comma separated list of log format options. The accepted options are:
+
+date
- Add a date in the format YYYY/MM/DD to the log.
+time
- Add a time to the log in format HH:MM:SS.
+microseconds
- Add microseconds to the time in format HH:MM:SS.SSSSSS.
+UTC
- Make the logs in UTC not localtime.
+longfile
- Adds the source file and line number of the log statement.
+shortfile
- Adds the final element of the source file path and line number of the log statement.
+pid
- Add the process ID to the log - useful with rclone mount --daemon
.
+nolevel
- Don't add the level to the log.
+json
- Equivalent to adding --use-json-log
+
+They are added to the log line in the order above.
+The default log format is "date,time"
.
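For example, a sketch adding microseconds and the process ID to each log line (the file names are illustrative):
rclone copy source:path dest:path --log-file rclone.log --log-format date,time,microseconds,pid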
--log-level LEVEL
This sets the log level for rclone. The default log level is NOTICE
.
DEBUG
is equivalent to -vv
. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.
INFO
is equivalent to -v
. It outputs information about each transfer and prints stats once a minute by default.
NOTICE
is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.
ERROR
is equivalent to -q
. It only outputs error messages.
+--windows-event-log LEVEL
+If this is configured (the default is OFF
) then logs of this level and above will be logged to the Windows event log in addition to the normal logs. These will be logged in JSON format as described below regardless of what format the main logs are configured for.
+The Windows event log only has 3 levels of severity Info
, Warning
and Error
. If enabled we map rclone levels like this.
+
+Error
← ERROR
(and above)
+Warning
← WARNING
(note that this level is defined but not currently used).
+Info
← NOTICE
, INFO
and DEBUG
.
+
+Rclone will declare its log source as "rclone" if it has enough permissions to create the registry key needed. If not, then logs will appear as "Application". You can run rclone version --windows-event-log DEBUG
once as administrator to create the registry key in advance.
+Note that the --windows-event-log
level must be greater (more severe) than or equal to the --log-level
. For example to log DEBUG to a log file but ERRORs to the event log you would use
+--log-file rclone.log --log-level DEBUG --windows-event-log ERROR
+This option is only supported on Windows platforms.
--use-json-log
-This switches the log format to JSON for rclone. The fields of json log are level, msg, source, time.
+This switches the log format to JSON for rclone. The fields of JSON log are level
, msg
, source
, time
. The JSON logs will be printed on a single line, but are shown expanded here for clarity.
+{
+ "time": "2025-05-13T17:30:51.036237518+01:00",
+ "level": "debug",
+ "msg": "4 go routines active\n",
+ "source": "cmd/cmd.go:298"
+}
+Completed data transfer logs will have extra size
information. Logs which are about a particular object will have object
and objectType
fields also.
+{
+ "time": "2025-05-13T17:38:05.540846352+01:00",
+ "level": "info",
+ "msg": "Copied (new) to: file2.txt",
+ "size": 6,
+ "object": "file.txt",
+ "objectType": "*local.Object",
+ "source": "operations/copy.go:368"
+}
+Stats logs will contain a stats
field which is the same as returned from the rc call core/stats.
+{
+ "time": "2025-05-13T17:38:05.540912847+01:00",
+ "level": "info",
+ "msg": "...text version of the stats...",
+ "stats": {
+ "bytes": 6,
+ "checks": 0,
+ "deletedDirs": 0,
+ "deletes": 0,
+ "elapsedTime": 0.000904825,
+ ...truncated for clarity...
+ "totalBytes": 6,
+ "totalChecks": 0,
+ "totalTransfers": 1,
+ "transferTime": 0.000882794,
+ "transfers": 1
+ },
+ "source": "accounting/stats.go:569"
+}
--low-level-retries NUMBER
This controls the number of low level retries rclone does.
A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v
flag.
@@ -7484,6 +8314,19 @@ y/n/s/!/q> n
Setting this large allows rclone to calculate how many files are pending more accurately, give a more accurate estimated finish time and make --order-by
work more accurately.
Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.
Setting this to a negative number will make the backlog as large as possible.
+--max-buffer-memory=SIZE
+If set, don't allocate more than SIZE amount of memory as buffers. If not set or set to 0
or off
this will not limit the amount of memory in use.
+This includes memory used by buffers created by the --buffer-size flag and buffers used by multi-thread transfers.
+Most multi-thread transfers do not take additional memory, but some do depending on the backend (eg the s3 backend for uploads). This means there is a tension between setting --transfers as high as possible and memory use.
+Setting --max-buffer-memory
allows the buffer memory to be controlled so that it doesn't overwhelm the machine and allows --transfers
to be set large.
+--max-connections=N
+This sets the maximum number of concurrent calls to the backend API. It may not map 1:1 to TCP or HTTP connections depending on the backend in use and the use of HTTP1 vs HTTP2.
+When downloading files, backends only limit the initial opening of the stream. The bulk data download is not counted as a connection. This means that the --max-connections
flag won't limit the total number of downloads.
+Note that it is possible to cause deadlocks with this setting so it should be used with care.
+If you are doing a sync or copy then make sure --max-connections
is one more than the sum of --transfers
and --checkers
.
+If you use --check-first
then --max-connections
just needs to be one more than the maximum of --checkers
and --transfers
.
+So for --max-connections 3
you'd use --checkers 2 --transfers 2 --check-first
or --checkers 1 --transfers 1
.
+Setting this flag can be useful for backends which do multipart uploads to limit the number of simultaneous parts being transferred.
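Putting that arithmetic into a concrete sketch:
rclone sync source:path dest:path --transfers 4 --checkers 8 --max-connections 13 # 13 = 4 + 8 + 1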
--max-delete=N
This tells rclone not to delete more than N files. If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.
--max-delete-size=SIZE
@@ -7530,55 +8373,55 @@ y/n/s/!/q> n
ID
is the source ID
of the object if known.
Metadata
is the backend specific metadata as described in the backend docs.
-{
- "SrcFs": "gdrive:",
- "SrcFsType": "drive",
- "DstFs": "newdrive:user",
- "DstFsType": "onedrive",
- "Remote": "test.txt",
- "Size": 6,
- "MimeType": "text/plain; charset=utf-8",
- "ModTime": "2022-10-11T17:53:10.286745272+01:00",
- "IsDir": false,
- "ID": "xyz",
- "Metadata": {
- "btime": "2022-10-11T16:53:11Z",
- "content-type": "text/plain; charset=utf-8",
- "mtime": "2022-10-11T17:53:10.286745272+01:00",
- "owner": "user1@domain1.com",
- "permissions": "...",
- "description": "my nice file",
- "starred": "false"
- }
-}
+{
+ "SrcFs": "gdrive:",
+ "SrcFsType": "drive",
+ "DstFs": "newdrive:user",
+ "DstFsType": "onedrive",
+ "Remote": "test.txt",
+ "Size": 6,
+ "MimeType": "text/plain; charset=utf-8",
+ "ModTime": "2022-10-11T17:53:10.286745272+01:00",
+ "IsDir": false,
+ "ID": "xyz",
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain1.com",
+ "permissions": "...",
+ "description": "my nice file",
+ "starred": "false"
+ }
+}
The program should then modify the input as desired and send it to STDOUT. The returned Metadata
field will be used in its entirety for the destination object. Any other fields will be ignored. Note in this example we translate user names and permissions and add something to the description:
-{
- "Metadata": {
- "btime": "2022-10-11T16:53:11Z",
- "content-type": "text/plain; charset=utf-8",
- "mtime": "2022-10-11T17:53:10.286745272+01:00",
- "owner": "user1@domain2.com",
- "permissions": "...",
- "description": "my nice file [migrated from domain1]",
- "starred": "false"
- }
-}
+{
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain2.com",
+ "permissions": "...",
+ "description": "my nice file [migrated from domain1]",
+ "starred": "false"
+ }
+}
Metadata can be removed here too.
An example python program might look something like this to implement the above transformations.
-import sys, json
-
-i = json.load(sys.stdin)
-metadata = i["Metadata"]
-# Add tag to description
-if "description" in metadata:
- metadata["description"] += " [migrated from domain1]"
-else:
- metadata["description"] = "[migrated from domain1]"
-# Modify owner
-if "owner" in metadata:
- metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
-o = { "Metadata": metadata }
-json.dump(o, sys.stdout, indent="\t")
+import sys, json
+
+i = json.load(sys.stdin)
+metadata = i["Metadata"]
+# Add tag to description
+if "description" in metadata:
+ metadata["description"] += " [migrated from domain1]"
+else:
+ metadata["description"] = "[migrated from domain1]"
+# Modify owner
+if "owner" in metadata:
+ metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
+o = { "Metadata": metadata }
+json.dump(o, sys.stdout, indent="\t")
You can find this example (slightly expanded) in the rclone source code at bin/test_metadata_mapper.py.
If you want to see the input to the metadata mapper and the output returned from it in the log you can use -vv --dump mapper
.
See the metadata section for more info.
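For example, a sketch wiring the example program in as the mapper (the remotes are illustrative):
rclone copy gdrive: newdrive:user --metadata --metadata-mapper bin/test_metadata_mapper.py -vv --dump mapper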
@@ -7603,12 +8446,15 @@ y/n/s/!/q> n
The number of threads used to transfer is controlled by --multi-thread-streams
.
Use -vv
if you wish to see info about the threads.
This will work with the sync
/copy
/move
commands and friends copyto
/moveto
. Multi thread transfers will be used with rclone mount
and rclone serve
if --vfs-cache-mode
is set to writes
or above.
+Most multi-thread transfers do not take additional memory, but some do (for example uploading to s3). In the worst case memory usage can be at maximum --transfers * --multi-thread-chunk-size * --multi-thread-streams, or specifically for the s3 backend, --transfers * --s3-chunk-size * --s3-concurrency. However you can use the --max-buffer-memory flag to control the maximum memory used here.
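As a sketch, capping the worst case while keeping concurrency high (the values are illustrative):
rclone copy source:path s3:bucket --transfers 16 --multi-thread-streams 4 --max-buffer-memory 1G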
NB that this only works with supported backends as the destination but will work with any backend as the source.
NB that multi-thread copies are disabled for local to local copies as they are faster without unless --multi-thread-streams
is set explicitly.
NB on Windows using multi-thread transfers to the local disk will cause the resulting files to be sparse. Use --local-no-sparse
to disable sparse files (which may cause long delays at the start of transfers) or disable multi-thread transfers with --multi-thread-streams 0
--multi-thread-streams=N
When using multi thread transfers (see above --multi-thread-cutoff
) this sets the number of streams to use. Set to 0
to disable multi thread transfers (Default 4).
If the backend has a --backend-upload-concurrency
setting (eg --s3-upload-concurrency
) then this setting will be used as the number of transfers instead if it is larger than the value of --multi-thread-streams
or --multi-thread-streams
isn't set.
+
+--name-transform
introduces path name transformations for rclone copy
, rclone sync
, and rclone move
. These transformations enable modifications to source and destination file names by applying prefixes, suffixes, and other alterations during transfer operations. For detailed docs and examples, see convmv
.
--no-check-dest
The --no-check-dest
can be used with move
or copy
and it causes rclone not to check the destination at all when copying files.
This means that:
@@ -7991,6 +8837,7 @@ export RCLONE_CONFIG_PASS
--max-size
--min-age
--max-age
+--hash-filter
--dump filters
--metadata-include
--metadata-include-from
@@ -8043,7 +8890,7 @@ export RCLONE_CONFIG_PASS
Environment Variables
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
-Options
+Options
Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first, take the long option name, strip the leading --
, change -
to _
, make upper case and prepend RCLONE_
.
For example, to always set --stats 5s
, set the environment variable RCLONE_STATS=5s
. If you set stats on the command line this will override the environment variable setting.
@@ -8051,7 +8898,7 @@ export RCLONE_CONFIG_PASS
Verbosity is slightly different, the environment variable equivalent of --verbose
or -v
is RCLONE_VERBOSE=1
, or for -vv
, RCLONE_VERBOSE=2
.
The same parser is used for the options and the environment variables so they take exactly the same form.
The options set by environment variables can be seen with the -vv
flag, e.g. rclone version -vv
.
-Options that can appear multiple times (type stringArray) are treated slighly differently as environment variables can only be defined once. In order to allow a simple mechanism for adding one or many items, the input is treated as a CSV encoded string. For example
+Options that can appear multiple times (type stringArray) are treated slightly differently as environment variables can only be defined once. In order to allow a simple mechanism for adding one or many items, the input is treated as a CSV encoded string. For example
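A minimal illustration of the CSV mechanism, using the naming rule above applied to the stringArray option --exclude:
# equivalent to: --exclude "*.jpg" --exclude "*.png"
export RCLONE_EXCLUDE="*.jpg,*.png"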
@@ -8623,6 +9470,68 @@ user2/prefect
--min-age applies only to files and not to directories.
E.g. rclone ls remote: --min-age 2d lists files on remote: of 2 days old or more.
See the time option docs for valid formats.
+--hash-filter - Deterministically select a subset of files
+The --hash-filter flag enables selecting a deterministic subset of files, useful for:
+
+- Running large sync operations across multiple machines.
+- Checking a subset of files for bitrot.
+- Any other operations where a sample of files is required.
+
+Syntax
+The flag takes two parameters expressed as a fraction:
+--hash-filter K/N
+
+- N: The total number of partitions (must be a positive integer).
+- K: The specific partition to select (an integer from 0 to N).
+
+For example:
+- --hash-filter 1/3: Selects the first third of the files.
+- --hash-filter 2/3 and --hash-filter 3/3: Select the second and third partitions, respectively.
+Each partition is non-overlapping, ensuring all files are covered without duplication.
+Random Partition Selection
+Use @ as K to randomly select a partition:
+--hash-filter @/N
+For example, --hash-filter @/3 will randomly select a number between 0 and 2. This will stay constant across retries.
+How It Works
+
+- Rclone takes each file's full path, normalizes it to lowercase, and applies Unicode normalization.
+- It then hashes the normalized path into a 64 bit number.
+- The hash result is reduced modulo N to assign the file to a partition.
+- If the calculated partition does not match K, the file is excluded.
+- Other filters may apply if the file is not excluded.
+
+Important: Rclone will traverse all directories to apply the filter.
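To make the scheme concrete, here is a short, hedged sketch in Go (rclone's implementation language). It is not rclone's actual code: the FNV-1a hash and the bare ToLower normalization are stand-in assumptions for illustration; rclone's real hash function and Unicode normalization may differ.
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

// partition mimics the scheme described above: hash the normalized
// path to a 64 bit number and reduce it modulo n.
func partition(path string, n uint64) uint64 {
	normalized := strings.ToLower(path) // rclone also applies Unicode normalization
	h := fnv.New64a()
	h.Write([]byte(normalized))
	return h.Sum64() % n
}

func main() {
	for _, p := range []string{"file1.jpg", "file2.jpg", "file3.jpg"} {
		fmt.Printf("%s -> partition %d of 4\n", p, partition(p, 4))
	}
}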
+Usage Notes
+
+- Safe to use with rclone sync; source and destination selections will match.
+- Do not use with --delete-excluded, as this could delete unselected files.
+- Ignored if --files-from is used.
+
+Examples
+Dividing files into 4 partitions
+Assuming the current directory contains file1.jpg through file9.jpg:
+$ rclone lsf --hash-filter 0/4 .
+file1.jpg
+file5.jpg
+
+$ rclone lsf --hash-filter 1/4 .
+file3.jpg
+file6.jpg
+file9.jpg
+
+$ rclone lsf --hash-filter 2/4 .
+file2.jpg
+file4.jpg
+
+$ rclone lsf --hash-filter 3/4 .
+file7.jpg
+file8.jpg
+
+$ rclone lsf --hash-filter 4/4 . # the same as --hash-filter 0/4
+file1.jpg
+file5.jpg
+Syncing the first quarter of files
+rclone sync --hash-filter 1/4 source:path destination:path
+Checking a random 1% of files for integrity
+rclone check --download --hash-filter @/100 source:path destination:path
Other flags
--delete-excluded - Delete files on dest excluded from sync
Important: this flag is dangerous to your data - use with --dry-run and -v first.
@@ -8691,7 +9600,7 @@ dir1/dir2/dir3/.ignore
Log out
(More docs and walkthrough video to come!)
-How it works
+How it works
When you run the rclone rcd --rc-web-gui this is what happens:
- Rclone starts but only runs the remote control API ("rc").
@@ -8890,7 +9799,10 @@ dir1/dir2/dir3/.ignore
}
Setting config flags with _config
If you wish to set config (the equivalent of the global flags) for the duration of an rc call only then pass in the _config parameter.
-This should be in the same format as the config key returned by options/get.
+This should be in the same format as the main key returned by options/get.
+rclone rc --loopback options/get blocks=main
+You can see more help on these options with this command (see the options blocks section for more info).
+rclone rc --loopback options/info blocks=main
For example, if you wished to run a sync with the --checksum parameter, you would pass this parameter in your JSON blob.
"_config":{"CheckSum": true}
If using rclone rc this could be passed as
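One plausible shape for that call (the source and destination paths are placeholders; _config is passed as a JSON string):
rclone rc sync/sync srcFs=source: dstFs=dest: _config='{"CheckSum": true}'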
@@ -8903,6 +9815,9 @@ dir1/dir2/dir3/.ignore
Setting filter flags with _filter
If you wish to set filters for the duration of an rc call only then pass in the _filter parameter.
This should be in the same format as the filter key returned by options/get.
+rclone rc --loopback options/get blocks=filter
+You can see more help on these options with this command (see the options blocks section for more info).
+rclone rc --loopback options/info blocks=filter
For example, if you wished to run a sync with these flags
--max-size 1M --max-age 42s --include "a" --include "b"
you would pass this parameter in your JSON blob.
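A hedged sketch of such a blob; the field names are assumptions which should be checked against the output of options/get blocks=filter:
"_filter":{"MaxSize":"1M","MaxAge":"42s","IncludeRule":["a","b"]}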
@@ -8961,7 +9876,7 @@ dir1/dir2/dir3/.ignore
FieldName |
string |
N |
-name of the field used in the rc - if blank use Name |
+name of the field used in the rc - if blank use Name. May contain "." for nested fields. |
Help |
@@ -9165,6 +10080,7 @@ rclone rc cache/expire remote=/ withData=true
{
@@ -9386,6 +10304,7 @@ OR
"fatalError": boolean whether there has been at least one fatal error,
"lastError": last error string,
"renames" : number of files renamed,
+ "listed" : number of directory entries listed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
"serverSideCopies": number of server side copies done,
"serverSideCopyBytes": number bytes server side copied,
@@ -10073,6 +10992,93 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&
rc/noopauth: Echo the input to the output parameters requiring auth
This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.
Authentication is required for this call.
+serve/list: Show running servers
+Show running servers with IDs.
+This takes no parameters and returns
+
+- list: list of running serve commands
+
+Each list element will have
+
+- id: ID of the server
+- addr: address the server is running on
+- params: parameters used to start the server
+
+Eg
+rclone rc serve/list
+Returns
+{
+ "list": [
+ {
+ "addr": "[::]:4321",
+ "id": "nfs-ffc2a4e5",
+ "params": {
+ "fs": "remote:",
+ "opt": {
+ "ListenAddr": ":4321"
+ },
+ "type": "nfs",
+ "vfsOpt": {
+ "CacheMode": "full"
+ }
+ }
+ }
+ ]
+}
+Authentication is required for this call.
+serve/start: Create a new server
+Create a new server with the specified parameters.
+This takes the following parameters:
+
+- type - type of server: http, webdav, ftp, sftp, nfs, etc.
+- fs - remote storage path to serve
+- addr - the ip:port to run the server on, eg ":1234" or "localhost:1234"
+
+Other parameters are as described in the documentation for the relevant rclone serve command line options. To translate a command line option to an rc parameter, remove the leading -- and replace - with _, so --vfs-cache-mode becomes vfs_cache_mode. Note that global parameters must be set with _config and _filter as described above.
+Examples:
+rclone rc serve/start type=nfs fs=remote: addr=:4321 vfs_cache_mode=full
+rclone rc serve/start --json '{"type":"nfs","fs":"remote:","addr":":1234","vfs_cache_mode":"full"}'
+This will give the reply
+{
+ "addr": "[::]:4321", // Address the server was started on
+ "id": "nfs-ecfc6852" // Unique identifier for the server instance
+}
+Or an error if it failed to start.
+Stop the server with serve/stop and list the running servers with serve/list.
+Authentication is required for this call.
+serve/stop: Unserve selected active serve
+Stops a running serve instance by ID.
+This takes the following parameters:
+
+- id: as returned by serve/start
+
+This will give an empty response if successful or an error if not.
+Example:
+rclone rc serve/stop id=12345
+Authentication is required for this call.
+serve/stopall: Stop all active servers
+Stop all active servers.
+This will stop all active servers.
+rclone rc serve/stopall
+Authentication is required for this call.
+serve/types: Show all possible serve types
+This shows all possible serve types and returns them as a list.
+This takes no parameters and returns
+
+- types: list of serve types, eg "nfs", "sftp", etc
+
+The serve types are strings like "nfs", "sftp", "http" and can be passed to serve/start as the type parameter.
+Eg
+rclone rc serve/types
+Returns
+{
+ "types": [
+ "http",
+ "sftp",
+ "nfs"
+ ]
+}
+Authentication is required for this call.
sync/bisync: Perform bidirectional synchronization between two paths.
This takes the following parameters
@@ -10160,7 +11166,7 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&
},
],
}
-The expiry time is the time until the file is elegible for being uploaded in floating point seconds. This may go negative. As rclone only transfers --transfers files at once, only the lowest --transfers expiry times will have uploading as true. So there may be files with negative expiry times for which uploading is false.
+The expiry time is the time until the file is eligible for being uploaded in floating point seconds. This may go negative. As rclone only transfers --transfers files at once, only the lowest --transfers expiry times will have uploading as true. So there may be files with negative expiry times for which uploading is false.
This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
vfs/queue-set-expiry: Set the expiry time for an item queued for upload.
Use this to adjust the expiry time for an item in the upload queue. You will need to read the id of the item using vfs/queue before using this call.
@@ -10433,6 +11439,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
+FileLu Cloud Storage |
+MD5 |
+R/W |
+No |
+Yes |
+R |
+- |
+
+
Files.com |
MD5, CRC32 |
DR/W |
@@ -10441,7 +11456,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
FTP |
- |
R/W ¹⁰ |
@@ -10450,7 +11465,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Gofile |
MD5 |
DR/W |
@@ -10459,7 +11474,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
Google Cloud Storage |
MD5 |
R/W |
@@ -10468,7 +11483,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
- |
-
+
Google Drive |
MD5, SHA1, SHA256 |
DR/W |
@@ -10477,7 +11492,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
DRWU |
-
+
Google Photos |
- |
- |
@@ -10486,7 +11501,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
HDFS |
- |
R/W |
@@ -10495,7 +11510,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
HiDrive |
HiDrive ¹² |
R/W |
@@ -10504,7 +11519,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
HTTP |
- |
R |
@@ -10513,7 +11528,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
iCloud Drive |
- |
R |
@@ -10522,7 +11537,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Internet Archive |
MD5, SHA1, CRC32 |
R/W ¹¹ |
@@ -10531,7 +11546,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
RWU |
-
+
Jottacloud |
MD5 |
R/W |
@@ -10540,7 +11555,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
RW |
-
+
Koofr |
MD5 |
- |
@@ -10549,7 +11564,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Linkbox |
- |
R |
@@ -10558,7 +11573,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Mail.ru Cloud |
Mailru ⁶ |
R/W |
@@ -10567,7 +11582,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Mega |
- |
- |
@@ -10576,7 +11591,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Memory |
MD5 |
R/W |
@@ -10585,7 +11600,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Microsoft Azure Blob Storage |
MD5 |
R/W |
@@ -10594,7 +11609,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
- |
-
+
Microsoft Azure Files Storage |
MD5 |
R/W |
@@ -10603,7 +11618,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
- |
-
+
Microsoft OneDrive |
QuickXorHash ⁵ |
DR/W |
@@ -10612,7 +11627,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
DRW |
-
+
OpenDrive |
MD5 |
R/W |
@@ -10621,7 +11636,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
OpenStack Swift |
MD5 |
R/W |
@@ -10630,7 +11645,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
- |
-
+
Oracle Object Storage |
MD5 |
R/W |
@@ -10639,7 +11654,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
- |
-
+
pCloud |
MD5, SHA1 ⁷ |
R/W |
@@ -10648,7 +11663,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
W |
- |
-
+
PikPak |
MD5 |
R |
@@ -10657,7 +11672,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
Pixeldrain |
SHA256 |
R/W |
@@ -10666,7 +11681,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
RW |
-
+
premiumize.me |
- |
- |
@@ -10675,7 +11690,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
put.io |
CRC-32 |
R/W |
@@ -10684,7 +11699,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
Proton Drive |
SHA1 |
R/W |
@@ -10693,7 +11708,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
QingStor |
MD5 |
- ⁹ |
@@ -10702,7 +11717,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
- |
-
+
Quatrix by Maytech |
- |
R/W |
@@ -10711,7 +11726,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Seafile |
- |
- |
@@ -10720,7 +11735,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
SFTP |
MD5, SHA1 ² |
DR/W |
@@ -10729,7 +11744,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Sia |
- |
- |
@@ -10738,7 +11753,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
SMB |
- |
R/W |
@@ -10747,7 +11762,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
SugarSync |
- |
- |
@@ -10756,7 +11771,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Storj |
- |
R |
@@ -10765,7 +11780,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Uloz.to |
MD5, SHA256 ¹³ |
- |
@@ -10774,7 +11789,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Uptobox |
- |
- |
@@ -10783,7 +11798,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
WebDAV |
MD5, SHA1 ³ |
R ⁴ |
@@ -10792,7 +11807,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Yandex Disk |
MD5 |
R/W |
@@ -10801,7 +11816,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
Zoho WorkDrive |
- |
- |
@@ -10810,7 +11825,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
The local filesystem |
All |
DR/W |
@@ -12166,6 +13181,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -12185,6 +13201,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
+ --list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
@@ -12215,13 +13232,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--header stringArray Set HTTP header for all transactions
--header-download stringArray Set HTTP header for download transactions
--header-upload stringArray Set HTTP header for upload transactions
+ --max-connections int Maximum number of simultaneous backend API connections, 0 for unlimited
--no-check-certificate Do not verify the server SSL certificate (insecure)
--no-gzip-encoding Don't set Accept-Encoding: gzip
--timeout Duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.70.0")
Flags helpful for increasing performance.
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -12244,6 +13262,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
-i, --interactive Enable interactive mode
--kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s)
--low-level-retries int Number of low level retries to do (default 10)
+ --max-buffer-memory SizeSuffix If set, don't allocate more than this amount of memory as buffers (default off)
--no-console Hide console window (supported on Windows only)
--no-unicode-normalization Don't normalize unicode characters in filenames
--password-command SpaceSepList Command for supplying password for encrypted configuration
@@ -12269,6 +13288,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -12290,7 +13310,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Logging
Flags for logging and statistics.
--log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
+ --log-format Bits Comma separated list of log format options (default date,time)
--log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
@@ -12345,6 +13365,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
+ --rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
@@ -12368,6 +13389,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--metrics-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--metrics-template string User-specified template
--metrics-user string User name for authentication
+ --metrics-user-from-header string User name from a defined HTTP header
--rc-enable-metrics Enable the Prometheus metrics path at the remote control server
Backend
Backend-only flags (these can be set in the config file also).
@@ -12382,6 +13404,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
+ --azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
@@ -12405,6 +13429,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-az Use Azure CLI tool az for authentication
+ --azureblob-use-copy-blob Whether to use the Copy Blob API when copying to the same storage account (default true)
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -12417,6 +13442,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
--azurefiles-connection-string string Azure Files Connection String
--azurefiles-description string Description of the remote
+ --azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
--azurefiles-endpoint string Endpoint for the service
--azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -12431,6 +13457,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --azurefiles-use-az Use Azure CLI tool az for authentication
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -12493,12 +13520,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
+ --cloudinary-adjust-media-files-extensions Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems (default true)
--cloudinary-api-key string Cloudinary API Key
--cloudinary-api-secret string Cloudinary API Secret
--cloudinary-cloud-name string Cloudinary Environment Name
--cloudinary-description string Description of the remote
--cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-media-extensions stringArray Cloudinary supported media extensions (default 3ds,3g2,3gp,ai,arw,avi,avif,bmp,bw,cr2,cr3,djvu,dng,eps3,fbx,flif,flv,gif,glb,gltf,hdp,heic,heif,ico,indd,jp2,jpe,jpeg,jpg,jxl,jxr,m2ts,mov,mp4,mpeg,mts,mxf,obj,ogv,pdf,ply,png,psd,svg,tga,tif,tiff,ts,u3ma,usdz,wdp,webm,webp,wmv)
--cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
--cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--combine-description string Description of the remote
@@ -12522,6 +13551,10 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--crypt-show-mapping For all files listed show how the names encrypt
--crypt-strict-names If set, this will raise an error when crypt comes across a filename that can't be decrypted
--crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin")
+ --doi-description string Description of the remote
+ --doi-doi string The DOI or the doi.org URL
+ --doi-doi-resolver-api-url string The URL of the DOI resolver API to use
+ --doi-provider string DOI provider
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -12573,7 +13606,6 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
- --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -12583,11 +13615,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-export-formats CommaSepList Comma separated list of preferred formats for exporting files (default html,md)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
+ --dropbox-show-all-exports Show all exportable files in listings
+ --dropbox-skip-exports Skip exportable files in all listings
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
@@ -12605,6 +13640,9 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
+ --filelu-description string Description of the remote
+ --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
+ --filelu-key string Your FileLu Rclone key from My Account
--filescom-api-key string The API key used to authenticate with Files.com
--filescom-description string Description of the remote
--filescom-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -12664,7 +13702,6 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--gofile-list-chunk int Number of items to list in each call (default 1000)
--gofile-root-folder-id string ID of the root folder
--gphotos-auth-url string Auth server URL
- --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -12732,6 +13769,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
+ --internetarchive-item-derive Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload (default true)
+ --internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
@@ -12827,6 +13866,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
+ --onedrive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default off)
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--oos-compartment string Specify compartment OCID, if you need to list buckets
@@ -12852,6 +13892,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard")
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --opendrive-access string Files and folders will be uploaded with this access permission (default private) (default "private")
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-description string Description of the remote
--opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -12948,6 +13989,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
+ --s3-ibm-api-key string IBM API Key to be used to obtain IAM token
+ --s3-ibm-resource-instance-id string IBM service instance id
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
@@ -12968,6 +14011,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
+ --s3-sign-accept-encoding Tristate Set if rclone should include Accept-Encoding as part of the signature (default unset)
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
@@ -12984,6 +14028,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-use-unsigned-payload Tristate Whether to use an unsigned payload in PutObject (default unset)
+ --s3-use-x-id Tristate Set if rclone should add x-id URL parameters (default unset)
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-version-deleted Show deleted file markers when using versions
@@ -13009,6 +14054,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-host-key-algorithms SpaceSepList Space separated list of host key algorithms, ordered by preference
+ --sftp-http-proxy string URL for HTTP CONNECT proxy
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-exchange SpaceSepList Space separated list of key exchange algorithms, ordered by preference
--sftp-key-file string Path to PEM-encoded private key file
@@ -13063,6 +14109,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
--smb-spn string Service principal name
+ --smb-use-kerberos Use Kerberos authentication
--smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
@@ -13170,7 +14217,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
In the first example we will use the SFTP rclone volume with Docker engine on a standalone Ubuntu machine.
Start from installing Docker on the host.
The FUSE driver is a prerequisite for rclone mounting and should be installed on host:
-sudo apt-get -y install fuse
+sudo apt-get -y install fuse3
Create two directories required by rclone docker plugin:
sudo mkdir -p /var/lib/docker-plugins/rclone/config
sudo mkdir -p /var/lib/docker-plugins/rclone/cache
@@ -13319,7 +14366,7 @@ systemctl start docker-volume-rclone.socket
systemctl restart docker
Or run the service directly:
- run systemctl daemon-reload to let systemd pick up new config
- run systemctl enable docker-volume-rclone.service to make the new service start automatically when you power on your machine.
- run systemctl start docker-volume-rclone.service to start the service now.
- run systemctl restart docker to restart docker daemon and let it detect the new plugin socket. Note that this step is not needed in managed mode where docker knows about plugin state changes.
The two methods are equivalent from the user perspective, but I personally prefer socket activation.
-Troubleshooting
+Troubleshooting
You can see managed plugin settings with
docker plugin list
docker plugin inspect rclone
@@ -13516,7 +14563,7 @@ rclone copy Path1 Path2 [--create-empty-src-dirs]
As a safety check, if greater than the --max-delete percent of files were deleted on either the Path1 or Path2 filesystem, then bisync will abort with a warning message, without making any changes. The default --max-delete is 50%. One way to trigger this limit is to rename a directory that contains more than half of your files. This will appear to bisync as a bunch of deleted files and a bunch of new files. This safety check is intended to block bisync from deleting all of the files on both filesystems due to a temporary network access issue, or if the user had inadvertently deleted the files on one side or the other. To force the sync, either set a different delete percentage limit, e.g. --max-delete 75 (allows up to 75% deletion), or use --force to bypass the check.
Also see the all files changed check.
--filters-file
-By using rclone filter features you can exclude file types or directory sub-trees from the sync. See the bisync filters section and generic --filter-from documentation. An example filters file contains filters for non-allowed files for synching with Dropbox.
+By using rclone filter features you can exclude file types or directory sub-trees from the sync. See the bisync filters section and generic --filter-from documentation. An example filters file contains filters for non-allowed files for syncing with Dropbox.
If you make changes to your filters file then bisync requires a run with --resync. This is a safety feature, which prevents existing files on the Path1 and/or Path2 side from seeming to disappear from view (since they are excluded in the new listings), which would fool bisync into seeing them as deleted (as compared to the prior run listings), and then bisync would proceed to delete them for real.
To block this from happening, bisync calculates an MD5 hash of the filters file and stores the hash in a .md5 file in the same place as your filters file. On the next run with --filters-file set, bisync re-calculates the MD5 hash of the current filters file and compares it to the hash stored in the .md5 file. If they don't match, the run aborts with a critical error and thus forces you to do a --resync, likely avoiding a disaster.
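A rough shell sketch of the effect of this guard (file names are placeholders; bisync performs the equivalent internally):
md5sum myfilters.txt > myfilters.txt.md5   # stored alongside the filters file on the first run
md5sum -c myfilters.txt.md5 || echo "filters changed - run bisync with --resync"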
--conflict-resolve CHOICE
@@ -13556,7 +14603,7 @@ rclone copy Path1 Path2 [--create-empty-src-dirs]
--check-sync
Enabled by default, the check-sync function checks that all of the same files exist in both the Path1 and Path2 history listings. This check-sync integrity check is performed at the end of the sync run by default. Any untrapped failing copy/deletes between the two paths might result in differences between the two listings and in the untracked file content differences between the two paths. A resync run would correct the error.
Note that the default-enabled integrity check locally executes a load of both the final Path1 and Path2 listings, and thus adds to the run time of a sync. Using --check-sync=false will disable it and may significantly reduce the sync run times for very large numbers of files.
-The check may be run manually with --check-sync=only. It runs only the integrity check and terminates without actually synching.
+The check may be run manually with --check-sync=only. It runs only the integrity check and terminates without actually syncing.
Note that currently, --check-sync only checks listing snapshots and NOT the actual files on the remotes. Note also that the listing snapshots will not know about any changes that happened during or after the latest bisync run, as those will be discovered on the next run. Therefore, while listings should always match each other at the end of a bisync run, it is expected that they will not match the underlying remotes, nor will the remotes match each other, if there were changes during or after the run. This is normal, and any differences will be detected and synced on the next run.
For a robust integrity check of the current state of the remotes (as opposed to just their listing snapshots), consider using check (or cryptcheck, if at least one path is a crypt remote) instead of --check-sync, keeping in mind that differences are expected if files changed during or after your last bisync run.
For example, a possible sequence could look like this:
@@ -13786,7 +14833,7 @@ rclone copy Path1 Path2 [--create-empty-src-dirs]
See filtering documentation for how filter rules are written and interpreted.
Bisync's --filters-file flag slightly extends rclone's --filter-from filtering mechanism. For a given bisync run you may provide only one --filters-file. The --include*, --exclude*, and --filter flags are also supported.
How to filter directories
-Filtering portions of the directory tree is a critical feature for synching.
+Filtering portions of the directory tree is a critical feature for syncing.
Examples of directory trees (always beneath the Path1/Path2 root level) you may want to exclude from your sync:
- Directory trees containing only software build intermediate files.
- Directory trees containing application temporary files and data such as the Windows C:\Users\MyLogin\AppData\ tree.
- Directory trees containing files that are large, less important, or are getting thrashed continuously by ongoing processes.
On the other hand, there may be only select directories that you actually want to sync, and exclude all others. See the Example include-style filters for Windows user directories below.
Filters file writing guidelines
@@ -13853,7 +14900,7 @@ rclone copy Path1 Path2 [--create-empty-src-dirs]
Note also that Windows implements several "library" links such as C:\Users\MyLogin\My Documents\My Music pointing to C:\Users\MyLogin\Music. rclone sees these as links, so you must add --links to the bisync command line if you wish to follow these links. I find that I get permission errors in trying to follow the links, so I don't include the rclone --links flag, but then you get lots of Can't follow symlink… noise from rclone about not following the links. This noise can be quashed by adding --quiet to the bisync command line.
Example exclude-style filters files for use with Dropbox
-- Dropbox disallows synching the listed temporary and configuration/data files. The `- ` filters exclude these files where ever they may occur in the sync tree. Consider adding similar exclusions for file types you don't need to sync, such as core dump and software build files.
+- Dropbox disallows syncing the listed temporary and configuration/data files. The `- ` filters exclude these files wherever they may occur in the sync tree. Consider adding similar exclusions for file types you don't need to sync, such as core dump and software build files.
- bisync testing creates /testdir/ at the top level of the sync tree, and usually deletes the tree after the test. If a normal sync should run while the /testdir/ tree exists the --check-access phase may fail due to unbalanced RCLONE_TEST files. The `- /testdir/` filter blocks this tree from being synced. You don't need this exclusion if you are not doing bisync development testing.
- Everything else beneath the Path1/Path2 root will be synced.
- RCLONE_TEST files may be placed anywhere within the tree, including the root.
@@ -14013,7 +15060,7 @@ Options:
Note: unlike rclone flags which must be prefixed by double dash (--), the test command flags can be equally prefixed by a single - or double dash.
Running tests
-go test . -case basic -remote local -remote2 local runs the test_basic test case using only the local filesystem, synching one local directory with another local directory. Test script output is to the console, while commands within scenario.txt have their output sent to the .../workdir/test.log file, which is finally compared to the golden copy.
+go test . -case basic -remote local -remote2 local runs the test_basic test case using only the local filesystem, syncing one local directory with another local directory. Test script output is to the console, while commands within scenario.txt have their output sent to the .../workdir/test.log file, which is finally compared to the golden copy.
- The first argument after go test should be a relative name of the directory containing bisync source code. If you run tests right from there, the argument will be . (current directory) as in most examples below. If you run bisync tests from the rclone source directory, the command should be go test ./cmd/bisync ...
- The test engine will mangle rclone output to ensure comparability with golden listings and logs.
- Test scenarios are located in ./cmd/bisync/testdata. The test -case argument should match the full name of a subdirectory under that directory. Every test subdirectory name on disk must start with test_, this prefix can be omitted on command line for brevity. Also, underscores in the name can be replaced by dashes for convenience.
@@ -14150,6 +15197,10 @@ Options:
Bisync adopts the differential synchronization technique, which is based on keeping history of changes performed by both synchronizing sides. See the Dual Shadow Method section in Neil Fraser's article.
Also note a number of academic publications by Benjamin Pierce about Unison and synchronization in general.
Changelog
+v1.69.1
+
+- Fixed an issue causing listings to not capture concurrent modifications under certain conditions
+
v1.68
- Fixed an issue affecting backends that round modtimes to a lower precision.
@@ -14578,9 +15629,11 @@ e/n/d/r/c/s/q> q
- Liara Object Storage
- Linode Object Storage
- Magalu Object Storage
+- MEGA S4 Object Storage
- Minio
- Outscale
- Petabox
+- Pure Storage FlashBlade
- Qiniu Cloud Object Storage (Kodo)
- RackCorp Object Storage
- Rclone Serve S3
@@ -15072,7 +16125,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
- This is a policy that can be used when creating a bucket. It assumes that USER_NAME has been created.
- The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.
- When using s3-no-check-bucket and the bucket already exsits, the "arn:aws:s3:::BUCKET_NAME" doesn't have to be included.
+- When using s3-no-check-bucket and the bucket already exists, the "arn:aws:s3:::BUCKET_NAME" doesn't have to be included.
For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync.
Key Management System (KMS)
@@ -15080,7 +16133,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
Glacier and Glacier Deep Archive
You can upload objects using the glacier storage class or transition them to glacier using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access data from the glacier storage class you will see an error like below.
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
-In this case you need to restore the object(s) in question before using rclone.
+In this case you need to restore the object(s) in question before accessing object contents. The restore section below shows how to do this with rclone.
Note that rclone only speaks the S3 API it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults.
Object-lock enabled S3 bucket
According to AWS's documentation on S3 Object Lock:
@@ -15089,7 +16142,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
As mentioned in the Modification times and hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set --s3-upload-cutoff 0 and force all the files to be uploaded as multipart.
Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
--s3-provider
Choose your S3 provider.
Properties:
@@ -15132,6 +16185,14 @@ $ rclone -q --s3-versions ls s3:cleanup-test
+"Exaba"
+
+"FlashBlade"
+
+- Pure Storage FlashBlade Object Storage
+
"GCS"
- Google Cloud Storage
@@ -15172,6 +16233,10 @@ $ rclone -q --s3-versions ls s3:cleanup-test
+"Mega"
+
+- MEGA S4 Object Storage
+
"Minio"
- Minio Object Storage
@@ -15562,7 +16627,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
- Config: acl
- Env Var: RCLONE_S3_ACL
-- Provider: !Storj,Selectel,Synology,Cloudflare
+- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade,Mega
- Type: string
- Required: false
- Examples:
@@ -15726,8 +16791,28 @@ $ rclone -q --s3-versions ls s3:cleanup-test
+--s3-ibm-api-key
+IBM API Key to be used to obtain IAM token
+Properties:
+
+- Config: ibm_api_key
+- Env Var: RCLONE_S3_IBM_API_KEY
+- Provider: IBMCOS
+- Type: string
+- Required: false
+
+--s3-ibm-resource-instance-id
+IBM service instance id
+Properties:
+
+- Config: ibm_resource_instance_id
+- Env Var: RCLONE_S3_IBM_RESOURCE_INSTANCE_ID
+- Provider: IBMCOS
+- Type: string
+- Required: false
+
Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
--s3-bucket-acl
Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
@@ -15737,6 +16822,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
+- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade
- Type: string
- Required: false
- Examples:
@@ -16291,6 +17377,30 @@ Windows: "%USERPROFILE%\.aws\credentials"
- Type: Tristate
- Default: unset
+--s3-use-x-id
+Set if rclone should add x-id URL parameters.
+You can change this if you want to disable the AWS SDK from adding x-id URL parameters.
+This shouldn't be necessary in normal operation.
+This should be automatically set correctly for all providers rclone knows about - please make a bug report if not.
+Properties:
+
+- Config: use_x_id
+- Env Var: RCLONE_S3_USE_X_ID
+- Type: Tristate
+- Default: unset
+
+--s3-sign-accept-encoding
+Set if rclone should include Accept-Encoding as part of the signature.
+You can change this if you want to stop rclone including Accept-Encoding as part of the signature.
+This shouldn't be necessary in normal operation.
+This should be automatically set correctly for all providers rclone knows about - please make a bug report if not.
+Properties:
+
+- Config: sign_accept_encoding
+- Env Var: RCLONE_S3_SIGN_ACCEPT_ENCODING
+- Type: Tristate
+- Default: unset
+
--s3-directory-bucket
Set to use AWS Directory Buckets
If you are using an AWS Directory Bucket then set this flag.
@@ -16440,7 +17550,7 @@ Windows: "%USERPROFILE%\.aws\credentials"
rclone backend restore remote: [options] [<arguments>+]
This command can be used to restore one or more objects from GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.
Usage Examples:
-rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
+rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
@@ -17003,7 +18113,7 @@ Choose a number from below, or type in your own value
32 / Toronto Flex
\ "tor01-flex"
location_constraint>1
-
+
- Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
Canned ACL used when creating buckets and/or storing objects in S3.
@@ -17018,7 +18128,7 @@ Choose a number from below, or type in your own value
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
\ "authenticated-read"
acl> 1
-
+
- Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this
[xxx]
@@ -17029,7 +18139,7 @@ acl> 1
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
-
+
- Execute rclone commands
1) Create a bucket.
@@ -17047,6 +18157,29 @@ acl> 1
rclone copy IBM-COS-XREGION:newbucket/file.txt .
6) Delete a file on remote.
rclone delete IBM-COS-XREGION:newbucket/file.txt
+IBM IAM authentication
+If using IBM IAM authentication with IBM API KEY you need to fill in these additional parameters:
+1. Select false for env_auth
+2. Leave access_key_id and secret_access_key blank
+3. Paste your ibm_api_key
+Option ibm_api_key.
+IBM API Key to be used to obtain IAM token
+Enter a value of type string. Press Enter for the default (1).
+ibm_api_key>
+
+- Paste your ibm_resource_instance_id
+
+Option ibm_resource_instance_id.
+IBM service instance id
+Enter a value of type string. Press Enter for the default (2).
+ibm_resource_instance_id>
+
+- In advanced settings type true for v2_auth
+
+Option v2_auth.
+If true use v2 authentication.
+If this is false (the default) then rclone will use v4 authentication.
+If it is set then rclone will use v2 authentication.
+Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
+Enter a boolean value (true or false). Press Enter for the default (true).
+v2_auth>
IDrive e2
Here is an example of making an IDrive e2 configuration. First run:
rclone config
@@ -17632,7 +18765,7 @@ region = au-nsw
endpoint = s3.rackcorp.com
location_constraint = au-nsw
Rclone Serve S3
-Rclone can serve any remote over the S3 protocol. For details see the rclone serve s3 documentation.
+Rclone can serve any remote over the S3 protocol. For details see the rclone serve s3 documentation.
For example, to serve remote:path
over s3, run the server like this:
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
This will be compatible with an rclone remote which is defined like this:
@@ -17643,7 +18776,7 @@ endpoint = http://127.0.0.1:8080/
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
-Note that setting disable_multipart_uploads = true is to work around a bug which will be fixed in due course.
+Note that setting use_multipart_uploads = false is to work around a bug which will be fixed in due course.
Scaleway
Scaleway The Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. Files can be dropped from the Scaleway console or transferred through our API and CLI or using any S3-compatible tool.
Scaleway provides an S3 interface which can be configured for use with rclone like this:
@@ -17718,18 +18851,15 @@ Press Enter to leave empty.
2 | E.g. pre Jewel/v10 CEPH.
\ (other-v2-signature)
region>
-Choose an endpoint from the list
-Endpoint for S3 API.
+Enter your Lyve Cloud endpoint. This field cannot be kept empty.
+Endpoint for Lyve Cloud S3 API.
Required when using an S3 clone.
-Choose a number from below, or type in your own value.
-Press Enter to leave empty.
- 1 / Seagate Lyve Cloud US East 1 (Virginia)
- \ (s3.us-east-1.lyvecloud.seagate.com)
- 2 / Seagate Lyve Cloud US West 1 (California)
- \ (s3.us-west-1.lyvecloud.seagate.com)
- 3 / Seagate Lyve Cloud AP Southeast 1 (Singapore)
- \ (s3.ap-southeast-1.lyvecloud.seagate.com)
-endpoint> 1
+Please type in your LyveCloud endpoint.
+Examples:
+- s3.us-west-1.{account_name}.lyve.seagate.com (US West 1 - California)
+- s3.eu-west-1.{account_name}.lyve.seagate.com (EU West 1 - Ireland)
+Enter a value.
+endpoint> s3.us-west-1.global.lyve.seagate.com
Leave location constraint blank
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
@@ -18581,27 +19711,49 @@ Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
- 1 / Atlanta, GA (USA), us-southeast-1
+ 1 / Amsterdam (Netherlands), nl-ams-1
+ \ (nl-ams-1.linodeobjects.com)
+ 2 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
- 2 / Chicago, IL (USA), us-ord-1
+ 3 / Chennai (India), in-maa-1
+ \ (in-maa-1.linodeobjects.com)
+ 4 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
- 3 / Frankfurt (Germany), eu-central-1
+ 5 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
- 4 / Milan (Italy), it-mil-1
+ 6 / Jakarta (Indonesia), id-cgk-1
+ \ (id-cgk-1.linodeobjects.com)
+ 7 / London 2 (Great Britain), gb-lon-1
+ \ (gb-lon-1.linodeobjects.com)
+ 8 / Los Angeles, CA (USA), us-lax-1
+ \ (us-lax-1.linodeobjects.com)
+ 9 / Madrid (Spain), es-mad-1
+ \ (es-mad-1.linodeobjects.com)
+10 / Melbourne (Australia), au-mel-1
+ \ (au-mel-1.linodeobjects.com)
+11 / Miami, FL (USA), us-mia-1
+ \ (us-mia-1.linodeobjects.com)
+12 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
- 5 / Newark, NJ (USA), us-east-1
+13 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
- 6 / Paris (France), fr-par-1
+14 / Osaka (Japan), jp-osa-1
+ \ (jp-osa-1.linodeobjects.com)
+15 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
- 7 / Seattle, WA (USA), us-sea-1
+16 / São Paulo (Brazil), br-gru-1
+ \ (br-gru-1.linodeobjects.com)
+17 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
- 8 / Singapore ap-south-1
+18 / Singapore, ap-south-1
\ (ap-south-1.linodeobjects.com)
- 9 / Stockholm (Sweden), se-sto-1
+19 / Singapore 2, sg-sin-1
+ \ (sg-sin-1.linodeobjects.com)
+20 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
-10 / Washington, DC, (USA), us-iad-1
+21 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
-endpoint> 3
+endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@@ -18748,6 +19900,101 @@ provider = Magalu
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = br-ne1.magaluobjects.com
+MEGA S4
+MEGA S4 Object Storage is an S3 compatible object storage system. It has a single pricing tier with no additional charges for data transfers or API requests and it is included in existing Pro plans.
+Here is an example of making a configuration. First run:
+rclone config
+This will guide you through an interactive setup process.
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> megas4
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including AWS,... Mega, ...
+ \ (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / MEGA S4 Object Storage
+ \ (Mega)
+[snip]
+provider> Mega
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth>
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> XXX
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> XXX
+
+Option endpoint.
+Endpoint for S3 API.
+Required when using an S3 clone.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Mega S4 eu-central-1 (Amsterdam)
+ \ (s3.eu-central-1.s4.mega.io)
+ 2 / Mega S4 eu-central-2 (Bettembourg)
+ \ (s3.eu-central-2.s4.mega.io)
+ 3 / Mega S4 ca-central-1 (Montreal)
+ \ (s3.ca-central-1.s4.mega.io)
+ 4 / Mega S4 ca-west-1 (Vancouver)
+ \ (s3.ca-west-1.s4.mega.io)
+endpoint> 1
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: Mega
+- access_key_id: XXX
+- secret_access_key: XXX
+- endpoint: s3.eu-central-1.s4.mega.io
+Keep this "megas4" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+This will leave the config file looking like this.
+[megas4]
+type = s3
+provider = Mega
+access_key_id = XXX
+secret_access_key = XXX
+endpoint = s3.eu-central-1.s4.mega.io
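Once saved, the remote works like any other S3 remote; for example (the bucket name is illustrative):
rclone mkdir megas4:my-bucket
rclone copy /home/source megas4:my-bucket
rclone ls megas4:my-bucket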
ArvanCloud
ArvanCloud ArvanCloud Object Storage goes beyond the limited traditional file storage. It gives you access to backup and archived files and allows sharing. Files like profile image in the app, images sent by users or scanned documents can be stored securely and easily in our Object Storage service.
ArvanCloud provides an S3 interface which can be configured for use with rclone like this.
@@ -18958,7 +20205,7 @@ cos s3
For Netease NOS configure as per the configurator rclone config setting the provider Netease. This will automatically set force_path_style = false which is necessary for it to run properly.
Petabox
Here is an example of making a Petabox configuration. First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
@@ -19102,6 +20349,102 @@ access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
endpoint = s3.petabox.io
+Pure Storage FlashBlade
+Pure Storage FlashBlade is a high performance S3-compatible object store.
+FlashBlade supports most modern S3 features including:
+
+- ListObjectsV2
+- Multipart uploads with AWS-compatible ETags
+- Advanced checksum algorithms (SHA256, CRC32, CRC32C) with trailer support (Purity//FB 4.4.2+)
+- Object versioning and lifecycle management
+- Virtual hosted-style requests (requires DNS configuration)
+
+To configure rclone for Pure Storage FlashBlade:
+First run:
+rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> flashblade
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+ 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
+ \ (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+ 9 / Pure Storage FlashBlade Object Storage
+ \ (FlashBlade)
+[snip]
+provider> FlashBlade
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth> 1
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY_ID
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option endpoint.
+Endpoint for S3 API.
+Required when using an S3 clone.
+Enter a value. Press Enter to leave empty.
+endpoint> https://s3.flashblade.example.com
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: FlashBlade
+- access_key_id: ACCESS_KEY_ID
+- secret_access_key: SECRET_ACCESS_KEY
+- endpoint: https://s3.flashblade.example.com
+Keep this "flashblade" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+This results in the following configuration being stored in ~/.config/rclone/rclone.conf:
+[flashblade]
+type = s3
+provider = FlashBlade
+access_key_id = ACCESS_KEY_ID
+secret_access_key = SECRET_ACCESS_KEY
+endpoint = https://s3.flashblade.example.com
+Note: The FlashBlade endpoint should be the S3 data VIP. For virtual-hosted style requests, ensure proper DNS configuration: subdomains of the endpoint hostname should resolve to a FlashBlade data VIP. For example, if your endpoint is https://s3.flashblade.example.com, then bucket-name.s3.flashblade.example.com should also resolve to the data VIP.
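A quick way to verify the new remote is to list buckets and push a test file (the bucket name is illustrative):
rclone lsd flashblade:
rclone mkdir flashblade:rclone-test
rclone copy /tmp/testfile flashblade:rclone-test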
Storj
Storj is a decentralized cloud storage which can be used through its native protocol or an S3 compatible gateway.
The S3 compatible gateway is configured using rclone config with a type of s3 and with a provider name of Storj. Here is an example run of the configurator.
@@ -19152,7 +20495,7 @@ y/n> n
Due to issue #39 uploading multipart files via the S3 gateway causes them to lose their metadata. For rclone's purpose this means that the modification time is not stored, nor is any MD5SUM (if one is available from the source).
This has the following consequences:
-- Using rclone rcat will fail as the medatada doesn't match after upload
+- Using rclone rcat will fail as the metadata doesn't match after upload
- Uploading files with rclone mount will fail for the same reason
- This can be worked around by using --vfs-cache-mode writes or --vfs-cache-mode full or setting --s3-upload-cutoff large
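For example, raising the upload cutoff keeps files below that size out of the multipart path entirely (remote name, bucket and size are illustrative):
rclone copy --s3-upload-cutoff 256M /local/path storj-s3:bucket/path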
@@ -19795,7 +21138,7 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
See the remote setup docs for how to set it up on a machine with no Internet browser available.
-Note that rclone runs a webserver on your local machine to collect the token as returned from Box. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall.
+Note that rclone runs a webserver on your local machine to collect the token as returned from Box. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone like this,
List directories in top level of your Box
rclone lsd remote:
@@ -21008,6 +22351,24 @@ y/e/d> y
- Type: Duration
- Default: 0s
+--cloudinary-adjust-media-files-extensions
+Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems.
+Properties:
+
+- Config: adjust_media_files_extensions
+- Env Var: RCLONE_CLOUDINARY_ADJUST_MEDIA_FILES_EXTENSIONS
+- Type: bool
+- Default: true
+
+--cloudinary-media-extensions
+Cloudinary supported media extensions.
+Properties:
+
+- Config: media_extensions
+- Env Var: RCLONE_CLOUDINARY_MEDIA_EXTENSIONS
+- Type: stringArray
+- Default: [3ds 3g2 3gp ai arw avi avif bmp bw cr2 cr3 djvu dng eps3 fbx flif flv gif glb gltf hdp heic heif ico indd jp2 jpe jpeg jpg jxl jxr m2ts mov mp4 mpeg mts mxf obj ogv pdf ply png psd svg tga tif tiff ts u3ma usdz wdp webm webp wmv]
+
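If the automatic extension handling gets in the way, the option can be turned off per command; a sketch, assuming the flag form follows rclone's usual mapping from the config name (the remote name is illustrative):
rclone copy photo.jpg cloudinary:media --cloudinary-adjust-media-files-extensions=false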
--cloudinary-description
Description of the remote.
Properties:
@@ -21755,7 +23116,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
- 8 bytes magic string
RCLONE\x00\x00
- 24 bytes Nonce (IV)
-The initial nonce is generated from the operating systems crypto strong random number generator. The nonce is incremented for each chunk read making sure each nonce is unique for each block written. The chance of a nonce being reused is minuscule. If you wrote an exabyte of data (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of re-using a nonce.
+The initial nonce is generated from the operating systems crypto strong random number generator. The nonce is incremented for each chunk read making sure each nonce is unique for each block written. The chance of a nonce being reused is minuscule. If you wrote an exabyte of data (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of reusing a nonce.
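As a rough back-of-the-envelope check of that figure: an exabyte split into 64 KiB chunks gives the number of nonces consumed, and the birthday bound for a 24 byte (192 bit) nonce then gives the stated collision probability:
\[ n \approx \frac{10^{18}}{2^{16}} \approx 1.5\times10^{13}, \qquad p \approx \frac{n^{2}}{2\cdot 2^{192}} \approx 2\times10^{-32} \]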
Chunk
Each chunk will contain 64 KiB of data, except for the last one which may have less data. The data chunk is in standard NaCl SecretBox format. SecretBox uses XSalsa20 and Poly1305 to encrypt and authenticate messages.
Each chunk contains:
@@ -21765,7 +23126,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big.
This uses a 32 byte (256 bit key) key derived from the user password.
-Examples
+Examples
1 byte file will encrypt to
- 32 bytes header
@@ -21798,7 +23159,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
Key derivation
Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
-SEE ALSO
+SEE ALSO
@@ -22032,10 +23393,136 @@ upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
Metadata
Any metadata supported by the underlying remote is read and written.
See the metadata docs for more info.
+DOI
+The DOI remote is a read-only remote for reading files from digital object identifiers (DOI).
+Currently, the DOI backend supports DOIs hosted with:
+- InvenioRDM
+  - Zenodo
+  - CaltechDATA
+  - Other InvenioRDM repositories
+- Dataverse
+  - Harvard Dataverse
+  - Other Dataverse repositories
+Paths are specified as remote:path
+Paths may be as deep as required, e.g. remote:directory/subdirectory.
+Configuration
+Here is an example of how to make a remote called remote. First run:
+rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+Enter name for new remote.
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / DOI datasets
+ \ (doi)
+[snip]
+Storage> doi
+Option doi.
+The DOI or the doi.org URL.
+Enter a value.
+doi> 10.5281/zenodo.5876941
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Configuration complete.
+Options:
+- type: doi
+- doi: 10.5281/zenodo.5876941
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
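Once configured, the remote can be used like any other read-only remote; for example (the destination path is illustrative):
rclone lsf remote:
rclone copy remote: /local/dataset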
+Standard options
+Here are the Standard options specific to doi (DOI datasets).
+--doi-doi
+The DOI or the doi.org URL.
+Properties:
+
+- Config: doi
+- Env Var: RCLONE_DOI_DOI
+- Type: string
+- Required: true
+
+Advanced options
+Here are the Advanced options specific to doi (DOI datasets).
+--doi-provider
+DOI provider.
+The DOI provider can be set when rclone does not automatically recognize a supported DOI provider.
+Properties:
+
+- Config: provider
+- Env Var: RCLONE_DOI_PROVIDER
+- Type: string
+- Required: false
+- Examples:
+
+- "auto"
+
+- "zenodo"
+
+- "dataverse"
+
+- "invenio"
+
+
+
+--doi-doi-resolver-api-url
+The URL of the DOI resolver API to use.
+The DOI resolver can be set for testing or for cases when the canonical DOI resolver API cannot be used.
+Defaults to "https://doi.org/api".
+Properties:
+
+- Config: doi_resolver_api_url
+- Env Var: RCLONE_DOI_DOI_RESOLVER_API_URL
+- Type: string
+- Required: false
+
+--doi-description
+Description of the remote.
+Properties:
+
+- Config: description
+- Env Var: RCLONE_DOI_DESCRIPTION
+- Type: string
+- Required: false
+
+Backend commands
+Here are the commands specific to the doi backend.
+Run them with
+rclone backend COMMAND remote:
+The help below will explain what arguments each command takes.
+See the backend command for more info on how to pass options and arguments.
+These can be run on a running backend using the rc command backend/command.
+metadata
+Show metadata about the DOI.
+rclone backend metadata remote: [options] [<arguments>+]
+This command returns a JSON object with some information about the DOI.
+rclone backend metadata doi:
+It returns a JSON object representing metadata about the DOI.
+set
+Set command for updating the config parameters.
+rclone backend set remote: [options] [<arguments>+]
+This set command can be used to update the config parameters for a running doi backend.
+Usage Examples:
+rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI
+The option keys are named as they are in the config file.
+This rebuilds the connection to the doi backend when it is called with the new parameters. Only new parameters need be passed as the values will default to those currently in use.
+It doesn't return anything.
Dropbox
Paths are specified as remote:path
Dropbox paths may be as deep as required, e.g. remote:directory/subdirectory.
-Configuration
+Configuration
The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
@@ -22159,7 +23646,61 @@ y/e/d> y
This provides the maximum possible upload speed especially with lots of small files; however, rclone can't check the file got uploaded properly using this mode.
If you are using this mode then using "rclone check" after the transfer completes is recommended. Or you could do an initial transfer with --dropbox-batch-mode async then do a final transfer with --dropbox-batch-mode sync (the default).
Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode.
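A sketch of that two-pass approach (paths are illustrative):
rclone copy /home/source dropbox:backup --dropbox-batch-mode async
rclone copy /home/source dropbox:backup --dropbox-batch-mode sync
rclone check /home/source dropbox:backup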
-Standard options
+Exporting files
+Certain files in Dropbox are "exportable", such as Dropbox Paper documents. These files need to be converted to another format in order to be downloaded. Often multiple formats are available for conversion.
+When rclone downloads an exportable file, it chooses the format to download based on the --dropbox-export-formats setting. By default, the export formats are html,md, which are sensible defaults for Dropbox Paper.
+Rclone chooses the first format ID in the export formats list that Dropbox supports for a given file. If no format in the list is usable, rclone will choose the default format that Dropbox suggests.
+Rclone will change the extension to correspond to the export format. Here are some examples of how extensions are mapped:
+File type      | Filename in Dropbox | Filename in rclone
+Paper          | mydoc.paper         | mydoc.html
+Paper template | mydoc.papert        | mydoc.papert.html
+other          | mydoc               | mydoc.html
+Importing exportable files is not yet supported by rclone.
+Here are the supported export extensions known by rclone. Note that rclone does not currently support formats that are not on this list, even if Dropbox supports them. Also, Dropbox could change the list of supported formats at any time.
+Format ID | Name     | Description
+html      | HTML     | HTML document
+md        | Markdown | Markdown text format
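For example, to prefer Markdown when downloading exported Paper documents (paths are illustrative):
rclone copy dropbox:Documents /tmp/Documents --dropbox-export-formats md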
+Standard options
Here are the Standard options specific to dropbox (Dropbox).
--dropbox-client-id
OAuth Client Id.
@@ -22181,7 +23722,7 @@ y/e/d> y
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to dropbox (Dropbox).
--dropbox-token
OAuth Access Token as a JSON blob.
@@ -22295,6 +23836,39 @@ y/e/d> y
- Type: string
- Required: false
+--dropbox-export-formats
+Comma separated list of preferred formats for exporting files.
+Certain Dropbox files can only be accessed by exporting them to another format. These include Dropbox Paper documents.
+For each such file, rclone will choose the first format on this list that Dropbox considers valid. If none is valid, it will choose Dropbox's default format.
+Known formats include: "html", "md" (markdown)
+Properties:
+
+- Config: export_formats
+- Env Var: RCLONE_DROPBOX_EXPORT_FORMATS
+- Type: CommaSepList
+- Default: html,md
+
+--dropbox-skip-exports
+Skip exportable files in all listings.
+If given, exportable files practically become invisible to rclone.
+Properties:
+
+- Config: skip_exports
+- Env Var: RCLONE_DROPBOX_SKIP_EXPORTS
+- Type: bool
+- Default: false
+
+--dropbox-show-all-exports
+Show all exportable files in listings.
+Adding this flag will allow all exportable files to be server side copied. Note that rclone doesn't add extensions to the exportable file names in this mode.
+Do not use this flag when trying to download exportable files - rclone will fail to download them.
+Properties:
+
+- Config: show_all_exports
+- Env Var: RCLONE_DROPBOX_SHOW_ALL_EXPORTS
+- Type: bool
+- Default: false
+
--dropbox-batch-mode
Upload file batching sync|async|off.
This sets the batch mode used by rclone.
@@ -22348,7 +23922,7 @@ y/e/d> y
- Default: 0s
--dropbox-batch-commit-timeout
-Max time to wait for a batch to finish committing
+Max time to wait for a batch to finish committing. (no longer used)
Properties:
- Config: batch_commit_timeout
@@ -22371,6 +23945,7 @@ y/e/d> y
Some errors may occur if you try to sync copyright-protected files because Dropbox has its own copyright detector that prevents this sort of file being downloaded. This will return the error ERROR : /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/.
If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.
When using rclone link you'll need to set --expire if using a non-personal account otherwise the visibility may not be correct. (Note that --expire isn't supported on personal accounts). See the forum discussion and the dropbox SDK issue.
+Modification times for Dropbox Paper documents are not exact, and may not change for some period after the document is edited. To make sure you get recent changes in a sync, either wait an hour or so, or use --ignore-times to force a full sync.
Get your own Dropbox App ID
When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.
Here is how to create your own Dropbox App ID for rclone:
@@ -22386,7 +23961,7 @@ y/e/d> y
Enterprise File Fabric
This backend supports Storage Made Easy's Enterprise File Fabric™ which provides a software solution to integrate and unify File and Object Storage accessible through a global file system.
-Configuration
+Configuration
The initial setup for the Enterprise File Fabric backend involves getting a token from the Enterprise File Fabric which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
@@ -22480,7 +24055,7 @@ y/e/d> y
120673757,My contacts/
120673761,S3 Storage/
The ID for "S3 Storage" would be 120673761.
-Standard options
+Standard options
Here are the Standard options specific to filefabric (Enterprise File Fabric).
--filefabric-url
URL of the Enterprise File Fabric to connect to.
@@ -22529,7 +24104,7 @@ y/e/d> y
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to filefabric (Enterprise File Fabric).
--filefabric-token
Session Token.
@@ -22581,10 +24156,127 @@ y/e/d> y
- Type: string
- Required: false
+FileLu
+FileLu is a reliable cloud storage provider offering features like secure file uploads, downloads, flexible storage options, and sharing capabilities. With support for high storage limits and seamless integration with rclone, FileLu makes managing files in the cloud easy. Its cross-platform file backup services let you upload and back up files from any internet-connected device.
+Configuration
+Here is an example of how to make a remote called filelu. First, run:
+rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> filelu
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+xx / FileLu Cloud Storage
+ \ "filelu"
+[snip]
+Storage> filelu
+Enter your FileLu Rclone Key:
+key> YOUR_FILELU_RCLONE_KEY RC_xxxxxxxxxxxxxxxxxxxxxxxx
+Configuration complete.
+
+Keep this "filelu" remote?
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Paths
+A path without an initial / will operate in the Rclone directory.
+A path with an initial / will operate at the root where you can see the Rclone directory.
+$ rclone lsf TestFileLu:/
+CCTV/
+Camera/
+Documents/
+Music/
+Photos/
+Rclone/
+Vault/
+Videos/
+Example Commands
+Create a new folder named foldername in the Rclone directory:
+rclone mkdir filelu:foldername
+Delete a folder on FileLu:
+rclone rmdir filelu:/folder/path/
+Delete a file on FileLu:
+rclone delete filelu:/hello.txt
+List files from your FileLu account:
+rclone ls filelu:
+List all folders:
+rclone lsd filelu:
+Copy a specific file to the FileLu root:
+rclone copy D:\hello.txt filelu:
+Copy files from a local directory to a FileLu directory:
+rclone copy D:/local-folder filelu:/remote-folder/path/
+Download a file from FileLu into a local directory:
+rclone copy filelu:/file-path/hello.txt D:/local-folder
+Move files from a local directory to a FileLu directory:
+rclone move D:\local-folder filelu:/remote-path/
+Sync files from a local directory to a FileLu directory:
+rclone sync --interactive D:/local-folder filelu:/remote-path/
+Mount remote to local Linux:
+rclone mount filelu: /root/mnt --vfs-cache-mode full
+Mount remote to local Windows:
+rclone mount filelu: D:/local_mnt --vfs-cache-mode full
+Get storage info about the FileLu account:
+rclone about filelu:
+All the other rclone commands are supported by this backend.
+FolderID instead of folder path
+We use the FolderID instead of the folder name to prevent errors when users have identical folder names or paths. For example, if a user has two or three folders named "test_folders", the system may become confused and won't know which folder to move. In large storage systems, where some clients have hundreds of thousands of folders and a few million files, duplicate folder names or paths are quite common.
+Modification Times and Hashes
+FileLu supports both modification times and MD5 hashes.
+FileLu only supports filenames and folder names up to 255 characters in length, where a character is a Unicode character.
+Duplicated Files
+When uploading and syncing via Rclone, FileLu does not allow uploading duplicate files within the same directory. However, you can upload duplicate files, provided they are in different directories (folders).
+Failure to Log In / Invalid Credentials or Key
+Ensure that you have the correct Rclone key, which can be found in My Account. Every time you toggle Rclone OFF and ON in My Account, a new RC_xxxxxxxxxxxxxxxxxxxx key is generated. Be sure to update your Rclone configuration with the new key.
+If you are connecting to your FileLu remote for the first time and encounter an error such as:
+Failed to create file system for "my-filelu-remote:": couldn't login: Invalid credentials
+Ensure your Rclone Key is correct.
+Process killed
+Accounts with large files or extensive metadata may experience significant memory usage during list/sync operations. Ensure the system running rclone has sufficient memory and CPU to handle these operations.
+Standard options
+Here are the Standard options specific to filelu (FileLu Cloud Storage).
+--filelu-key
+Your FileLu Rclone key from My Account
+Properties:
+
+- Config: key
+- Env Var: RCLONE_FILELU_KEY
+- Type: string
+- Required: true
+
+Advanced options
+Here are the Advanced options specific to filelu (FileLu Cloud Storage).
+--filelu-encoding
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_FILELU_ENCODING
+- Type: Encoding
+- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation
+
+--filelu-description
+Description of the remote.
+Properties:
+
+- Config: description
+- Env Var: RCLONE_FILELU_DESCRIPTION
+- Type: string
+- Required: false
+
+Limitations
+This backend uses a custom library implementing the FileLu API. While it supports file transfers, some advanced features may not yet be available. Please report any issues to the rclone forum for troubleshooting and updates.
+For further information, visit FileLu's website.
Files.com
Files.com is a cloud storage service that provides a secure and easy way to store and share files.
The initial setup for filescom involves authenticating with your Files.com account. You can do this by providing your site subdomain, username, and password. Alternatively, you can authenticate using an API Key from Files.com. rclone config
walks you through it.
-Configuration
+Configuration
Here is an example of how to make a remote called remote. First run:
rclone config
This will guide you through an interactive setup process:
@@ -22653,7 +24345,7 @@ y/e/d> y
rclone ls remote:
Sync /home/local/directory to the remote directory, deleting any excess files in the directory.
rclone sync --interactive /home/local/directory remote:dir
-Standard options
+Standard options
Here are the Standard options specific to filescom (Files.com).
Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com).
@@ -22683,7 +24375,7 @@ y/e/d> y
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to filescom (Files.com).
The API key used to authenticate with Files.com.
@@ -22717,7 +24409,7 @@ y/e/d> y
FTP is the File Transfer Protocol. Rclone FTP support is provided using the github.com/jlaffaye/ftp package.
Limitations of Rclone's FTP backend
Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.
-Configuration
+Configuration
To create an FTP configuration named remote, run
rclone config
Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, see below.
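A minimal saved definition might end up looking like this (host and user are placeholders; the password is stored obscured, which rclone config handles for you):
[remote]
type = ftp
host = ftp.example.com
user = myuser
pass = <obscured password>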
@@ -22833,7 +24525,7 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
This backend's interactive configuration wizard provides a selection of sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd. Just hit a selection number when prompted.
-Standard options
+Standard options
Here are the Standard options specific to ftp (FTP).
--ftp-host
FTP host to connect to.
@@ -22893,7 +24585,7 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
Type: bool
Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to ftp (FTP).
--ftp-concurrency
Maximum number of FTP simultaneous connections, 0 for unlimited.
@@ -23078,7 +24770,7 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
Type: string
Required: false
-Limitations
+Limitations
FTP servers acting as rclone remotes must support passive mode. The mode cannot be configured as passive is the only supported one. Rclone's FTP implementation is not compatible with active mode as the library it uses doesn't support it. This will likely never be supported due to security concerns.
Rclone's FTP backend does not support any checksums but can compare file sizes.
rclone about is not supported by the FTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
@@ -23096,7 +24788,7 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
Gofile is a content storage and distribution platform. Its aim is to provide as much service as possible for free or at a very low price.
The initial setup for Gofile involves logging in to the web interface and going to the "My Profile" section. Copy the "Account API token" for use in the config file.
Note that if you wish to connect rclone to Gofile you will need a premium account.
-Configuration
+Configuration
Here is an example of how to make a remote called remote. First run:
rclone config
This will guide you through an interactive setup process:
@@ -23141,7 +24833,7 @@ y/e/d> y
rclone lsf remote:
To copy a local directory to a Gofile directory called backup
rclone copy /home/source remote:backup
-Modification times and hashes
+Modification times and hashes
Gofile supports modification times with a resolution of 1 second.
Gofile supports MD5 hashes, so you can use the --checksum flag.
Restricted filename characters
@@ -23235,7 +24927,7 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
The ID to use is the part before the ; so you could set
root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0
to restrict rclone to the Files directory.
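The same restriction can also be applied per command with the flag form of this option (using the example ID above):
rclone lsf remote: --gofile-root-folder-id d6341f53-ee65-4f29-9f59-d11e8070b2a0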
-Standard options
+Standard options
Here are the Standard options specific to gofile (Gofile).
--gofile-access-token
API Access token
@@ -23247,7 +24939,7 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to gofile (Gofile).
--gofile-root-folder-id
ID of the root folder
@@ -23298,18 +24990,18 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
Type: string
Required: false
-Limitations
+Limitations
Gofile only supports filenames up to 255 characters in length, where a character is a unicode character.
Directories should not be cached for more than 24h otherwise files in the directory may not be downloadable. In practice this means when using a VFS based rclone command such as rclone mount you should make sure --dir-cache-time is less than 24h.
Note that Gofile is currently limited to a total of 100,000 items. If you attempt to upload more than that you will get error-limit-100000. This limit may be lifted in the future.
-Duplicated files
+Duplicated files
Gofile is capable of having files with duplicated file names. For instance two files called hello.txt in the same directory.
Rclone cannot sync that to a normal file system but it can be fixed with the rclone dedupe command.
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
Use rclone dedupe to fix duplicated files.
Google Cloud Storage
Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
-Configuration
+Configuration
The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -23540,7 +25232,7 @@ ya29.c.c0ASRK0GbAFEewXD [truncated]
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
--gcs-client-id
OAuth Client Id.
@@ -23925,7 +25617,7 @@ ya29.c.c0ASRK0GbAFEewXD [truncated]
-Advanced options
+Advanced options
Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
--gcs-token
OAuth Access Token as a JSON blob.
@@ -24036,13 +25728,13 @@ ya29.c.c0ASRK0GbAFEewXD [truncated]
Type: string
Required: false
-Limitations
+Limitations
rclone about is not supported by the Google Cloud Storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Google Drive
Paths are specified as drive:path
Drive paths may be as deep as required, e.g. drive:directory/subdirectory.
-Configuration
+Configuration
The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -24257,7 +25949,7 @@ trashed=false and 'c' in parents
without --fast-list: 22:05 min
with --fast-list: 58s
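To try a comparison like this yourself, the flag is simply added to a listing-heavy command; a sketch (remote and path are illustrative):
rclone size drive:backup
rclone size drive:backup --fast-list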
-Modification times and hashes
+Modification times and hashes
Google drive stores modification times accurate to 1 ms.
Hash algorithms MD5, SHA1 and SHA256 are supported. Note, however, that a small fraction of files uploaded may not have SHA1 or SHA256 hashes especially if they were uploaded before 2018.
Restricted filename characters
@@ -24551,7 +26243,7 @@ trashed=false and 'c' in parents
-Standard options
+Standard options
Here are the Standard options specific to drive (Google Drive).
--drive-client-id
Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.
@@ -24628,7 +26320,7 @@ trashed=false and 'c' in parents
Type: bool
Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to drive (Google Drive).
--drive-token
OAuth Access Token as a JSON blob.
@@ -25191,7 +26883,7 @@ trashed=false and 'c' in parents
Type: string
Required: false
-Metadata
+Metadata
User metadata is stored in the properties field of the drive object.
Metadata is supported on files and directories.
Here are the possible system metadata items for the drive backend.
@@ -25300,7 +26992,7 @@ trashed=false and 'c' in parents
See the metadata docs for more info.
-Backend commands
+Backend commands
Here are the commands specific to the drive backend.
Run them with
rclone backend COMMAND remote:
@@ -25319,7 +27011,7 @@ rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o ch
"chunk_size": show the current upload chunk size
"service_account_file": show the current service account file
-set
+set
Set command for updating the drive config parameters
rclone backend set remote: [options] [<arguments>+]
This is a set command which will be used to update the various drive config parameters
@@ -25401,6 +27093,17 @@ rclone backend copyid drive: ID1 path1 ID2 path2
The path should end with a / to indicate that the file should be copied as named into this directory. If it doesn't end with a / then the last path component will be used as the file name.
If the destination is a drive backend then server-side copying will be attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
+moveid
+Move files by ID
+rclone backend moveid remote: [options] [<arguments>+]
+This command moves files by ID
+Usage:
+rclone backend moveid drive: ID path
+rclone backend moveid drive: ID1 path1 ID2 path2
+It moves the drive file with ID given to the path (an rclone path which will be passed internally to rclone moveto).
+The path should end with a / to indicate that the file should be moved as named into this directory. If it doesn't end with a / then the last path component will be used as the file name.
+If the destination is a drive backend then server-side moving will be attempted if possible.
+Use the --interactive/-i or --dry-run flag to see what would be moved beforehand.
Dump the export formats for debug purposes
rclone backend exportformats remote: [options] [<arguments>+]
@@ -25452,7 +27155,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2
rclone backend rescue drive: Orphans
Third delete all orphaned files to the trash
rclone backend rescue drive: -o delete
-Limitations
+Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiB/s but lots of small files can take a long time.
Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server-side copies with --disable copy
to download and upload the files if you prefer.
Limitations of Google Docs
@@ -25460,7 +27163,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2
This is because rclone can't find out the size of the Google docs without downloading them.
Google docs will transfer correctly with rclone sync
, rclone copy
etc as rclone knows to ignore the size when doing the transfer.
However an unfortunate consequence of this is that you may not be able to download Google docs using rclone mount. If it doesn't work you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable. Whether it will work or not depends on the application accessing the mount and the OS you are running - experiment to find out if it does work for you!
-Duplicated files
+Duplicated files
Sometimes, for no reason I've been able to track down, drive will duplicate a file that rclone uploads. Drive unlike all the other remotes can have duplicated files.
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
Use rclone dedupe to fix duplicated files.
@@ -25504,7 +27207,8 @@ rclone backend copyid drive: ID1 path1 ID2 path2
Google Photos
The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.
NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.
-Configuration
+NB From March 31, 2025 rclone can only download photos it uploaded. This limitation is due to policy changes at Google. You may need to run rclone config reconnect remote: to make rclone work again after upgrading to rclone v1.70.
+Configuration
The initial setup for google cloud storage involves getting a token from Google Photos which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -25659,7 +27363,7 @@ y/e/d> y
This means that you can use the album path pretty much like a normal filesystem and it is a good target for repeated syncing.
The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.
-Standard options
+Standard options
Here are the Standard options specific to google photos (Google Photos).
--gphotos-client-id
OAuth Client Id.
@@ -25691,7 +27395,7 @@ y/e/d> y
Type: bool
Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to google photos (Google Photos).
--gphotos-token
OAuth Access Token as a JSON blob.
@@ -25767,7 +27471,7 @@ y/e/d> y
--gphotos-proxy
Use the gphotosdl proxy for downloading the full resolution images
The Google API will deliver images and video which aren't full resolution, and/or have EXIF data missing.
-However if you ue the gphotosdl proxy tnen you can download original, unchanged images.
+However if you use the gphotosdl proxy then you can download original, unchanged images.
This runs a headless browser in the background.
Download the software from gphotosdl
First run with
@@ -25844,7 +27548,7 @@ y/e/d> y
Default: 0s
--gphotos-batch-commit-timeout
-Max time to wait for a batch to finish committing
+Max time to wait for a batch to finish committing. (no longer used)
Properties:
- Config: batch_commit_timeout
@@ -25861,8 +27565,9 @@ y/e/d> y
- Type: string
- Required: false
-Limitations
+Limitations
Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.
+NB From March 31, 2025 rclone can only download photos it uploaded. This limitation is due to policy changes at Google. You may need to run rclone config reconnect remote: to make rclone work again after upgrading to rclone v1.70.
Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode.
rclone about is not supported by the Google Photos backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
@@ -25972,7 +27677,7 @@ rclone backend drop Hasher:
rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1
stickyimport is similar to import but works much faster because it does not need to stat existing files and skips initial tree walk. Instead of binding cache entries to file fingerprints it creates sticky entries bound to the file name alone ignoring size, modification time etc. Such hash entries can be replaced only by purge, delete, backend drop or by full re-read/re-write of the files.
Configuration reference
-Standard options
+Standard options
Here are the Standard options specific to hasher (Better checksums for other remotes).
--hasher-remote
Remote to cache checksums for (e.g. myRemote:path).
@@ -26001,7 +27706,7 @@ rclone backend drop Hasher:
Type: Duration
Default: off
-Advanced options
+Advanced options
Here are the Advanced options specific to hasher (Better checksums for other remotes).
--hasher-auto-size
Auto-update checksum for files smaller than this size (disabled by default).
@@ -26021,10 +27726,10 @@ rclone backend drop Hasher:
Type: string
Required: false
-Metadata
+Metadata
Any metadata supported by the underlying remote is read and written.
See the metadata docs for more info.
-Backend commands
+Backend commands
Here are the commands specific to the hasher backend.
Run them with
rclone backend COMMAND remote:
@@ -26079,7 +27784,7 @@ rclone backend drop Hasher:
HDFS
HDFS is a distributed file-system, part of the Apache Hadoop framework.
Paths are specified as remote: or remote:path/to/dir.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -26187,7 +27892,7 @@ username = root
Invalid UTF-8 bytes will also be replaced.
-Standard options
+Standard options
Here are the Standard options specific to hdfs (Hadoop distributed file system).
--hdfs-namenode
Hadoop name nodes and ports.
@@ -26215,7 +27920,7 @@ username = root
-Advanced options
+Advanced options
Here are the Advanced options specific to hdfs (Hadoop distributed file system).
--hdfs-service-principal-name
Kerberos service principal name for the namenode.
@@ -26263,7 +27968,7 @@ username = root
Type: string
Required: false
-Limitations
+Limitations
- No server-side Move or DirMove.
- Checksums not implemented.
@@ -26272,7 +27977,7 @@ username = root
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for hidrive involves getting a token from HiDrive which you need to do in your browser. rclone config walks you through it.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -26340,7 +28045,7 @@ y/e/d> y
Using
rclone config reconnect remote:
the process is very similar to the process of initial setup exemplified before.
-Modification times and hashes
+Modification times and hashes
HiDrive allows modification times to be set on objects accurate to 1 second.
HiDrive supports its own hash type which is used to verify the integrity of file contents after successful transfers.
Restricted filename characters
@@ -26370,7 +28075,7 @@ rclone lsd remote:/users/test/path
By default, rclone will know the number of directory members contained in a directory. For example, rclone lsd uses this information.
The acquisition of this information will result in additional time costs for HiDrive's API. When dealing with large directory structures, it may be desirable to circumvent this time cost, especially when this information is not explicitly needed. For this, the disable_fetching_member_count option can be used.
See the below section about configuration options for more details.
-Standard options
+Standard options
Here are the Standard options specific to hidrive (HiDrive).
--hidrive-client-id
OAuth Client Id.
@@ -26412,7 +28117,7 @@ rclone lsd remote:/users/test/path
-Advanced options
+Advanced options
Here are the Advanced options specific to hidrive (HiDrive).
--hidrive-token
OAuth Access Token as a JSON blob.
@@ -26578,7 +28283,7 @@ rclone lsd remote:/users/test/path
Type: string
Required: false
-Limitations
+Limitations
Symbolic links
HiDrive is able to store symbolic links (symlinks) by design, for example, when unpacked from a zip archive.
There exists no direct mechanism to manage native symlinks in remotes. As such this implementation has chosen to ignore any native symlinks present in the remote. rclone will not be able to access or show any symlinks stored in the hidrive-remote. This means symlinks cannot be individually removed, copied, or moved, except when removing, copying, or moving the parent folder.
@@ -26592,7 +28297,7 @@ rclone lsd remote:/users/test/path
The remote: represents the configured url, and any path following it will be resolved relative to this url, according to the URL standard. This means with remote url https://beta.rclone.org/branch and path fix, the resolved URL will be https://beta.rclone.org/branch/fix, while with path /fix the resolved URL will be https://beta.rclone.org/fix as the absolute path is resolved from the root of the domain.
If the path following the remote: ends with / it will be assumed to point to a directory. If the path does not end with /, then a HEAD request is sent and the response used to decide if it is treated as a file or a directory (run with -vv to see details). When --http-no-head is specified, a path without ending / is always assumed to be a file. If rclone incorrectly assumes the path is a file, the solution is to specify the path with ending /. When you know the path is a directory, ending it with / is always better as it avoids the initial HEAD request.
To just download a single file it is easier to use copyurl.
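For example, a single-file download might look like this (URL and destination are illustrative):
rclone copyurl https://beta.rclone.org/rclone-current-linux-amd64.zip /tmp/rclone.zip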
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -26656,7 +28361,7 @@ e/n/d/r/c/s/q> q
rclone lsd --http-url https://beta.rclone.org :http:
or:
rclone lsd :http,url='https://beta.rclone.org':
-Standard options
+Standard options
Here are the Standard options specific to http (HTTP).
--http-url
URL of HTTP host to connect to.
@@ -26677,7 +28382,7 @@ e/n/d/r/c/s/q> q
Type: bool
Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to http (HTTP).
Set HTTP headers for all transactions.
@@ -26729,14 +28434,14 @@ e/n/d/r/c/s/q> q
Type: string
Required: false
-Backend commands
+Backend commands
Here are the commands specific to the http backend.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the backend command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
-set
+set
Set command for updating the config parameters.
rclone backend set remote: [options] [<arguments>+]
This set command can be used to update the config parameters for a running http backend.
@@ -26747,7 +28452,7 @@ rclone rc backend/command command=set fs=remote: -o url=https://example.com
The option keys are named as they are in the config file.
This rebuilds the connection to the http backend when it is called with the new parameters. Only new parameters need be passed as the values will default to those currently in use.
It doesn't return anything.
-Limitations
+Limitations
rclone about is not supported by the HTTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
ImageKit
@@ -26756,7 +28461,7 @@ rclone rc backend/command command=set fs=remote: -o url=https://example.com
ImageKit.io provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web.
Accounts & Pricing
To use this backend, you need to create an account on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See the pricing details.
-Configuration
+Configuration
Here is an example of making an imagekit configuration.
Firstly create an ImageKit.io account and choose a plan.
You will need to log in and get the publicKey and privateKey for your account from the developer section.
@@ -26823,7 +28528,7 @@ y/e/d> y
ImageKit does not support modification times or hashes yet.
Checksums
No checksums are supported.
-Standard options
+Standard options
Here are the Standard options specific to imagekit (ImageKit.io).
--imagekit-endpoint
You can find your ImageKit.io URL endpoint in your dashboard
@@ -26852,7 +28557,7 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to imagekit (ImageKit.io).
--imagekit-only-signed
If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true.
@@ -26900,7 +28605,7 @@ y/e/d> y
Type: string
Required: false
-Metadata
+Metadata
Any metadata supported by the underlying remote is read and written.
Here are the possible system metadata items for the imagekit backend.
@@ -27002,7 +28707,7 @@ y/e/d> y
See the metadata docs for more info.
iCloud Drive
-Configuration
+Configuration
The initial setup for an iCloud Drive backend involves getting a trust token/session. This can be done by simply using the regular iCloud password, and accepting the code prompt on another iCloud connected device.
IMPORTANT: At the moment an app specific password won't be accepted. Only use your regular password and 2FA.
rclone config walks you through the token creation. The trust token is valid for 30 days, after which you will have to reauthenticate with rclone reconnect or rclone config.
@@ -27047,7 +28752,7 @@ Enter a value.
config_2fa> 2FACODE
Remote config
--------------------
-[koofr]
+[iclouddrive]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -27060,7 +28765,14 @@ d) Delete this remote
y/e/d> y
Advanced Data Protection
ADP is currently unsupported and need to be disabled
-Standard options
+On iPhone, Settings >
Apple Account >
iCloud >
'Access iCloud Data on the Web' must be ON, and 'Advanced Data Protection' OFF.
+Troubleshooting
+Missing PCS cookies from the request
+This means you have Advanced Data Protection (ADP) turned on. ADP is not supported at the moment; if you want to use rclone you will have to turn it off (see above).
+You will need to clear the cookies
and the trust_token
fields in the config. Or you can delete the remote config and start again.
+You should then run rclone reconnect remote:
.
+Note that changing the ADP setting may not take effect immediately - you may need to wait a few hours or a day before you can get rclone to work - keep clearing the config entry and running rclone reconnect remote:
until rclone functions properly.
+Standard options
Here are the Standard options specific to iclouddrive (iCloud Drive).
--iclouddrive-apple-id
Apple ID.
@@ -27099,7 +28811,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to iclouddrive (iCloud Drive).
--iclouddrive-client-id
Client id
@@ -27156,7 +28868,7 @@ y/e/d> y
These auto-created files can be excluded from the sync using metadata filtering.
rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata"
This excludes from the sync any files which have the source=metadata
or format=Metadata
flags which are added to Internet Archive auto-created files.
-Configuration
+Configuration
Here is an example of making an internetarchive configuration. Most of this applies to the other providers as well; any differences are described below.
First run
rclone config
@@ -27225,7 +28937,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-Standard options
+Standard options
Here are the Standard options specific to internetarchive (Internet Archive).
--internetarchive-access-key-id
IAS3 Access Key.
@@ -27247,7 +28959,16 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+--internetarchive-item-derive
+Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload. The derive process produces a number of secondary files from an upload to make it more usable on the web. Setting this to false is useful for uploading files that are already in a format that IA can display, or to reduce the burden on IA's infrastructure.
+Properties:
+
+- Config: item_derive
+- Env Var: RCLONE_INTERNETARCHIVE_ITEM_DERIVE
+- Type: bool
+- Default: true
+
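For example, to upload a file without triggering a derive on the item (the remote and item names here are placeholders):
rclone copy kittens.jpg ia:my-item --internetarchive-item-derive=false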
+Advanced options
Here are the Advanced options specific to internetarchive (Internet Archive).
--internetarchive-endpoint
IAS3 Endpoint.
@@ -27269,6 +28990,15 @@ y/e/d> y
Type: string
Default: "https://archive.org"
+--internetarchive-item-metadata
+Metadata to be set on the IA item. This is different from the file-level metadata that can be set using --metadata-set. The format is key=value, and the 'x-archive-meta-' prefix is added automatically.
+Properties:
+
+- Config: item_metadata
+- Env Var: RCLONE_INTERNETARCHIVE_ITEM_METADATA
+- Type: stringArray
+- Default: []
+
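For example, to set item-level metadata at upload time, repeat the flag once per key (remote, item and values are illustrative only):
rclone copy kittens.jpg ia:my-item --internetarchive-item-metadata title=Kittens --internetarchive-item-metadata collection=test_collection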
--internetarchive-disable-checksum
Don't ask the server to test against MD5 checksum calculated by rclone. Normally rclone will calculate the MD5 checksum of the input before uploading it so it can ask the server to check the object against checksum. This is great for data integrity checking but can cause long delays for large files to start uploading.
Properties:
@@ -27306,7 +29036,7 @@ y/e/d> y
Type: string
Required: false
-Metadata
+Metadata
Metadata fields provided by Internet Archive. If there are multiple values for a key, only the first one is returned. This is a limitation of rclone, which supports only one value per key.
Owner is able to add custom keys. Metadata feature grabs all the keys including them.
Here are the possible system metadata items for the internetarchive backend.
@@ -27453,7 +29183,7 @@ Response: {"error":"invalid_grant","error_description&q
Onlime has sold access to Jottacloud proper, while providing localized support to Danish customers, but has recently set up its own hosting, transferring its customers from Jottacloud servers to its own.
This, of course, necessitates using its servers for authentication, but otherwise functionality and architecture seem equivalent to Jottacloud.
To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication in the setup. The rest of the setup is identical to the default setup.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
with the default setup. First run:
rclone config
This will guide you through an interactive setup process:
@@ -27557,7 +29287,7 @@ y/e/d> y
This backend supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to a long wait time before the first results are shown.
Note also that with rclone version 1.58 and newer, information about MIME types and metadata item utime are not available when using --fast-list
.
-Modification times and hashes
+Modification times and hashes
Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
Jottacloud supports MD5 type hashes, so you can use the --checksum
flag.
Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (in location given by --temp-dir) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag. When uploading from local disk the source checksum is always available, so this does not apply. Starting with rclone version 1.52 the same is true for encrypted remotes (in older versions the crypt backend would not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described above).
@@ -27617,7 +29347,7 @@ y/e/d> y
Versioning can be disabled by the --jottacloud-no-versions
option. This is achieved by deleting the remote file prior to uploading a new version. If the upload fails, no version of the file will be available in the remote.
To view your current quota you can use the rclone about remote:
command which will display your usage limit (unless it is unlimited) and the current usage.
-Standard options
+Standard options
Here are the Standard options specific to jottacloud (Jottacloud).
--jottacloud-client-id
OAuth Client Id.
@@ -27639,7 +29369,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to jottacloud (Jottacloud).
--jottacloud-token
OAuth Access Token as a JSON blob.
@@ -27745,7 +29475,7 @@ y/e/d> y
Type: string
Required: false
-Metadata
+Metadata
Jottacloud has limited support for metadata, currently an extended set of timestamps.
Here are the possible system metadata items for the jottacloud backend.
@@ -27797,16 +29527,16 @@ y/e/d> y
See the metadata docs for more info.
-Limitations
+Limitations
Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file name contains a ?, it will be mapped to the fullwidth ？ instead.
Jottacloud only supports filenames up to 255 characters in length.
-Troubleshooting
+Troubleshooting
Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases.
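For example, assuming a remote named jotta, the trash can be emptied with the cleanup command:
rclone cleanup jotta: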
Koofr
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for Koofr involves creating an application password for rclone. You can do that by opening the Koofr web application, giving the password a nice name like rclone
and clicking on generate.
Here is an example of how to make a remote called koofr
. First run:
rclone config
@@ -27893,7 +29623,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Standard options
+Standard options
Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
--koofr-provider
Choose your storage provider.
@@ -27949,7 +29679,7 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
--koofr-mountid
Mount ID of the mount to use.
@@ -27990,7 +29720,7 @@ y/e/d> y
Type: string
Required: false
-Limitations
+Limitations
Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Providers
Koofr
@@ -28120,7 +29850,7 @@ d) Delete this remote
y/e/d> y
Linkbox
Linkbox is a private cloud drive.
-Configuration
+Configuration
Here is an example of making a remote for Linkbox.
First run:
rclone config
@@ -28156,7 +29886,7 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
-Standard options
+Standard options
Here are the Standard options specific to linkbox (Linkbox).
--linkbox-token
Token from https://www.linkbox.to/admin/account
@@ -28167,7 +29897,7 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to linkbox (Linkbox).
--linkbox-description
Description of the remote.
@@ -28178,7 +29908,7 @@ y/e/d> y
Type: string
Required: false
-Limitations
+Limitations
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Mail.ru Cloud
Mail.ru Cloud is a cloud storage provided by a Russian internet company Mail.Ru Group. The official desktop client is Disk-O:, available on Windows and Mac OS.
@@ -28193,7 +29923,7 @@ y/e/d> y
Storage keeps a hash for all files and performs transparent deduplication; the hash algorithm is a modified SHA1
If a particular file is already present in storage, one can quickly submit the file hash instead of performing a long upload (this optimization is supported by rclone)
-Configuration
+Configuration
Here is an example of making a mailru configuration.
First create a Mail.ru Cloud account and choose a tariff.
You will need to log in and create an app password for rclone. Rclone will not work with your normal username and password - it will give an error like oauth2: server response missing access_token
.
@@ -28202,6 +29932,7 @@ y/e/d> y
Go to Security / "Пароль и безопасность"
Click password for apps / "Пароли для внешних приложений"
Add the password - give it a name - eg "rclone"
+Select the permissions level. For some reason just "Full access to Cloud" (WebDAV) doesn't work with rclone currently; you have to select "Full access to Mail, Cloud and Calendar" (all protocols). (thread on forum.rclone.org)
Copy the password and use this password below - your normal login password won't work.
Now run
@@ -28273,7 +30004,7 @@ y/e/d> y
rclone ls remote:directory
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
-Modification times and hashes
+Modification times and hashes
Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as "Jan 1 1970".
File hashes are supported, with a custom Mail.ru algorithm based on SHA1. If the file size is less than or equal to the SHA1 digest size (20 bytes), its hash is simply its data right-padded with zero bytes. The hash of a larger file is computed as the SHA1 of the file data bytes concatenated with a decimal representation of the data length.
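As an illustration of the scheme just described, here is a short Go sketch (this mirrors the description above; it is not rclone's actual implementation):
package main

import (
	"crypto/sha1"
	"fmt"
	"strconv"
)

// mailruHash follows the description above: data of up to 20 bytes is its
// own hash, right-padded with zeros; larger data hashes to
// SHA1(data || decimal length).
func mailruHash(data []byte) []byte {
	if len(data) <= sha1.Size { // sha1.Size == 20
		padded := make([]byte, sha1.Size)
		copy(padded, data)
		return padded
	}
	h := sha1.New()
	h.Write(data)
	h.Write([]byte(strconv.Itoa(len(data))))
	return h.Sum(nil)
}

func main() {
	fmt.Printf("%x\n", mailruHash([]byte("tiny")))
	fmt.Printf("%x\n", mailruHash([]byte("this input is longer than twenty bytes")))
}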
Emptying Trash
@@ -28334,7 +30065,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to mailru (Mail.ru Cloud).
--mailru-client-id
OAuth Client Id.
@@ -28397,7 +30128,7 @@ y/e/d> y
-Advanced options
+Advanced options
Here are the Advanced options specific to mailru (Mail.ru Cloud).
--mailru-token
OAuth Access Token as a JSON blob.
@@ -28575,15 +30306,15 @@ y/e/d> y
Type: string
Required: false
-Limitations
+Limitations
File size limits depend on your account. A single file size is limited to 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits.
Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
-Mega
+Mega
Mega is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded. This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption.
This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -28630,7 +30361,7 @@ y/e/d> y
rclone ls remote:
To copy a local directory to an Mega directory called backup
rclone copy /home/source remote:backup
-Modification times and hashes
+Modification times and hashes
Mega does not support modification times or hashes yet.
Restricted filename characters
@@ -28655,7 +30386,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Duplicated files
+Duplicated files
Mega can have two files with exactly the same name and path (unlike a normal file system).
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
Use rclone dedupe
to fix duplicated files.
@@ -28684,7 +30415,7 @@ me@example.com:/$
Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 min, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though.
Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum.
So, if rclone was working nicely and you are suddenly unable to log in, and you are sure the user and the password are correct, it is likely that the remote has been blocked for a while.
-Standard options
+Standard options
Here are the Standard options specific to mega (Mega).
--mega-user
User name.
@@ -28705,7 +30436,7 @@ me@example.com:/$
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to mega (Mega).
--mega-debug
Output more debug from Mega.
@@ -28756,15 +30487,15 @@ me@example.com:/$
Type: string
Required: false
-Process killed
+Process killed
On accounts with large files or large numbers of objects, memory usage can increase significantly when executing list/sync operations. When running on cloud providers (like AWS with EC2), check if the instance type has sufficient memory/CPU to execute the commands. Use the resource monitoring tools to inspect after sending the commands. Look at this issue.
-Limitations
+Limitations
This backend uses the go-mega library, an open source Go library implementing the Mega API. There doesn't appear to be any documentation for the Mega protocol beyond the Mega C++ SDK source code, so there are likely quite a few errors still remaining in this library.
Mega allows duplicate files which may confuse rclone.
Memory
The memory backend is an in-RAM backend. It does not persist its data - use the local backend for that.
The memory backend behaves like a bucket-based remote (e.g. like s3). Because it has no parameters you can just use it with the :memory:
remote name.
-Configuration
+Configuration
You can configure it as a remote like this with rclone config
too if you want to:
No remotes found, make a new one?
n) New remote
@@ -28796,11 +30527,11 @@ y/e/d> y
rclone mount :memory: /mnt/tmp
rclone serve webdav :memory:
rclone serve sftp :memory:
-Modification times and hashes
+Modification times and hashes
The memory backend supports MD5 hashes and modification times accurate to 1 ns.
Restricted filename characters
The memory backend replaces the default restricted characters set.
-Advanced options
+Advanced options
Here are the Advanced options specific to memory (In memory object storage system.).
--memory-description
Description of the remote.
@@ -28815,7 +30546,7 @@ rclone serve sftp :memory:
Paths are specified as remote:
You may put subdirectories in too, e.g. remote:/path/to/dir
. If you have a CP code you can use that as the folder after the domain such as <domain>/<cpcode>/<internal directories within cpcode>.
For example, this is commonly configured with or without a CP code:
- With a CP code: [your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/
- Without a CP code: [your-domain-prefix]-nsu.akamaihd.net
To see all buckets: rclone lsd remote:
The initial setup for NetStorage involves getting an account and secret. Use rclone config
to walk you through the setup process.
-Configuration
+Configuration
Here's an example of how to make a remote called ns1
.
- To begin the interactive configuration process, enter this command:
@@ -28923,7 +30654,7 @@ y/e/d> y
Purge
NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use the quick-delete action for the purge command, and if this functionality is disabled it will fall back to a standard delete method.
Note: Read the NetStorage Usage API for considerations when using "quick-delete". In general, the quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible.
-Standard options
+Standard options
Here are the Standard options specific to netstorage (Akamai NetStorage).
--netstorage-host
Domain+path of NetStorage host to connect to.
@@ -28955,7 +30686,7 @@ y/e/d> y
- Type: string
- Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to netstorage (Akamai NetStorage).
--netstorage-protocol
Select between HTTP or HTTPS protocol.
@@ -28987,7 +30718,7 @@ y/e/d> y
- Type: string
- Required: false
-Backend commands
+Backend commands
Here are the commands specific to the netstorage backend.
Run them with
rclone backend COMMAND remote:
@@ -29004,7 +30735,7 @@ y/e/d> y
The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable.
rclone backend symlink <src> <path>
Microsoft Azure Blob Storage
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:container/path/to/dir
.
-Configuration
+Configuration
Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -29049,7 +30780,7 @@ y/e/d> y
rclone sync --interactive /home/local/directory remote:container
--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
-Modification times and hashes
+Modification times and hashes
The modification time is stored as metadata on the object with the mtime
key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no performance overhead to using it.
If you wish to use the Azure standard LastModified
time stored on the object as the modified time, then use the --use-server-modtime
flag. Note that rclone can't set LastModified
, so using the --update
flag when syncing is recommended if using --use-server-modtime
.
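For example, a sync that relies on the server-set LastModified time instead of the mtime metadata might look like this (paths and container name are placeholders):
rclone sync --update --use-server-modtime /home/local/directory remote:container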
MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, e.g. the local disk.
@@ -29206,7 +30937,7 @@ container/
Anonymous
If you want to access resources with public anonymous access then set account
only. You can do this without making an rclone config:
rclone lsf :azureblob,account=ACCOUNT:CONTAINER
-Standard options
+Standard options
Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).
--azureblob-account
Azure Storage Account Name.
@@ -29302,7 +31033,7 @@ container/
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage).
--azureblob-client-send-certificate-chain
Send the certificate chain when using certificate auth.
@@ -29469,6 +31200,42 @@ container/
- Type: int
- Default: 16
+--azureblob-copy-cutoff
+Cutoff for switching to multipart copy.
+Any files larger than this that need to be server-side copied will be copied in chunks of chunk_size using the put block list API.
+Files smaller than this limit will be copied with the Copy Blob API.
+Properties:
+
+- Config: copy_cutoff
+- Env Var: RCLONE_AZUREBLOB_COPY_CUTOFF
+- Type: SizeSuffix
+- Default: 8Mi
+
+--azureblob-copy-concurrency
+Concurrency for multipart copy.
+This is the number of chunks of the same file that are copied concurrently.
+These chunks are not buffered in memory and Microsoft recommends setting this value to greater than 1000 in the azcopy documentation.
+https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-optimize#increase-concurrency
+In tests, copy speed increases almost linearly with copy concurrency.
+Properties:
+
+- Config: copy_concurrency
+- Env Var: RCLONE_AZUREBLOB_COPY_CONCURRENCY
+- Type: int
+- Default: 512
+
+--azureblob-use-copy-blob
+Whether to use the Copy Blob API when copying to the same storage account.
+If true (the default) then rclone will use the Copy Blob API for copies to the same storage account even when the size is above the copy_cutoff.
+Rclone assumes that the same storage account means the same config and does not check for the same storage account in different configs.
+There should be no need to change this value.
+Properties:
+
+- Config: use_copy_blob
+- Env Var: RCLONE_AZUREBLOB_USE_COPY_BLOB
+- Type: bool
+- Default: true
+
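As a sketch, a server-side copy tuned with the options above might look like this (names and values are illustrative only):
rclone copyto remote:container/src.bin remote:container/dst.bin --azureblob-copy-cutoff 100Mi --azureblob-copy-concurrency 1000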
--azureblob-list-chunk
Size of blob list.
This sets the number of blobs requested in each listing chunk. Default is the maximum, 5000. "List blobs" requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out ( source ). This can be used to limit the number of blobs items to return, to avoid the time out.
@@ -29636,9 +31403,10 @@ container/
- Content-Encoding
- Content-Language
- Content-Type
+- X-MS-Tags
-Eg --header-upload "Content-Type: text/potato"
-Limitations
+Eg --header-upload "Content-Type: text/potato"
or --header-upload "X-MS-Tags: foo=bar"
+Limitations
MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.
rclone about
is not supported by the Microsoft Azure Blob storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
@@ -29648,7 +31416,7 @@ container/
Also, if you want to access a storage emulator instance running on a different machine, you can override the endpoint
parameter in the advanced settings, setting it to http(s)://<host>:<port>/devstoreaccount1
(e.g. http://10.254.2.5:10000/devstoreaccount1
).
Microsoft Azure Files Storage
Paths are specified as remote:
You may put subdirectories in too, e.g. remote:path/to/dir
.
-Configuration
+Configuration
Here is an example of making a Microsoft Azure Files Storage configuration. For a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -29847,6 +31615,7 @@ y/e/d>
Env Auth: 2. Managed Service Identity Credentials
When using Managed Service Identity, if the VM(SS) on which this program is running has a system-assigned identity, it will be used by default. If the resource has no system-assigned identity but exactly one user-assigned identity, the user-assigned identity will be used by default.
If the resource has multiple user-assigned identities you will need to unset env_auth
and set use_msi
instead. See the use_msi
section.
+If you are operating in disconnected clouds, or private clouds such as Azure Stack, you may want to set disable_instance_discovery = true. This determines whether rclone requests Microsoft Entra instance metadata from https://login.microsoft.com/ before authenticating. Setting this to true will skip this request, making you responsible for ensuring the configured authority is valid and trustworthy.
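A minimal config sketch for such an environment (account and share names are placeholders):
[azfiles]
type = azurefiles
account = myaccount
share_name = myshare
env_auth = true
disable_instance_discovery = true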
Credentials created with the az
tool can be picked up using env_auth
.
For example if you were to login with a service principal like this:
@@ -29893,7 +31662,9 @@ y/e/d>
If use_msi
is set then managed service identity credentials are used. This authentication only works when running in an Azure service. env_auth
needs to be unset to use this.
However, if you have multiple user identities to choose from, these must be explicitly specified using exactly one of the msi_object_id
, msi_client_id
, or msi_mi_res_id
parameters.
If none of msi_object_id
, msi_client_id
, or msi_mi_res_id
is set, this is equivalent to using env_auth
.
-Standard options
+Azure CLI tool az
+Set to use the Azure CLI tool az
as the sole means of authentication. Setting this can be useful if you wish to use the az
CLI on a host with a System Managed Identity that you do not want to use. Don't set env_auth
at the same time.
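For example, assuming a remote named azfiles, this can also be enabled for a single command with the corresponding flag:
rclone lsd azfiles: --azurefiles-use-az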
+Standard options
Here are the Standard options specific to azurefiles (Microsoft Azure Files).
--azurefiles-account
Azure Storage Account Name.
@@ -30008,7 +31779,7 @@ y/e/d>
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to azurefiles (Microsoft Azure Files).
--azurefiles-client-send-certificate-chain
Send the certificate chain when using certificate auth.
@@ -30100,6 +31871,24 @@ y/e/d>
- Type: string
- Required: false
+--azurefiles-disable-instance-discovery
+Skip requesting Microsoft Entra instance metadata. This should be set to true only by applications authenticating in disconnected clouds, or private clouds such as Azure Stack. It determines whether rclone requests Microsoft Entra instance metadata from https://login.microsoft.com/ before authenticating. Setting this to true will skip this request, making you responsible for ensuring the configured authority is valid and trustworthy.
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
+--azurefiles-use-az
+Use the Azure CLI tool az for authentication. Set to use the Azure CLI tool az as the sole means of authentication. Setting this can be useful if you wish to use the az CLI on a host with a System Managed Identity that you do not want to use. Don't set env_auth at the same time.
+Properties:
+
+- Config: use_az
+- Env Var: RCLONE_AZUREFILES_USE_AZ
+- Type: bool
+- Default: false
+
--azurefiles-endpoint
Endpoint for the service.
Leave blank normally.
@@ -30178,12 +31967,12 @@ y/e/d>
- Content-Type
Eg --header-upload "Content-Type: text/potato"
-Limitations
+Limitations
MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.
Microsoft OneDrive
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -30306,7 +32095,7 @@ y/e/d> y
When choosing the type of connection, work with the client credentials flow. In particular, the "onedrive" option does not work. You can use the "sharepoint" option, or if that does not find the correct drive ID, type it in manually with the "driveid" option.
NOTE Assigning permissions directly to the application means that anyone with the Client ID and Client Secret can access your OneDrive files. Take care to safeguard these credentials.
-Modification times and hashes
+Modification times and hashes
OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
OneDrive Personal, OneDrive for Business and Sharepoint Server support QuickXorHash.
Before rclone 1.62 the default hash for Onedrive Personal was SHA1
. For rclone 1.62 and above the default for all Onedrive backends is QuickXorHash
.
@@ -30419,7 +32208,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Deleting files
Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.
-Standard options
+Standard options
Here are the Standard options specific to onedrive (Microsoft OneDrive).
--onedrive-client-id
OAuth Client Id.
@@ -30461,7 +32250,7 @@ y/e/d> y
"de"
-- Microsoft Cloud Germany
+- Microsoft Cloud Germany (deprecated - try global region first).
"cn"
@@ -30479,7 +32268,7 @@ y/e/d> y
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to onedrive (Microsoft OneDrive).
--onedrive-token
OAuth Access Token as a JSON blob.
@@ -30520,6 +32309,18 @@ y/e/d> y
Type: bool
Default: false
+--onedrive-upload-cutoff
+Cutoff for switching to chunked upload.
+Any files larger than this will be uploaded in chunks of chunk_size.
+This is disabled by default because uploading with single part uploads causes rclone to use twice the storage on OneDrive Business: when rclone sets the modification time after the upload, OneDrive creates a new version.
+See: https://github.com/rclone/rclone/issues/1716
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_ONEDRIVE_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: off
+
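For example, to allow single part uploads for files below 100 MiB (the value is illustrative; note the versioning caveat above for OneDrive Business):
rclone copy bigfile.iso remote:backup --onedrive-upload-cutoff 100Mi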
--onedrive-chunk-size
Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big." Note that the chunks will be buffered into memory.
@@ -30829,80 +32630,80 @@ rclone rc vfs/refresh recursive=true
Type: string
Required: false
-Metadata
+Metadata
OneDrive supports System Metadata (not User Metadata, as of this writing) for both files and directories. Much of the metadata is read-only, and there are some differences between OneDrive Personal and Business (see table below for details).
Permissions are also supported, if --onedrive-metadata-permissions
is set. The accepted values for --onedrive-metadata-permissions
are "read
", "write
", "read,write
", and "off
" (the default). "write
" supports adding new permissions, updating the "role" of existing permissions, and removing permissions. Updating and removing require the Permission ID to be known, so it is recommended to use "read,write
" instead of "write
" if you wish to update/remove permissions.
Permissions are read/written in JSON format using the same schema as the OneDrive API, which differs slightly between OneDrive Personal and Business.
Example for OneDrive Personal:
-[
- {
- "id": "1234567890ABC!123",
- "grantedTo": {
- "user": {
- "id": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- },
- "invitation": {
- "email": "ryan@contoso.com"
- },
- "link": {
- "webUrl": "https://1drv.ms/t/s!1234567890ABC"
- },
- "roles": [
- "read"
- ],
- "shareId": "s!1234567890ABC"
- }
-]
+[
+ {
+ "id": "1234567890ABC!123",
+ "grantedTo": {
+ "user": {
+ "id": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ },
+ "invitation": {
+ "email": "ryan@contoso.com"
+ },
+ "link": {
+ "webUrl": "https://1drv.ms/t/s!1234567890ABC"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "s!1234567890ABC"
+ }
+]
Example for OneDrive Business:
-[
- {
- "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
- "grantedToIdentities": [
- {
- "user": {
- "displayName": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- }
- ],
- "link": {
- "type": "view",
- "scope": "users",
- "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
- },
- "roles": [
- "read"
- ],
- "shareId": "u!LKj1lkdlals90j1nlkascl"
- },
- {
- "id": "5D33DD65C6932946",
- "grantedTo": {
- "user": {
- "displayName": "John Doe",
- "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
- },
- "application": {},
- "device": {}
- },
- "roles": [
- "owner"
- ],
- "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
- }
-]
+[
+ {
+ "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
+ "grantedToIdentities": [
+ {
+ "user": {
+ "displayName": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ }
+ ],
+ "link": {
+ "type": "view",
+ "scope": "users",
+ "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "u!LKj1lkdlals90j1nlkascl"
+ },
+ {
+ "id": "5D33DD65C6932946",
+ "grantedTo": {
+ "user": {
+ "displayName": "John Doe",
+ "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
+ },
+ "application": {},
+ "device": {}
+ },
+ "roles": [
+ "owner"
+ ],
+ "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
+ }
+]
To write permissions, pass in a "permissions" metadata key using this same format. The --metadata-mapper
tool can be very helpful for this.
When adding permissions, an email address can be provided in the User.ID
or DisplayName
properties of grantedTo
or grantedToIdentities
. Alternatively, an ObjectID can be provided in User.ID
. At least one valid recipient must be provided in order to add a permission for a user. Creating a Public Link is also supported, if Link.Scope
is set to "anonymous"
.
Example request to add a "read" permission with --metadata-mapper
:
-{
- "Metadata": {
- "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
- }
-}
+{
+ "Metadata": {
+ "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
+ }
+}
Note that adding a permission can fail if a conflicting permission already exists for the file/folder.
To update an existing permission, include both the Permission ID and the new roles
to be assigned. roles
is the only property that can be changed.
To remove permissions, pass in a blob containing only the permissions you wish to keep (which can be empty, to remove all.) Note that the owner
role will be ignored, as it cannot be removed.
@@ -31052,7 +32853,27 @@ rclone rc vfs/refresh recursive=true
See the metadata docs for more info.
-Limitations
+Impersonate other users as Admin
+Unlike Google Drive, where any domain user can be impersonated via service accounts, OneDrive requires you to authenticate as an admin account and manually set up a remote per user you wish to impersonate.
+
+- In Microsoft 365 Admin Center, open each user you need to "impersonate" and go to the OneDrive section. There is a heading called "Get access to files"; click it to create the link. This creates a link of the format
https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/
but also changes the permissions so that your admin user has access.
+- Then in powershell run the following commands:
+
+Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
+Import-Module Microsoft.Graph.Files
+Connect-MgGraph -Scopes "Files.ReadWrite.All"
+# Follow the steps to allow access to your admin user
+# Then run this for each user you want to impersonate to get the Drive ID
+Get-MgUserDefaultDrive -UserId '{emailaddress}'
+# This will give you output of the format:
+# Name Id DriveType CreatedDateTime
+# ---- -- --------- ---------------
+# OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm
+
+
+- Then in rclone add a onedrive remote type, and choose the
Type in driveID
option, entering the DriveID you got in the previous step. One remote per user. rclone will then confirm the drive ID, and hopefully give you a message of Found drive "root" of type "business"
followed by a URL of the format https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents (a sketch of the resulting config entry follows this list)
+
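A sketch of the resulting config entry, using the example Drive ID from the output above (rclone config adds the OAuth token itself):
[user-onedrive]
type = onedrive
drive_id = b!XYZ123
drive_type = business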
+Limitations
If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote:
command to get a new token and refresh token.
Naming
Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
@@ -31097,7 +32918,7 @@ rclone rc vfs/refresh recursive=true
rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir
rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir
NB Onedrive personal can't currently delete versions
-Troubleshooting
+Troubleshooting
Excessive throttling or blocked on SharePoint
If you experience excessive throttling or are being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: --user-agent "ISV|rclone.org|rclone/v1.55.1"
The specific details can be found in the Microsoft document: Avoid getting throttled or blocked in SharePoint Online
@@ -31147,7 +32968,7 @@ ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader:
OpenDrive
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -31189,7 +33010,7 @@ y/e/d> y
rclone ls remote:
To copy a local directory to an OpenDrive directory called backup
rclone copy /home/source remote:backup
-Modification times and hashes
+Modification times and hashes
OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
The MD5 hash algorithm is supported.
Restricted filename characters
@@ -31292,7 +33113,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to opendrive (OpenDrive).
--opendrive-username
Username.
@@ -31313,7 +33134,7 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to opendrive (OpenDrive).
--opendrive-encoding
The encoding for the backend.
@@ -31335,6 +33156,30 @@ y/e/d> y
Type: SizeSuffix
Default: 10Mi
+--opendrive-access
+Files and folders will be uploaded with this access permission (default private)
+Properties:
+
+- Config: access
+- Env Var: RCLONE_OPENDRIVE_ACCESS
+- Type: string
+- Default: "private"
+- Examples:
+
+- "private"
+
+- The file or folder access can be granted in a way that will allow select users to view, read or write what is absolutely essential for them.
+
+- "public"
+
+- The file or folder can be downloaded by anyone from a web browser. The link can be shared in any way.
+
+- "hidden"
+
+- The file or folder has the same restrictions as Public, but the user must know the URL of the file or folder link in order to access the contents.
+
+
+
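For example, to upload files so that anyone can download them from a web browser (paths are placeholders):
rclone copy /home/source remote:backup --opendrive-access public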
--opendrive-description
Description of the remote.
Properties:
@@ -31344,7 +33189,7 @@ y/e/d> y
Type: string
Required: false
-Limitations
+Limitations
Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file name contains a ?, it will be mapped to the fullwidth ？ instead.
rclone about
is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
@@ -31358,7 +33203,7 @@ y/e/d> y
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
Sample command to transfer local artifacts to remote:bucket in oracle object storage:
rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv
-Configuration
+Configuration
Here is an example of making an oracle object storage configuration. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -31536,7 +33381,7 @@ namespace = id<redacted>34
compartment = ocid1.compartment.oc1..aa<redacted>ba
region = us-ashburn-1
provider = no_auth
-Modification times and hashes
+Modification times and hashes
The modification time is stored as metadata on the object as opc-meta-mtime
as floating point since the epoch, accurate to 1 ns.
If the modification time needs to be updated, rclone will attempt to perform a server-side copy to update it, provided the object can be copied in a single part. If the object is larger than 5 GB, it will be uploaded rather than copied.
Note that reading this from the object takes an additional HEAD
request as the metadata isn't returned in object listings.
@@ -31549,7 +33394,7 @@ provider = no_auth
Multipart uploads will use --transfers
* --oos-upload-concurrency
* --oos-chunk-size
extra memory. Single part uploads do not use extra memory.
Single part transfers can be faster than multipart transfers or slower depending on your latency from oos - the more latency, the more likely single part transfers will be faster.
Increasing --oos-upload-concurrency
will increase throughput (8 would be a sensible value) and increasing --oos-chunk-size
also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
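For example, applying the values suggested above (bucket name is a placeholder):
rclone sync /data remote:bucket --oos-upload-concurrency 8 --oos-chunk-size 16Mi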
-Standard options
+Standard options
Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
--oos-provider
Choose your Auth Provider
@@ -31665,7 +33510,7 @@ provider = no_auth
-Advanced options
+Advanced options
Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
--oos-storage-tier
The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
@@ -31906,7 +33751,7 @@ provider = no_auth
Type: string
Required: false
-Backend commands
+Backend commands
Here are the commands specific to the oracleobjectstorage backend.
Run them with
rclone backend COMMAND remote:
@@ -31984,7 +33829,7 @@ if not.
QingStor
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
-Configuration
+Configuration
Here is an example of making a QingStor configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -32080,7 +33925,7 @@ y/e/d> y
Restricted filename characters
The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to qingstor (QingCloud Object Storage).
--qingstor-env-auth
Get QingStor credentials from runtime.
@@ -32161,7 +34006,7 @@ y/e/d> y
-Advanced options
+Advanced options
Here are the Advanced options specific to qingstor (QingCloud Object Storage).
--qingstor-connection-retries
Number of connection retries.
@@ -32225,7 +34070,7 @@ y/e/d> y
Type: string
Required: false
-Limitations
+Limitations
rclone about
is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Quatrix
@@ -32234,7 +34079,7 @@ y/e/d> y
Paths may be as deep as required, e.g., remote:directory/subdirectory
.
The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at https://<account>/profile/api-keys
or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.
See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -32318,7 +34163,7 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
-Modification times and hashes
+Modification times and hashes
Quatrix allows modification times to be set on objects accurate to 1 microsecond. These will be used to detect whether objects need syncing or not.
Quatrix does not support hashes, so you cannot use the --checksum
flag.
Restricted filename characters
@@ -32327,7 +34172,7 @@ y/e/d> y
For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all multipart uploads). Chunks are buffered in memory, and the minimal chunk size is 10_000_000 bytes by default; it can be changed in the advanced configuration, so increasing --transfers will increase the memory use. The chunk size has a maximum size limit, which is set to 100_000_000 bytes by default and can be changed in the advanced configuration. The size of the uploaded chunk will change dynamically depending on the upload speed. The total memory use equals the number of transfers multiplied by the minimal chunk size; for example, with --transfers 4 and the default minimal chunk size this is about 40 MB. If there is free memory allocated for the upload (which equals the difference of maximal_summary_chunk_size and minimal_chunk_size * transfers), the chunk size may increase in case of high upload speed, and likewise it can decrease in case of upload speed problems. If no free memory is available, all chunks will equal minimal_chunk_size.
Deleting files
Files you delete with rclone will end up in Trash and be stored there for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account.
-Standard options
+Standard options
Here are the Standard options specific to quatrix (Quatrix by Maytech).
--quatrix-api-key
API key for accessing Quatrix account
@@ -32347,7 +34192,7 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to quatrix (Quatrix by Maytech).
--quatrix-encoding
The encoding for the backend.
@@ -32424,7 +34269,7 @@ y/e/d> y
rclone interacts with Sia network by talking to the Sia daemon via HTTP API which is usually available on port 9980. By default you will run the daemon locally on the same computer so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980
making external access impossible).
However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:
- Ensure you have the Sia daemon installed directly or in a docker container, because Sia-UI does not support this mode natively.
- Run it on an externally accessible port, for example by providing --api-addr :9980 and --disable-api-security arguments on the daemon command line.
- Enforce an API password for the siad daemon via the environment variable SIA_API_PASSWORD or a text file named apipassword in the daemon directory.
- Set the rclone backend option api_password, taking it from the above locations.
Notes:
1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via the command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with the environment variable SIA_WALLET_PASSWORD.
2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store it in the text file named apipassword under YOUR_HOME/.sia/ on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure the password in rclone.
3. The only way to use siad without an API password is to run it on localhost with the command line argument --authorize-api=false, but this is insecure and strongly discouraged.
-Configuration
+Configuration
Here is an example of how to make a sia
remote called mySia
. First, run:
rclone config
This will guide you through an interactive setup process:
@@ -32484,7 +34329,7 @@ y/e/d> y
Upload a local directory to the Sia directory called backup
rclone copy /home/source mySia:backup
-Standard options
+Standard options
Here are the Standard options specific to sia (Sia Decentralized Cloud).
--sia-api-url
Sia daemon API URL, like http://sia.daemon.host:9980.
@@ -32507,7 +34352,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to sia (Sia Decentralized Cloud).
--sia-user-agent
Siad User Agent
@@ -32538,7 +34383,7 @@ y/e/d> y
Type: string
Required: false
-Limitations
+Limitations
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:container/path/to/dir
.
-Configuration
+Configuration
Here is an example of making a swift configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -32696,7 +34541,7 @@ rclone lsd myremote:
--update and --use-server-modtime
As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.
For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update
along with --use-server-modtime
, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
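For example (names are placeholders):
rclone sync --update --use-server-modtime /home/local/directory myremote:container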
-Modification times and hashes
+Modification times and hashes
The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
The MD5 hash algorithm is supported.
@@ -32723,7 +34568,7 @@ rclone lsd myremote:
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
--swift-env-auth
Get swift credentials from environment variables in standard OpenStack form.
@@ -32961,7 +34806,7 @@ rclone lsd myremote:
-Advanced options
+Advanced options
Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
--swift-leave-parts-on-error
If true avoid calling abort upload on a failure.
@@ -33065,9 +34910,9 @@ rclone lsd myremote:
Type: string
Required: false
-Limitations
+Limitations
The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
-Troubleshooting
+Troubleshooting
Rclone gives Failed to create file system for "remote:": Bad Request
Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.
So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies
flag.
@@ -33085,7 +34930,7 @@ rclone lsd myremote:
pCloud
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -33144,7 +34989,7 @@ y/e/d> y
rclone ls remote:
To copy a local directory to a pCloud directory called backup
rclone copy /home/source remote:backup
-Modification times and hashes
+Modification times and hashes
pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a modification time, pCloud requires the object to be re-uploaded.
pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 hashes in the EU region, so you can use the --checksum
flag.
Restricted filename characters
@@ -33176,7 +35021,7 @@ y/e/d> y
However you can set this to restrict rclone to a specific folder hierarchy.
In order to do this you will have to find the Folder ID
of the directory you wish rclone to display. This will be the folder
field of the URL when you open the relevant folder in the pCloud web interface.
So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid
in the browser, then you use 5xxxxxxxx8
as the root_folder_id
in the config.
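A config sketch using the folder ID from the example URL above:
[remote]
type = pcloud
root_folder_id = 5xxxxxxxx8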
-Standard options
+Standard options
Here are the Standard options specific to pcloud (Pcloud).
--pcloud-client-id
OAuth Client Id.
@@ -33198,7 +35043,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to pcloud (Pcloud).
--pcloud-token
OAuth Access Token as a JSON blob.
@@ -33311,7 +35156,7 @@ y/e/d> y
PikPak
PikPak is a private cloud drive.
Paths are specified as remote:path
, and may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
Here is an example of making a remote for PikPak.
First run:
rclone config
@@ -33364,10 +35209,10 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-Modification times and hashes
+Modification times and hashes
PikPak keeps modification times on objects, and updates them when uploading objects, but it does not support changing only the modification time.
The MD5 hash algorithm is supported.
-Standard options
+Standard options
Here are the Standard options specific to pikpak (PikPak).
--pikpak-user
Pikpak username.
@@ -33388,7 +35233,7 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to pikpak (PikPak).
--pikpak-device-id
Device ID used for authorization.
@@ -33503,7 +35348,7 @@ y/e/d> y
Type: string
Required: false
-Backend commands
+Backend commands
Here are the commands specific to the pikpak backend.
Run them with
rclone backend COMMAND remote:
@@ -33531,7 +35376,7 @@ rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
-Limitations
+Limitations
Hashes may be empty
PikPak supports MD5 hashes, but they are sometimes empty, especially for user-uploaded files.
Deleted files still visible with trashed-only
@@ -33609,7 +35454,7 @@ e/n/d/r/c/s/q> q
rclone lsf Pixeldrain: --dirs-only -Fpi
This will print directories in your Pixeldrain
home directory and their public IDs.
Enter this directory ID in the rclone config and you will be able to access the directory.
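For illustration, a sketch of the resulting config section (the directory ID abc123 is made up):
[Pixeldrain]
type = pixeldrain
api_key = <your API key>
root_folder_id = abc123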
-Standard options
+Standard options
Here are the Standard options specific to pixeldrain (Pixeldrain Filesystem).
--pixeldrain-api-key
API key for your pixeldrain account. Found on https://pixeldrain.com/user/api_keys.
@@ -33630,7 +35475,7 @@ e/n/d/r/c/s/q> q
Type: string
Default: "me"
-Advanced options
+Advanced options
Here are the Advanced options specific to pixeldrain (Pixeldrain Filesystem).
--pixeldrain-api-url
The API endpoint to connect to. In the vast majority of cases it's fine to leave this at default. It is only intended to be changed for testing purposes.
@@ -33650,7 +35495,7 @@ e/n/d/r/c/s/q> q
Type: string
Required: false
-
+
Pixeldrain supports file modes and creation times.
Here are the possible system metadata items for the pixeldrain backend.
@@ -33698,7 +35543,7 @@ e/n/d/r/c/s/q> q
premiumize.me
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -33749,7 +35594,7 @@ y/e/d>
rclone ls remote:
To copy a local directory to a premiumize.me directory called backup
rclone copy /home/source remote:backup
-Modification times and hashes
+Modification times and hashes
premiumize.me does not support modification times or hashes, therefore syncing will default to --size-only
checking. Note that using --update
will work.
Restricted filename characters
In addition to the default restricted characters set the following characters are also replaced:
@@ -33775,7 +35620,7 @@ y/e/d>
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to premiumizeme (premiumize.me).
--premiumizeme-client-id
OAuth Client Id.
@@ -33807,7 +35652,7 @@ y/e/d>
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to premiumizeme (premiumize.me).
--premiumizeme-token
OAuth Access Token as a JSON blob.
@@ -33867,7 +35712,7 @@ y/e/d>
Type: string
Required: false
-Limitations
+Limitations
Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
premiumize.me file names can't have the \
or "
characters in. rclone maps these to and from identical looking unicode equivalents \
and "
premiumize.me only supports filenames up to 255 characters in length.
@@ -33929,18 +35774,18 @@ y/e/d> y
rclone ls remote:
To copy a local directory to a Proton Drive directory called backup
rclone copy /home/source remote:backup
-Modification times and hashes
+Modification times and hashes
Proton Drive Bridge does not support updating modification times yet.
The SHA1 hash algorithm is supported.
Restricted filename characters
Invalid UTF-8 bytes will be replaced, also left and right spaces will be removed (code reference)
-Duplicated files
+Duplicated files
Proton Drive cannot have two files with exactly the same name and path. If a conflict occurs, then depending on the advanced config, the file might or might not be overwritten.
Please set your mailbox password in the advanced config section.
Caching
The cache is currently built for the case where rclone is the only instance performing operations on the mount point. The event system, which is the proton API system that provides visibility of what has changed on the drive, is yet to be implemented, so updates from other clients won't be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, stale data may be served from the cache.
-Standard options
+Standard options
Here are the Standard options specific to protondrive (Proton Drive).
--protondrive-username
The username of your proton account
@@ -33972,7 +35817,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to protondrive (Proton Drive).
--protondrive-mailbox-password
The mailbox password of your two-password proton account.
@@ -34084,7 +35929,7 @@ y/e/d> y
Type: string
Required: false
-Limitations
+Limitations
This backend uses the Proton-API-Bridge, which is based on go-proton-api, a fork of the official repo.
There is no official API documentation available from Proton Drive. But, thanks to Proton open sourcing proton-go-api and the web, iOS, and Android client codebases, we don't need to completely reverse engineer the APIs by observing the web client traffic!
proton-go-api provides the basic building blocks of API calls and error handling, such as 429 exponential back-off, but it is pretty much just a barebone interface to the Proton API. For example, the encryption and decryption of the Proton Drive file are not provided in this library.
@@ -34092,7 +35937,7 @@ y/e/d> y
put.io
Paths are specified as remote:path
put.io paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -34176,7 +36021,7 @@ e/n/d/r/c/s/q> q
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to putio (Put.io).
--putio-client-id
OAuth Client Id.
@@ -34198,7 +36043,7 @@ e/n/d/r/c/s/q> q
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to putio (Put.io).
--putio-token
OAuth Access Token as a JSON blob.
@@ -34258,7 +36103,7 @@ e/n/d/r/c/s/q> q
Type: string
Required: false
-Limitations
+Limitations
put.io has rate limiting. When you hit a limit, rclone automatically retries after waiting the amount of time requested by the server.
If you want to avoid ever hitting these limits, you may use the --tpslimit
flag with a low number. Note that the imposed limits may be different for different operations, and may change over time.
Proton Drive
@@ -34319,18 +36164,18 @@ y/e/d> y
rclone ls remote:
To copy a local directory to a Proton Drive directory called backup
rclone copy /home/source remote:backup
-Modification times and hashes
+Modification times and hashes
Proton Drive Bridge does not support updating modification times yet.
The SHA1 hash algorithm is supported.
Restricted filename characters
Invalid UTF-8 bytes will be replaced, also left and right spaces will be removed (code reference)
-Duplicated files
+Duplicated files
Proton Drive cannot have two files with exactly the same name and path. If a conflict occurs, then depending on the advanced config, the file might or might not be overwritten.
Please set your mailbox password in the advanced config section.
Caching
The cache is currently built for the case where rclone is the only instance performing operations on the mount point. The event system, which is the proton API system that provides visibility of what has changed on the drive, is yet to be implemented, so updates from other clients won't be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, stale data may be served from the cache.
-Standard options
+Standard options
Here are the Standard options specific to protondrive (Proton Drive).
--protondrive-username
The username of your proton account
@@ -34362,7 +36207,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to protondrive (Proton Drive).
--protondrive-mailbox-password
The mailbox password of your two-password proton account.
@@ -34474,14 +36319,14 @@ y/e/d> y
Type: string
Required: false
-Limitations
+Limitations
This backend uses the Proton-API-Bridge, which is based on go-proton-api, a fork of the official repo.
There is no official API documentation available from Proton Drive. But, thanks to Proton open sourcing proton-go-api and the web, iOS, and Android client codebases, we don't need to completely reverse engineer the APIs by observing the web client traffic!
proton-go-api provides the basic building blocks of API calls and error handling, such as 429 exponential back-off, but it is pretty much just a barebone interface to the Proton API. For example, the encryption and decryption of the Proton Drive file are not provided in this library.
The Proton-API-Bridge attempts to bridge the gap so that rclone can be built on top of it quickly. This codebase handles the intricate tasks before and after calling Proton APIs, particularly the complex encryption scheme, allowing developers to implement features for other software on top of this codebase. There are likely quite a few errors in this library, as there isn't official documentation available.
Seafile
This is a backend for the Seafile storage service:
- It works with both the free community edition and the professional edition.
- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users.
- Using a Library API Token is not supported.
-Configuration
+Configuration
There are two distinct modes in which you can set up your remote:
- You point your remote to the root of the server, meaning you don't specify a library during the configuration: Paths are specified as remote:library
. You may put subdirectories in too, e.g. remote:library/path/to/dir
.
- You point your remote to a specific library during the configuration: Paths are specified as remote:path/to/dir
. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode.)
Configuration in root mode
Here is an example of making a seafile configuration for a user with no two-factor authentication. First run
@@ -34682,7 +36527,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
It has been actively developed using the seafile docker image of these versions:
- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition
- 9.0.10 community edition
Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.
Each new version of rclone
is automatically tested against the latest docker image of the seafile community server.
-Standard options
+Standard options
Here are the Standard options specific to seafile (seafile).
--seafile-url
URL of seafile host to connect to.
@@ -34758,7 +36603,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to seafile (seafile).
--seafile-create-library
Should rclone create a library if it doesn't exist.
@@ -34799,7 +36644,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory. For example, rclone lsd remote:
would list the home directory of the user configured in the rclone remote config (i.e. /home/sftpuser
). However, rclone lsd remote:/
would list the root directory of the remote machine (i.e. /
)
Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net and Hetzner, on the other hand, require users to OMIT the leading /.
Note that by default rclone will try to execute shell commands on the server, see shell access considerations.
-Configuration
+Configuration
Here is an example of making an SFTP configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -34936,14 +36781,14 @@ known_hosts_file = ~/.ssh/known_hosts
The options md5sum_command
and sha1_command
can be used to customize the command to be executed for calculation of checksums. You can for example set a specific path to where md5sum and sha1sum executables are located, or use them to specify some other tools that print checksums in compatible format. The value can include command-line arguments, or even shell script blocks as with PowerShell. Rclone has subcommands md5sum and sha1sum that use compatible format, which means if you have an rclone executable on the server it can be used. As mentioned above, they will be automatically picked up if found in PATH, but if not you can set something like /path/to/rclone md5sum
as the value of option md5sum_command
to make sure a specific executable is used.
Remote checksumming is recommended and enabled by default. The first time rclone uses an SFTP remote, if the options md5sum_command
or sha1_command
are not set, it will check if any of the default commands for each of them, as described above, can be used. The result will be saved in the remote configuration, so next time it will use the same. Value none
will be set if none of the default commands could be used for a specific algorithm, and this algorithm will not be supported by the remote.
Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote shell commands is prohibited. Set the configuration option disable_hashcheck
to true
to disable checksumming entirely, or set shell_type
to none
to disable all functionality based on remote shell command execution.
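For example, a sketch of the relevant config entries, assuming an rclone binary is available at /path/to/rclone on the server (host and paths are illustrative):
[remote]
type = sftp
host = example.com
md5sum_command = /path/to/rclone md5sum
sha1_command = /path/to/rclone sha1sum
# or, to disable remote checksumming entirely:
# disable_hashcheck = true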
-Modification times and hashes
+Modification times and hashes
Modified times are stored on the server to 1 second precision.
Modified times are used in syncing and are fully supported.
Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false
in your rclone backend configuration to disable this behaviour.
About command
The about
command returns the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote.
SFTP usually supports the about command, but it depends on the server. If the server implements the vendor-specific VFS statistics extension, which is normally the case with OpenSSH instances, it will be used. If not, but the same login has access to a Unix shell, where the df
command is available (e.g. in the remote's PATH), then this will be used instead. If the server shell is PowerShell, probably with a Windows OpenSSH server, rclone will use a built-in shell command (see shell access). If none of the above is applicable, about
will fail.
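To check whether a given server supports it, simply run:
rclone about remote: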
-Standard options
+Standard options
Here are the Standard options specific to sftp (SSH/SFTP).
--sftp-host
SSH host to connect to.
@@ -35107,7 +36952,7 @@ known_hosts_file = ~/.ssh/known_hosts
Type: SpaceSepList
Default:
-Advanced options
+Advanced options
Here are the Advanced options specific to sftp (SSH/SFTP).
--sftp-known-hosts-file
Optional path to known_hosts file.
@@ -35408,6 +37253,16 @@ server_command = sudo /usr/libexec/openssh/sftp-server
Type: string
Required: false
+--sftp-http-proxy
+URL for HTTP CONNECT proxy
+Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
+Properties:
+
+- Config: http_proxy
+- Env Var: RCLONE_SFTP_HTTP_PROXY
+- Type: string
+- Required: false
+
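+For example (the proxy URL is illustrative only):
+rclone lsd remote: --sftp-http-proxy http://proxy.example.com:3128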
--sftp-copy-is-hardlink
Set to enable server side copies using hardlinks.
The SFTP protocol does not define a copy command so normally server side copies are not allowed with the sftp backend.
@@ -35431,8 +37286,8 @@ server_command = sudo /usr/libexec/openssh/sftp-server
Type: string
Required: false
-Limitations
-On some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck
is a good idea.
+Limitations
+On some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. You can either use --sftp-path-override
or disable_hashcheck
.
The only ssh agent supported under Windows is Putty's pageant.
The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher
setting in the configuration file to true
. Further details on the insecurity of this cipher can be found in this paper.
SFTP isn't supported under plan9 until this issue is fixed.
@@ -35452,7 +37307,7 @@ server_command = sudo /usr/libexec/openssh/sftp-server
The first path segment must be the name of the share, which you entered when you started to share on Windows. On smbd, it's the section title in smb.conf
(usually in /etc/samba/
) file. You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:
).
You can't access shared printers from rclone, obviously.
You can't use Anonymous access for logging in. You have to use the guest
user with an empty password instead. The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods. Alternatively, the local backend on Windows can access SMB servers using UNC paths, by \\server\share
. This doesn't apply to non-Windows OSes, such as Linux and macOS.
-Configuration
+Configuration
Here is an example of making a SMB configuration.
First run
rclone config
@@ -35527,7 +37382,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> d
-Standard options
+Standard options
Here are the Standard options specific to smb (SMB / CIFS).
--smb-host
SMB server hostname to connect to.
@@ -35588,7 +37443,17 @@ y/e/d> d
Type: string
Required: false
-Advanced options
+--smb-use-kerberos
+Use Kerberos authentication.
+If set, rclone will use Kerberos authentication instead of NTLM. This requires a valid Kerberos configuration and credentials cache to be available, either in the default locations or as specified by the KRB5_CONFIG and KRB5CCNAME environment variables.
+Properties:
+
+- Config: use_kerberos
+- Env Var: RCLONE_SMB_USE_KERBEROS
+- Type: bool
+- Default: false
+
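+As an illustrative sketch, assuming a ticket has already been obtained with kinit and a remote named mysmb (both illustrative):
+export KRB5CCNAME=/tmp/krb5cc_1000
+rclone lsd mysmb: --smb-use-kerberos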
+Advanced options
Here are the Advanced options specific to smb (SMB / CIFS).
--smb-idle-timeout
Max time before closing idle connections.
@@ -35699,7 +37564,7 @@ y/e/d> d
S3 backend: secret encryption key is shared with the gateway
-Configuration
+Configuration
To make a new Storj configuration you need one of the following:
- Access Grant that someone else shared with you.
- API Key of a Storj project you are a member of.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -35796,7 +37661,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-Standard options
+Standard options
Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).
--storj-provider
Choose an authentication method.
@@ -35875,7 +37740,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to storj (Storj Decentralized Cloud Storage).
--storj-description
Description of the remote.
@@ -35940,7 +37805,7 @@ y/e/d> y
rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
Or even between another cloud storage and Storj.
rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
-Limitations
+Limitations
rclone about
is not supported by the rclone Storj backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Known issues
@@ -35948,7 +37813,7 @@ y/e/d> y
To fix these, please raise your system limits. You can do this by issuing a ulimit -n 65536
just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc
, or change the system-wide configuration, usually /etc/sysctl.conf
and/or /etc/security/limits.conf
, but please refer to your operating system manual.
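For example, a minimal sketch of what could go in $HOME/.bashrc:
# raise the open file limit before running rclone
ulimit -n 65536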
SugarSync
SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.
-Configuration
+Configuration
The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -36013,7 +37878,7 @@ y/e/d> y
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
NB you can't create files in the top level folder; you have to create a folder, which rclone will create as a "Sync Folder" with SugarSync.
-Modification times and hashes
+Modification times and hashes
SugarSync does not support modification times or hashes, therefore syncing will default to --size-only
checking. Note that using --update
will work as rclone can read the time files were uploaded.
Restricted filename characters
SugarSync replaces the default restricted characters set except for DEL.
@@ -36021,7 +37886,7 @@ y/e/d> y
Deleting files
Deleted files will be moved to the "Deleted items" folder by default.
However you can supply the flag --sugarsync-hard-delete
or set the config parameter hard_delete = true
if you would like files to be deleted straight away.
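For example, to delete a directory's files bypassing the "Deleted items" folder (remote and path illustrative):
rclone delete remote:dir --sugarsync-hard-delete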
-Standard options
+Standard options
Here are the Standard options specific to sugarsync (Sugarsync).
--sugarsync-app-id
Sugarsync App ID.
@@ -36062,7 +37927,7 @@ y/e/d> y
Type: bool
Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to sugarsync (Sugarsync).
--sugarsync-refresh-token
Sugarsync refresh token.
@@ -36143,14 +38008,14 @@ y/e/d> y
Type: string
Required: false
-Limitations
+Limitations
rclone about
is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Uloz.to
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for Uloz.to involves filling in the user credentials. rclone config
walks you through it.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -36210,7 +38075,7 @@ y/e/d> y
rclone copy /home/source remote:backup
User credentials
The only reliable method is to authenticate the user using username and password. Uloz.to offers an API key as well, but it's reserved for the use of Uloz.to's in-house application and using it in different circumstances is unreliable.
-Modification times and hashes
+Modification times and hashes
Uloz.to doesn't allow the user to set a custom modification time, or retrieve the hashes after upload. As a result, the integration uses a free form field the API provides to encode client-provided timestamps and hashes. Timestamps are stored with microsecond precision.
A server calculated MD5 hash of the file is verified upon upload. Afterwards, the backend only serves the client-side calculated hashes. Hashes can also be retrieved upon creating a file download link, but it's impractical for list
-like use cases.
Restricted filename characters
@@ -36243,7 +38108,7 @@ y/e/d> y
In order to do this you will have to find the Folder slug
of the folder you wish to use as root. This will be the last segment of the URL when you open the relevant folder in the Uloz.to web interface.
For example, for exploring a folder with URL https://uloz.to/fm/my-files/foobar
, foobar
should be used as the root slug.
root_folder_slug
can be used alongside a specific path in the remote path. For example, if your remote's root_folder_slug
corresponds to /foo/bar
, remote:baz/qux
will refer to ABSOLUTE_ULOZTO_ROOT/foo/bar/baz/qux
.
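For illustration, a sketch of a config section using the slug from the example above:
[remote]
type = ulozto
root_folder_slug = foobar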
-Standard options
+Standard options
Here are the Standard options specific to ulozto (Uloz.to).
--ulozto-app-token
The application token identifying the app. An app API key can be either found in the API doc https://uloz.to/upload-resumable-api-beta or obtained from customer service.
@@ -36273,7 +38138,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to ulozto (Uloz.to).
--ulozto-root-folder-slug
If set, rclone will use this folder as the root folder for all operations. For example, if the slug identifies 'foo/bar/', 'ulozto:baz' is equivalent to 'ulozto:foo/bar/baz' without any root slug set.
@@ -36312,7 +38177,7 @@ y/e/d> y
Type: string
Required: false
-Limitations
+Limitations
Uloz.to file names can't have the \
character in. rclone maps this to and from an identical looking unicode equivalent \
(U+FF3C Fullwidth Reverse Solidus).
Uloz.to only supports filenames up to 255 characters in length.
Uloz.to rate limits access to the API, but exact details are undisclosed. Practical testing reveals that hitting the rate limit during normal use is very rare, although not impossible with a higher number of concurrently uploaded files.
@@ -36322,7 +38187,7 @@ y/e/d> y
This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long term storage.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
To configure an Uptobox backend you'll need your personal API token. You'll find it in your account settings.
Here is an example of how to make a remote called remote
with the default setup. First run:
rclone config
@@ -36376,7 +38241,7 @@ y/e/d>
rclone ls remote:
To copy a local directory to an Uptobox directory called backup
rclone copy /home/source remote:backup
-Modification times and hashes
+Modification times and hashes
Uptobox supports neither modified times nor checksums. All timestamps will read as that set by --default-time
.
Restricted filename characters
In addition to the default restricted characters set the following characters are also replaced:
@@ -36402,7 +38267,7 @@ y/e/d>
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Standard options
+Standard options
Here are the Standard options specific to uptobox (Uptobox).
--uptobox-access-token
Your access token.
@@ -36414,7 +38279,7 @@ y/e/d>
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to uptobox (Uptobox).
--uptobox-private
Set to make uploaded files private
@@ -36444,7 +38309,7 @@ y/e/d>
Type: string
Required: false
-Limitations
+Limitations
Uptobox will delete inactive files that have not been accessed in 60 days.
rclone about
is not supported by this backend; an overview of used space can, however, be seen in the Uptobox web interface.
Union
@@ -36458,7 +38323,7 @@ y/e/d>
Subfolders can be used in upstream remotes. Assume a union remote named backup
with the remotes mydrive:private/backup
. Invoking rclone mkdir backup:desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop
.
There is no special handling of paths containing ..
segments. Invoking rclone mkdir backup:../desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop
.
-Configuration
+Configuration
Here is an example of how to make a union called remote
for local folders. First run:
rclone config
This will guide you through an interactive setup process:
@@ -36691,7 +38556,7 @@ upstreams = /local:writeback remote:dir
When files are written, they will be written to both remote:dir
and /local
.
As many remotes as desired can be added to upstreams
but there should only be one :writeback
tag.
Rclone does not manage the :writeback
remote in any way other than writing files back to it. So if you need to expire old files or manage the size then you will have to do this yourself.
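For example, one possible (illustrative) way to expire files older than 30 days from the /local writeback upstream using rclone itself:
rclone delete --min-age 30d /local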
-Standard options
+Standard options
Here are the Standard options specific to union (Union merges the contents of several upstream fs).
--union-upstreams
List of space separated upstreams.
@@ -36740,7 +38605,7 @@ upstreams = /local:writeback remote:dir
Type: int
Default: 120
-Advanced options
+Advanced options
Here are the Advanced options specific to union (Union merges the contents of several upstream fs).
--union-min-free-space
Minimum viable free space for lfs/eplfs policies.
@@ -36761,13 +38626,13 @@ upstreams = /local:writeback remote:dir
Type: string
Required: false
-
+
Any metadata supported by the underlying remote is read and written.
See the metadata docs for more info.
WebDAV
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -36841,10 +38706,10 @@ y/e/d> y
rclone ls remote:
To copy a local directory to a WebDAV directory called backup
rclone copy /home/source remote:backup
-Modification times and hashes
-Plain WebDAV does not support modified times. However when used with Fastmail Files, Owncloud or Nextcloud rclone will support modified times.
-Likewise plain WebDAV does not support hashes, however when used with Fastmail Files, Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
-Standard options
+Modification times and hashes
+Plain WebDAV does not support modified times. However when used with Fastmail Files, ownCloud or Nextcloud rclone will support modified times.
+Likewise plain WebDAV does not support hashes, however when used with Fastmail Files, ownCloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of ownCloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
+Standard options
Here are the Standard options specific to webdav (WebDAV).
--webdav-url
URL of http host to connect to.
@@ -36876,7 +38741,11 @@ y/e/d> y
"owncloud"
-- Owncloud
+- Owncloud 10 PHP based WebDAV server
+
+"infinitescale"
+
"sharepoint"
@@ -36925,7 +38794,7 @@ y/e/d> y
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to webdav (WebDAV).
--webdav-bearer-token-command
Command to run to get a bearer token.
@@ -37033,11 +38902,14 @@ y/e/d> y
Fastmail Files
Use https://webdav.fastmail.com/
or a subdirectory as the URL, and your Fastmail email username@domain.tld
as the username. Follow this documentation to create an app password with access to Files (WebDAV)
and use this as the password.
Fastmail supports modified times using the X-OC-Mtime
header.
-Owncloud
+ownCloud
Click on the settings cog in the bottom right of the page and this will show the WebDAV URL that rclone needs in the config step. It will look something like https://example.com/remote.php/webdav/
.
-Owncloud supports modified times using the X-OC-Mtime
header.
+ownCloud supports modified times using the X-OC-Mtime
header.
Nextcloud
-This is configured in an identical way to Owncloud. Note that Nextcloud initially did not support streaming of files (rcat
) whereas Owncloud did, but this seems to be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud Server v19).
+This is configured in an identical way to ownCloud. Note that Nextcloud initially did not support streaming of files (rcat
) whereas ownCloud did, but this seems to be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud Server v19).
+ownCloud Infinite Scale
+The WebDAV URL for Infinite Scale can be found in the details panel of any space in Infinite Scale, if this display has been enabled in the user's personal settings through a checkbox there.
+Infinite Scale works with the chunking tus upload protocol. The chunk size is currently fixed at 10 MB.
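+For illustration, a sketch of a config section for an Infinite Scale instance (URL, user and password are placeholders; take the real URL from the details panel as described above):
+[ocis]
+type = webdav
+url = https://ocis.example.com/dav/spaces/<space-id>
+vendor = infinitescale
+user = alice
+pass = <password obscured with rclone obscure>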
Sharepoint Online
Rclone can be used with Sharepoint provided by OneDrive for Business or Office365 Education Accounts. This feature is only needed for a few of these Accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner github#1975
This means that these accounts can't be added using the official API (other Accounts should work with the "onedrive" option). However, it is possible to access them using webdav.
@@ -37106,7 +38978,7 @@ vendor = other
bearer_token_command = oidc-token XDC
Yandex Disk
Yandex Disk is a cloud storage solution created by Yandex.
-Configuration
+Configuration
Here is an example of making a yandex configuration. First run
rclone config
This will guide you through an interactive setup process:
@@ -37161,7 +39033,7 @@ y/e/d> y
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
Yandex paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Modification times and hashes
+Modification times and hashes
Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified
in RFC3339 with nanoseconds format.
The MD5 hash algorithm is natively supported by Yandex Disk.
Emptying Trash
@@ -37171,7 +39043,7 @@ y/e/d> y
Restricted filename characters
The default restricted characters set are replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to yandex (Yandex Disk).
--yandex-client-id
OAuth Client Id.
@@ -37193,7 +39065,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to yandex (Yandex Disk).
--yandex-token
OAuth Access Token as a JSON blob.
@@ -37271,13 +39143,13 @@ y/e/d> y
Type: string
Required: false
-Limitations
+Limitations
When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout
parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers
errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m
, that is --timeout 60m
.
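For example, for a 30 GiB upload (path illustrative):
rclone copy --timeout 60m /path/to/30GiB-file remote: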
Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
Zoho Workdrive
Zoho WorkDrive is a cloud storage solution created by Zoho.
-Configuration
+Configuration
Here is an example of making a zoho configuration. First run
rclone config
This will guide you through an interactive setup process:
@@ -37350,14 +39222,14 @@ y/e/d>
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
Zoho paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Modification times and hashes
+Modification times and hashes
Modified times are currently not supported for Zoho Workdrive.
No hash algorithms are supported.
To view your current quota you can use the rclone about remote:
command which will display your current usage.
Restricted filename characters
Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.
-Standard options
+Standard options
Here are the Standard options specific to zoho (Zoho).
--zoho-client-id
OAuth Client Id.
@@ -37416,7 +39288,7 @@ y/e/d>
-Advanced options
+Advanced options
Here are the Advanced options specific to zoho (Zoho).
--zoho-token
OAuth Access Token as a JSON blob.
@@ -37497,7 +39369,7 @@ y/e/d>
Local paths are specified as normal filesystem paths, e.g. /path/to/wherever
, so
rclone sync --interactive /home/source /tmp/destination
Will sync /home/source
to /tmp/destination
.
-Configuration
+Configuration
For consistency's sake one can also configure a remote of type local
in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever
, but it is probably easier not to.
Modification times
Rclone reads and writes the modification times using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
@@ -37874,7 +39746,7 @@ $ tree /tmp/c
0 file2
NB Rclone (like most unix tools such as du
, rsync
and tar
) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.
-Advanced options
+Advanced options
Here are the Advanced options specific to local (Local Disk).
--local-nounc
Disable UNC (long path names) conversion on Windows.
@@ -38098,7 +39970,7 @@ $ tree /tmp/c
Type: string
Required: false
-
+
Depending on which OS is in use the local backend may return only some of the system metadata. Setting system metadata is supported on all OSes but setting user metadata is only supported on linux, freebsd, netbsd, macOS and Solaris. It is not supported on Windows yet (see pkg/attrs#47).
User metadata is stored as extended attributes (which may not be supported by all file systems) under the "user.*" prefix.
Metadata is supported on files and directories.
@@ -38173,7 +40045,7 @@ $ tree /tmp/c
See the metadata docs for more info.
-Backend commands
+Backend commands
Here are the commands specific to the local backend.
Run them with
rclone backend COMMAND remote:
@@ -38190,6 +40062,292 @@ $ tree /tmp/c
"error": return an error based on option value
Changelog
+v1.70.0 - 2025-06-17
+See commits
+
+- New backends
+
+- DOI (Flora Thiebaut)
+- FileLu (kingston125)
+- New S3 providers:
+
+
+- New commands
+
+- convmv: for moving and transforming files (nielash)
+
+- New Features
+
+- Add
--max-connections
to control maximum backend concurrency (Nick Craig-Wood)
+- Add
--max-buffer-memory
to limit total buffer memory usage (Nick Craig-Wood)
+- Add transform library and
--name-transform
flag (nielash)
+- sync: Implement
--list-cutoff
to allow on disk sorting for reduced memory use (Nick Craig-Wood)
+- accounting: Add listed stat for number of directory entries listed (Nick Craig-Wood)
+- backend: Skip hash calculation when the hashType is None (Oleksiy Stashok)
+- build
+
+- Update to go1.24 and make go1.22 the minimum required version (Nick Craig-Wood)
+- Disable docker builds on PRs & add missing dockerfile changes (Anagh Kumar Baranwal)
+- Modernize Go usage (Nick Craig-Wood)
+- Update all dependencies (Nick Craig-Wood)
+
+- cmd/authorize: Show required arguments in help text (simwai)
+- cmd/config: add
--no-output
option (Jess)
+- cmd/gitannex
+
+- Tweak parsing of "rcloneremotename" config (Dan McArdle)
+- Permit remotes with options (Dan McArdle)
+- Reject unknown layout modes in INITREMOTE (Dan McArdle)
+
+- docker image: Add label org.opencontainers.image.source for release notes in Renovate dependency updates (Robin Schneider)
+- doc fixes (albertony, Andrew Kreimer, Ben Boeckel, Christoph Berger, Danny Garside, Dimitri Papadopoulos, eccoisle, Ed Craig-Wood, Fernando Fernández, jack, Jeff Geerling, Jugal Kishore, kingston125, luzpaz, Markus Gerstel, Matt Ickstadt, Michael Kebe, Nick Craig-Wood, PrathameshLakawade, Ser-Bul, simonmcnair, Tim White, Zachary Vorhies)
+- filter:
+
+- Add
--hash-filter
to deterministically select a subset of files (Nick Craig-Wood)
+- Show
--min-size
and --max-size
in --dump
filters (Nick Craig-Wood)
+
+- hash: Add SHA512 support for file hashes (Enduriel)
+- http servers: Add
--user-from-header
to use for authentication (Moises Lima)
+- lib/batcher: Deprecate unused option: batch_commit_timeout (Dan McArdle)
+- log:
+
+- Remove github.com/sirupsen/logrus and replace with log/slog (Nick Craig-Wood)
+- Add
--windows-event-log-level
to support Windows Event Log (Nick Craig-Wood)
+
+- rc
+
+- Add
short
parameter to core/stats
to not return transferring and checking (Nick Craig-Wood)
+- In
options/info
make FieldName contain a "." if it should be nested (Nick Craig-Wood)
+- Add rc control for serve commands (Nick Craig-Wood)
+- rcserver: Improve content-type check (Jonathan Giannuzzi)
+
+- serve nfs
+
+- Update docs to note Windows is not supported (Zachary Vorhies)
+- Change the format of
--nfs-cache-type symlink
file handles (Nick Craig-Wood)
+- Make metadata files have special file handles (Nick Craig-Wood)
+
+- touch: Make touch obey
--transfers
(Nick Craig-Wood)
+- version: Add
--deps
flag to show dependencies and other build info (Nick Craig-Wood)
+
+- Bug Fixes
+
+- serve s3:
+
+- Fix ListObjectsV2 response (fhuber)
+- Remove redundant handler initialization (Tho Neyugn)
+
+- stats: Fix goroutine leak and improve stats accounting process (Nathanael Demacon)
+
+- VFS
+
+- Add
--vfs-metadata-extension
to expose metadata sidecar files (Nick Craig-Wood)
+
+- Azure Blob
+
+- Add support for
x-ms-tags
header (Trevor Starick)
+- Cleanup uncommitted blocks on upload errors (Nick Craig-Wood)
+- Speed up server side copies for small files (Nick Craig-Wood)
+- Implement multipart server side copy (Nick Craig-Wood)
+- Remove uncommitted blocks on InvalidBlobOrBlock error (Nick Craig-Wood)
+- Fix handling of objects with // in (Nick Craig-Wood)
+- Handle retry error codes more carefully (Nick Craig-Wood)
+- Fix errors not being retried when doing single part copy (Nick Craig-Wood)
+- Fix multipart server side copies of 0 sized files (Nick Craig-Wood)
+
+- Azurefiles
+
+- Add
--azurefiles-use-az
and --azurefiles-disable-instance-discovery
(b-wimmer)
+
+- B2
+
+- Add SkipDestructive handling to backend commands (Pat Patterson)
+- Use file id from listing when not presented in headers (ahxxm)
+
+- Cloudinary
+
+- Automatically add/remove known media files extensions (yuval-cloudinary)
+- Var naming convention (yuval-cloudinary)
+
+- Drive
+
+- Added
backend moveid
command (Spencer McCullough)
+
+- Dropbox
+
+- Support Dropbox Paper (Dave Vasilevsky)
+
+- FTP
+
+- Add
--ftp-http-proxy
to connect via HTTP CONNECT proxy
+
+- Gofile
+
+- Update to use new direct upload endpoint (wbulot)
+
+- Googlephotos
+
+- Update read only and read write scopes to meet Google's requirements. (Germán Casares)
+
+- Iclouddrive
+
+- Fix panic and files potentially downloaded twice (Clément Wehrung)
+
+- Internetarchive
+
+- Add
--internetarchive-metadata="key=value"
for setting item metadata (Corentin Barreau)
+
+- Onedrive
+
+- Fix "The upload session was not found" errors (Nick Craig-Wood)
+- Re-add
--onedrive-upload-cutoff
flag (Nick Craig-Wood)
+- Fix crash if no metadata was updated (Nick Craig-Wood)
+
+- Opendrive
+
+- Added
--opendrive-access
flag to handle permissions (Joel K Biju)
+
+- Pcloud
+
+- Fix "Access denied. You do not have permissions to perform this operation" on large uploads (Nick Craig-Wood)
+
+- S3
+
+- Fix handling of objects with // in (Nick Craig-Wood)
+- Add IBM IAM signer (Alexander Minbaev)
+- Split the GCS quirks into
--s3-use-x-id
and --s3-sign-accept-encoding
(Nick Craig-Wood)
+- Implement paged listing interface ListP (Nick Craig-Wood)
+- Add Pure Storage FlashBlade provider support (Jeremy Daer)
+- Require custom endpoint for Lyve Cloud v2 support (PrathameshLakawade)
+- MEGA S4 support (Nick Craig-Wood)
+
+- SFTP
+
+- Add
--sftp-http-proxy
to connect via HTTP CONNECT proxy (Nick Craig-Wood)
+
+- Smb
+
+- Add support for kerberos authentication (Jonathan Giannuzzi)
+- Improve connection pooling efficiency (Jonathan Giannuzzi)
+
+- WebDAV
+
+- Retry propfind on 425 status (Jörn Friedrich Dreyer)
+- Add an ownCloud Infinite Scale vendor that enables tus chunked upload support (Klaas Freitag)
+
+
+v1.69.3 - 2025-05-21
+See commits
+
+- Bug Fixes
+
+- build: Reapply update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix CVE-2025-30204 (dependabot[bot])
+- build: Update github.com/ebitengine/purego to work around bug in go1.24.3 (Nick Craig-Wood)
+
+
+v1.69.2 - 2025-05-01
+See commits
+
+- Bug fixes
+
+- accounting: Fix percentDiff calculation -- (Anagh Kumar Baranwal)
+- build
+
+- Update github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2 to fix CVE-2025-30204 (dependabot[bot])
+- Update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix CVE-2025-30204 (dependabot[bot])
+- Update golang.org/x/crypto to v0.35.0 to fix CVE-2025-22869 (Nick Craig-Wood)
+- Update golang.org/x/net from 0.36.0 to 0.38.0 to fix CVE-2025-22870 (dependabot[bot])
+- Update golang.org/x/net to 0.36.0. to fix CVE-2025-22869 (dependabot[bot])
+- Stop building with go < go1.23 as security updates forbade it (Nick Craig-Wood)
+- Fix docker plugin build (Anagh Kumar Baranwal)
+
+- cmd: Fix crash if rclone is invoked without any arguments (Janne Hellsten)
+- config: Read configuration passwords from stdin even when terminated with EOF (Samantha Bowen)
+- doc fixes (Andrew Kreimer, Danny Garside, eccoisle, Ed Craig-Wood, emyarod, jack, Jugal Kishore, Markus Gerstel, Michael Kebe, Nick Craig-Wood, simonmcnair, simwai, Zachary Vorhies)
+- fs: Fix corruption of SizeSuffix with "B" suffix in config (eg --min-size) (Nick Craig-Wood)
+- lib/http: Fix race between Serve() and Shutdown() (Nick Craig-Wood)
+- object: Fix memory object out of bounds Seek (Nick Craig-Wood)
+- operations: Fix call fmt.Errorf with wrong err (alingse)
+- rc
+
+- Disable the metrics server when running
rclone rc
(hiddenmarten)
+- Fix debug/* commands not being available over unix sockets (Nick Craig-Wood)
+
+- serve nfs: Fix unlikely crash (Nick Craig-Wood)
+- stats: Fix the speed not getting updated after a pause in the processing (Anagh Kumar Baranwal)
+- sync
+
+- Fix cpu spinning when empty directory finding with leading slashes (Nick Craig-Wood)
+- Copy dir modtimes even when copyEmptySrcDirs is false (ll3006)
+
+
+- VFS
+
+- Fix directory cache serving stale data (Lorenz Brun)
+- Fix inefficient directory caching when directory reads are slow (huanghaojun)
+- Fix integration test failures (Nick Craig-Wood)
+
+- Drive
+
+- Metadata: fix error when setting copy-requires-writer-permission on a folder (Nick Craig-Wood)
+
+- Dropbox
+
+- Retry link without expiry (Dave Vasilevsky)
+
+- HTTP
+
+- Correct root if definitely pointing to a file (nielash)
+
+- Iclouddrive
+
+- Fix so created files are writable (Ben Alex)
+
+- Onedrive
+
+- Fix metadata ordering in permissions (Nick Craig-Wood)
+
+
+v1.69.1 - 2025-02-14
+See commits
+
+- Bug Fixes
+
+- lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
+- bisync: Fix listings missing concurrent modifications (nielash)
+- serve s3: Fix list objects encoding-type (Nick Craig-Wood)
+- fs: Fix confusing "didn't find section in config file" error (Nick Craig-Wood)
+- doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
+- build: Added parallel docker builds and caching for go build in the container (Anagh Kumar Baranwal)
+
+- VFS
+
+- Fix the cache failing to upload symlinks when
--links
was specified (Nick Craig-Wood)
+- Fix race detected by race detector (Nick Craig-Wood)
+- Close the change notify channel on Shutdown (izouxv)
+
+- B2
+
+- Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
+
+- Iclouddrive
+
+- Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
+
+- Onedrive
+
+- Mark German (de) region as deprecated (Nick Craig-Wood)
+
+- S3
+
+- Added new storage class to magalu provider (Bruno Fernandes)
+- Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
+- Add latest Linode Object Storage endpoints (jbagwell-akamai)
+
+
v1.69.0 - 2025-01-12
See commits
@@ -38232,7 +40390,7 @@ $ tree /tmp/c
- http servers: Disable automatic authentication skipping for unix sockets in http servers (Moises Lima)
- This was making it impossible to use unix sockets with a proxy
-- This might now cause rclone to need authenticaton where it didn't before
+- This might now cause rclone to need authentication where it didn't before
- oauthutil: add support for OAuth client credential flow (Martin Hassack, Nick Craig-Wood)
- operations: make log messages consistent for mkdir/rmdir at INFO level (Nick Craig-Wood)
@@ -39272,7 +41430,7 @@ $ tree /tmp/c
- Refactor version info and icon resource handling on windows (albertony)
doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri Papadopoulos, Herby Gillot, Joda Stößer, Manoj Ghosh, Nick Craig-Wood)
-Implement --metadata-mapper
to transform metatadata with a user supplied program (Nick Craig-Wood)
+Implement --metadata-mapper
to transform metadata with a user supplied program (Nick Craig-Wood)
Add ChunkWriterDoesntSeek
feature flag and set it for b2 (Nick Craig-Wood)
lib/http: Export basic go string functions for use in --template
(Gabriel Espinoza)
makefile: Use POSIX compatible install arguments (Mina Galić)
@@ -39447,7 +41605,7 @@ $ tree /tmp/c
B2
- Fix multipart upload: corrupted on transfer: sizes differ XXX vs 0 (Nick Craig-Wood)
-- Fix locking window when getting mutipart upload URL (Nick Craig-Wood)
+- Fix locking window when getting multipart upload URL (Nick Craig-Wood)
- Fix server side copies greater than 4GB (Nick Craig-Wood)
- Fix chunked streaming uploads (Nick Craig-Wood)
- Reduce default
--b2-upload-concurrency
to 4 to reduce memory usage (Nick Craig-Wood)
@@ -46522,7 +48680,7 @@ $ tree /tmp/c
- Project started
Bugs and Limitations
-Limitations
+Limitations
Directory timestamps aren't preserved on some backends
As of v1.66
, rclone supports syncing directory modtimes, if the backend supports it. Some backends do not support it -- see overview for a complete list. Additionally, note that empty directories are not synced by default (this can be enabled with --create-empty-src-dirs
.)
Rclone struggles with millions of files in a directory/bucket
@@ -46545,6 +48703,11 @@ $ tree /tmp/c
See the remote setup docs for more info.
This has now been documented in its own remote setup page.
+How can I get rid of the "Config file not found" notice?
+If you see a notice like 'NOTICE: Config file "rclone.conf" not found', this means you have not configured any remotes.
+If you need to configure a remote, see the config help docs.
+If you are using rclone entirely with on-the-fly remotes, you can create an empty config file to get rid of this notice, for example:
+rclone config touch
Can rclone sync directly from drive to s3
Rclone can sync between two remote cloud storage systems just fine.
Note that it effectively downloads the file and uploads it again, so the node running rclone would need to have lots of bandwidth.
@@ -46623,7 +48786,10 @@ yyyy/mm/dd hh:mm:ss Fatal error: config failed to refresh token: failed to start
Rclone is using too much memory or appears to have a memory leak
Rclone is written in Go which uses a garbage collector. The default settings for the garbage collector mean that it runs when the heap size has doubled.
However it is possible to tune the garbage collector to use less memory by setting GOGC to a lower value, say export GOGC=20
. This will make the garbage collector work harder, reducing memory size at the expense of CPU usage.
-The most common cause of rclone using lots of memory is a single directory with millions of files in. Rclone has to load this entirely into memory as rclone objects. Each rclone object takes 0.5k-1k of memory. There is a workaround for this which involves a bit of scripting.
+The most common cause of rclone using lots of memory is a single directory with millions of files in.
+Before rclone v1.70, rclone had to load this entirely into memory as rclone objects. Each rclone object takes 0.5k-1k of memory. There is a workaround for this which involves a bit of scripting.
+However with rclone v1.70 and later rclone will automatically save directory entries to disk when a directory with more than --list-cutoff
(1,000,000 by default) entries is detected.
+From v1.70 rclone also has the --max-buffer-memory flag which helps particularly when multi-thread transfers are using too much memory.
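+For example, a sketch combining these options (the values are illustrative and should be tuned to your workload):
+GOGC=20 rclone sync --list-cutoff 500000 --max-buffer-memory 1G source:dir dest:dir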
Rclone changes fullwidth Unicode punctuation marks in file names
For example: On a Windows system, you have a file with name Test:1.jpg
, where :
is the Unicode fullwidth colon symbol. When using rclone to copy this to your Google Drive, you will notice that the file gets renamed to Test:1.jpg
, where :
is the regular (halfwidth) colon.
The reason for such renames is the way rclone handles different restricted filenames on different cloud storage systems. It tries to avoid ambiguous file names as much as possible and to allow moving files between many cloud storage systems transparently, by replacing invalid characters with similar looking Unicode characters when transferring to one storage system, and replacing them back again when transferring to a different storage system where the original characters are supported. When the same Unicode characters are intentionally used in file names, this replacement strategy leads to unwanted renames. Read more here.
@@ -47448,7 +49614,6 @@ THE SOFTWARE.
ben-ba benjamin.brauner@gmx.de
Eli Orzitzer e_orz@yahoo.com
Anthony Metzidis anthony.metzidis@gmail.com
-emyarod afw5059@gmail.com
keongalvin keongalvin@gmail.com
rarspace01 rarspace01@users.noreply.github.com
Paul Stern paulstern45@gmail.com
@@ -47564,6 +49729,64 @@ THE SOFTWARE.
ToM thomas.faucher@bibliosansfrontieres.org
TAKEI Yuya 853320+takei-yuya@users.noreply.github.com
Francesco Frassinelli fraph24@gmail.com francesco.frassinelli@nina.no
+Matt Ickstadt mattico8@gmail.com matt@beckenterprises.com
+Spencer McCullough mccullough.spencer@gmail.com
+Jonathan Giannuzzi jonathan@giannuzzi.me
+Christoph Berger github@christophberger.com
+Tim White tim.white@su.org.au
+Robin Schneider robin.schneider@stackit.cloud
+izouxv izouxv@users.noreply.github.com
+Moises Lima mozlima@users.noreply.github.com
+Bruno Fernandes bruno.fernandes1996@hotmail.com
+Corentin Barreau corentin@archive.org
+hiddenmarten hiddenmarten@gmail.com
+Trevor Starick trevor.starick@gmail.com
+b-wimmer 132347192+b-wimmer@users.noreply.github.com
+Jess jess@jessie.cafe
+Zachary Vorhies zachvorhies@protonmail.com
+Alexander Minbaev minbaev@gmail.com
+Joel K Biju joelkbiju18@gmail.com
+ll3006 doublel3006@gmail.com
+jbagwell-akamai 113531113+jbagwell-akamai@users.noreply.github.com
+Michael Kebe michael.kebe@gmail.com
+Lorenz Brun lorenz@brun.one
+Dave Vasilevsky djvasi@gmail.com dave@vasilevsky.ca
+luzpaz luzpaz@users.noreply.github.com
+jack 9480542+jackusm@users.noreply.github.com
+Jörn Friedrich Dreyer jfd@butonic.de
+alingse alingse@foxmail.com
+Fernando Fernández ferferga@hotmail.com
+eccoisle 167755281+eccoisle@users.noreply.github.com
+Klaas Freitag kraft@freisturz.de
+Danny Garside dannygarside@outlook.com
+Samantha Bowen sam@bbowen.net
+simonmcnair 101189766+simonmcnair@users.noreply.github.com
+huanghaojun jasen.huang@ugreen.com
+Enduriel endur1el@protonmail.com
+Markus Gerstel markus.gerstel@osirium.com
+simwai 16225108+simwai@users.noreply.github.com
+Ben Alex ben.alex@acegi.com.au
+Klaas Freitag opensource@freisturz.de klaas.freitag@kiteworks.com
+Andrew Kreimer algonell@gmail.com
+Ed Craig-Wood 138211970+edc-w@users.noreply.github.com
+Christian Richter crichter@owncloud.com 1058116+dragonchaser@users.noreply.github.com
+Ralf Haferkamp r.haferkamp@opencloud.eu
+Jugal Kishore me@devjugal.com
+Tho Neyugn nguyentruongtho@users.noreply.github.com
+Ben Boeckel mathstuf@users.noreply.github.com
+Clément Wehrung cwehrung@nurves.com
+Jeff Geerling geerlingguy@mac.com
+Germán Casares german.casares.march+github@gmail.com
+fhuber florian.huber@noris.de
+wbulot wbulot@hotmail.com
+Jeremy Daer jeremydaer@gmail.com
+Oleksiy Stashok ostashok@tesla.com
+PrathameshLakawade prathameshlakawade@gmail.com
+Nathanael Demacon 7271496+quantumsheep@users.noreply.github.com
+ahxxm ahxxm@users.noreply.github.com
+Flora Thiebaut johann.thiebaut@gmail.com
+kingston125 support@filelu.com
+Ser-Bul 30335009+Ser-Bul@users.noreply.github.com
Forum
diff --git a/MANUAL.md b/MANUAL.md
index 8d2f405fe..ed3de7dfb 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,7 +1,79 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Jan 12, 2025
+% Jun 17, 2025
+# NAME
+
+rclone - manage files on cloud storage
+
+# SYNOPSIS
+
+```
+Usage:
+ rclone [flags]
+ rclone [command]
+
+Available commands:
+ about Get quota information from the remote.
+ authorize Remote authorization.
+ backend Run a backend-specific command.
+ bisync Perform bidirectional synchronization between two paths.
+ cat Concatenates any files and sends them to stdout.
+ check Checks the files in the source and destination match.
+ checksum Checks the files in the destination against a SUM file.
+ cleanup Clean up the remote if possible.
+ completion Output completion script for a given shell.
+ config Enter an interactive configuration session.
+ convmv Convert file and directory names in place.
+ copy Copy files from source to dest, skipping identical files.
+ copyto Copy files from source to dest, skipping identical files.
+ copyurl Copy the contents of the URL supplied to dest:path.
+ cryptcheck Cryptcheck checks the integrity of an encrypted remote.
+ cryptdecode Cryptdecode returns unencrypted file names.
+ dedupe Interactively find duplicate filenames and delete/rename them.
+ delete Remove the files in path.
+ deletefile Remove a single file from remote.
+ gendocs Output markdown docs for rclone to the directory supplied.
+ gitannex Speaks with git-annex over stdin/stdout.
+ hashsum Produces a hashsum file for all the objects in the path.
+ help Show help for rclone commands, flags and backends.
+ link Generate public link to file/folder.
+ listremotes List all the remotes in the config file and defined in environment variables.
+ ls List the objects in the path with size and path.
+ lsd List all directories/containers/buckets in the path.
+ lsf List directories and objects in remote:path formatted for parsing.
+ lsjson List directories and objects in the path in JSON format.
+ lsl List the objects in path with modification time, size and path.
+ md5sum Produces an md5sum file for all the objects in the path.
+ mkdir Make the path if it doesn't already exist.
+ mount Mount the remote as a file system on a mountpoint.
+ move Move files from source to dest.
+ moveto Move file or directory from source to dest.
+ ncdu Explore a remote with a text based user interface.
+ nfsmount Mount the remote as a file system on a mountpoint.
+ obscure Obscure password for use in the rclone config file.
+ purge Remove the path and all of its contents.
+ rc Run a command against a running rclone.
+ rcat Copies standard input to file on remote.
+ rcd Run rclone listening to remote control commands only.
+ rmdir Remove the empty directory at path.
+ rmdirs Remove empty directories under the path.
+ selfupdate Update the rclone binary.
+ serve Serve a remote over a protocol.
+ settier Changes storage class/tier of objects in remote.
+ sha1sum Produces an sha1sum file for all the objects in the path.
+ size Prints the total size and number of objects in remote:path.
+ sync Make source and dest identical, modifying destination only.
+ test Run a test command.
+ touch Create new file or change file modification time.
+ tree List the contents of the remote in a tree like fashion.
+ version Show the version number.
+
+Use "rclone [command] --help" for more information about a command.
+Use "rclone help flags" for to see the global flags.
+Use "rclone help backends" for a list of supported services.
+
+```
# Rclone syncs your files to cloud storage
@@ -121,6 +193,8 @@ WebDAV or S3, that work out of the box.)
- Enterprise File Fabric
- Fastmail Files
- Files.com
+- FileLu Cloud Storage
+- FlashBlade
- FTP
- Gofile
- Google Cloud Storage
@@ -145,7 +219,8 @@ WebDAV or S3, that work out of the box.)
- Magalu
- Mail.ru Cloud
- Memset Memstore
-- Mega
+- MEGA
+- MEGA S4
- Memory
- Microsoft Azure Blob Storage
- Microsoft Azure Files Storage
@@ -545,7 +620,7 @@ Note that this is controlled by [community maintainer](https://github.com/bouken
## Source installation {#source}
Make sure you have git and [Go](https://golang.org/) installed.
-Go version 1.18 or newer is required, the latest release is recommended.
+Go version 1.22 or newer is required; the latest release is recommended.
You can get it from your package manager, or download it from
[golang.org/dl](https://golang.org/dl/). Then you can run the following:
@@ -885,6 +960,7 @@ See the following for detailed instructions for
* [Digi Storage](https://rclone.org/koofr/#digi-storage)
* [Dropbox](https://rclone.org/dropbox/)
* [Enterprise File Fabric](https://rclone.org/filefabric/)
+ * [FileLu Cloud Storage](https://rclone.org/filelu/)
* [Files.com](https://rclone.org/filescom/)
* [FTP](https://rclone.org/ftp/)
* [Gofile](https://rclone.org/gofile/)
@@ -1133,6 +1209,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -1169,6 +1246,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1342,6 +1420,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -1366,6 +1445,7 @@ Flags used for sync commands
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
+ --list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
@@ -1397,6 +1477,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1513,6 +1594,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -1549,6 +1631,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1649,6 +1732,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1690,6 +1774,9 @@ include/exclude filters - everything will be removed. Use the
delete files. To delete empty directories only, use command
[rmdir](https://rclone.org/commands/rclone_rmdir/) or [rmdirs](https://rclone.org/commands/rclone_rmdirs/).
+The concurrency of this operation is controlled by the `--checkers` global flag. However, some backends will
+implement this command directly, in which case `--checkers` will be ignored.
+
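+For example (an illustrative run; the checker count is arbitrary):
+
+```
+rclone delete remote:dir --checkers 16 --dry-run
+```
+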
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
@@ -1886,6 +1973,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1982,6 +2070,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2089,6 +2178,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2185,6 +2275,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2268,6 +2359,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2354,6 +2446,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2435,6 +2528,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2506,6 +2600,9 @@ Or
beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
+If you supply the --deps flag then rclone will print a list of all the
+packages it depends on and their versions along with some other
+information about the build.
```
@@ -2516,6 +2613,7 @@ rclone version [flags]
```
--check Check for new version
+ --deps Show the Go dependencies
-h, --help help for version
```
@@ -2784,13 +2882,18 @@ Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
+The command requires 1-3 arguments:
+ - fs name (e.g., "drive", "s3", etc.)
+ - Either a base64 encoded JSON blob obtained from a previous rclone config session
+ - Or a client_id and client_secret pair obtained from the remote service
+
Use --auth-no-open-browser to prevent rclone from opening the auth
link in the default browser automatically.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
```
-rclone authorize [flags]
+rclone authorize [base64_json_blob | client_id client_secret] [flags]
```
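+
+For example, to authorize a Google Drive remote from a machine with a
+browser, as instructed by rclone config on the headless machine (a usage
+sketch):
+
+```
+rclone authorize "drive"
+```
+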
## Options
@@ -2957,6 +3060,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -2993,6 +3097,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3083,6 +3188,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3190,6 +3296,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3529,6 +3636,7 @@ rclone config create name type [key value]* [flags]
--continue Continue the configuration process with an answer
-h, --help help for create
--no-obscure Force any passwords not to be obscured
+ --no-output Don't provide any output
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
@@ -3745,12 +3853,12 @@ password to re-encrypt the config.
When `--password-command` is called to change the password then the
environment variable `RCLONE_PASSWORD_CHANGE=1` will be set. So if
-changing passwords programatically you can use the environment
+changing passwords programmatically you can use the environment
variable to distinguish which password you must supply.
Alternatively you can remove the password first (with `rclone config
encryption remove`), then set it again with this command which may be
-easier if you don't mind the unecrypted config file being on the disk
+easier if you don't mind the unencrypted config file being on the disk
briefly.
@@ -4087,6 +4195,7 @@ rclone config update name [key value]+ [flags]
--continue Continue the configuration process with an answer
-h, --help help for update
--no-obscure Force any passwords not to be obscured
+ --no-output Don't provide any output
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
@@ -4126,6 +4235,400 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
* [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.
+# rclone convmv
+
+Convert file and directory names in place.
+
+## Synopsis
+
+
+convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations.
+
+| Command | Description |
+|------|------|
+| `--name-transform prefix=XXXX` | Prepends XXXX to the file name. |
+| `--name-transform suffix=XXXX` | Appends XXXX to the file name after the extension. |
+| `--name-transform suffix_keep_extension=XXXX` | Appends XXXX to the file name while preserving the original file extension. |
+| `--name-transform trimprefix=XXXX` | Removes XXXX if it appears at the start of the file name. |
+| `--name-transform trimsuffix=XXXX` | Removes XXXX if it appears at the end of the file name. |
+| `--name-transform regex=/pattern/replacement/` | Applies a regex-based transformation. |
+| `--name-transform replace=old:new` | Replaces occurrences of old with new in the file name. |
+| `--name-transform date={YYYYMMDD}` | Appends or prefixes the specified date format. |
+| `--name-transform truncate=N` | Truncates the file name to a maximum of N characters. |
+| `--name-transform base64encode` | Encodes the file name in Base64. |
+| `--name-transform base64decode` | Decodes a Base64-encoded file name. |
+| `--name-transform encoder=ENCODING` | Converts the file name to the specified encoding (e.g., ISO-8859-1, Windows-1252, Macintosh). |
+| `--name-transform decoder=ENCODING` | Decodes the file name from the specified encoding. |
+| `--name-transform charmap=MAP` | Applies a character mapping transformation. |
+| `--name-transform lowercase` | Converts the file name to lowercase. |
+| `--name-transform uppercase` | Converts the file name to UPPERCASE. |
+| `--name-transform titlecase` | Converts the file name to Title Case. |
+| `--name-transform ascii` | Strips non-ASCII characters. |
+| `--name-transform url` | URL-encodes the file name. |
+| `--name-transform nfc` | Converts the file name to NFC Unicode normalization form. |
+| `--name-transform nfd` | Converts the file name to NFD Unicode normalization form. |
+| `--name-transform nfkc` | Converts the file name to NFKC Unicode normalization form. |
+| `--name-transform nfkd` | Converts the file name to NFKD Unicode normalization form. |
+| `--name-transform command=/path/to/my/program` | Executes an external program to transform file names. |
+
+
+Conversion modes:
+```
+none
+nfc
+nfd
+nfkc
+nfkd
+replace
+prefix
+suffix
+suffix_keep_extension
+trimprefix
+trimsuffix
+index
+date
+truncate
+base64encode
+base64decode
+encoder
+decoder
+ISO-8859-1
+Windows-1252
+Macintosh
+charmap
+lowercase
+uppercase
+titlecase
+ascii
+url
+regex
+command
+```
+Char maps:
+```
+IBM-Code-Page-037
+IBM-Code-Page-437
+IBM-Code-Page-850
+IBM-Code-Page-852
+IBM-Code-Page-855
+Windows-Code-Page-858
+IBM-Code-Page-860
+IBM-Code-Page-862
+IBM-Code-Page-863
+IBM-Code-Page-865
+IBM-Code-Page-866
+IBM-Code-Page-1047
+IBM-Code-Page-1140
+ISO-8859-1
+ISO-8859-2
+ISO-8859-3
+ISO-8859-4
+ISO-8859-5
+ISO-8859-6
+ISO-8859-7
+ISO-8859-8
+ISO-8859-9
+ISO-8859-10
+ISO-8859-13
+ISO-8859-14
+ISO-8859-15
+ISO-8859-16
+KOI8-R
+KOI8-U
+Macintosh
+Macintosh-Cyrillic
+Windows-874
+Windows-1250
+Windows-1251
+Windows-1252
+Windows-1253
+Windows-1254
+Windows-1255
+Windows-1256
+Windows-1257
+Windows-1258
+X-User-Defined
+```
+Encoding masks:
+```
+Asterisk
+BackQuote
+BackSlash
+Colon
+CrLf
+Ctl
+Del
+Dollar
+Dot
+DoubleQuote
+Exclamation
+Hash
+InvalidUtf8
+LeftCrLfHtVt
+LeftPeriod
+LeftSpace
+LeftTilde
+LtGt
+None
+Percent
+Pipe
+Question
+Raw
+RightCrLfHtVt
+RightPeriod
+RightSpace
+Semicolon
+SingleQuote
+Slash
+SquareBracket
+```
+Examples:
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
+// Output: STORIES/THE QUICK BROWN FOX!.TXT
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow"
+// Output: stories/The Slow Brown Turtle!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode"
+// Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0
+```
+
+```
+rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode"
+// Output: stories/The Quick Brown Fox!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc"
+// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd"
+// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii"
+// Output: stories/The Quick Brown Fox!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt"
+// Output: stories/The Quick Brown Fox!
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_"
+// Output: OLD_stories/OLD_The Quick Brown Fox!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7"
+// Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket"
+// Output: stories/The Quick Brown Fox: A Memoir [draft].txt
+```
+
+```
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21"
+// Output: stories/The Quick Brown 🦊 Fox
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
+// Output: stories/The Quick Brown Fox!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
+// Output: stories/The Quick Brown Fox!-20250617
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
+// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
+// Output: ababababababab/ababab ababababab ababababab ababab!abababab
+```
+
+
+
+Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
+
+The `--name-transform` flag is also available in `sync`, `copy`, and `move`.
+
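+For example, a copy that prefixes every destination file name (illustrative; the prefix is arbitrary):
+
+```
+rclone copy src: dst: --name-transform "file,prefix=ARCHIVED_"
+```
+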
+## Files vs Directories
+
+By default `--name-transform` will only apply to file names. This means only the leaf file name will be transformed.
+However some of the transforms would be better applied to the whole path or just to directories.
+To choose which part of the file path is affected, some tags can be added to the `--name-transform`:
+
+| Tag | Effect |
+|------|------|
+| `file` | Only transform the leaf name of files (DEFAULT) |
+| `dir` | Only transform name of directories - these may appear anywhere in the path |
+| `all` | Transform the entire path for files and directories |
+
+This is used by adding the tag into the transform name like this: `--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`.
+
+For some conversions, using `all` is more likely to be useful, for example `--name-transform all,nfc`.
+
+Note that `--name-transform` may not add path separators `/` to the name. This will cause an error.
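+
+To illustrate the tags (compare with the `all,uppercase` example above; with `file`, only the leaf name is transformed):
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "file,uppercase"
+// Output: stories/THE QUICK BROWN FOX!.TXT
+```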
+
+## Ordering and Conflicts
+
+* Transformations will be applied in the order specified by the user.
+ * If the `file` tag is in use (the default) then only the leaf name of files will be transformed.
+ * If the `dir` tag is in use then directories anywhere in the path will be transformed.
+ * If the `all` tag is in use then directories and files anywhere in the path will be transformed.
+ * Each transformation will be run one path segment at a time.
+ * If a transformation adds a `/` or ends up with an empty path segment then that will be an error.
+* It is up to the user to put the transformations in a sensible order.
+ * Conflicting transformations, such as `prefix` followed by `trimprefix` or `nfc` followed by `nfd`, are possible.
+ * Instead of enforcing mutual exclusivity, transformations are applied in sequence as specified by the
+user, allowing for intentional use cases (e.g., trimming one prefix before adding another).
+ * Users should be aware that certain combinations may lead to unexpected results and should verify
+transformations using `--dry-run` before execution.
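+
+For example, trimming one prefix before adding another (a sketch; the order of the flags matters):
+
+```
+rclone convmv "OLD_report.txt" --name-transform "file,trimprefix=OLD_" --name-transform "file,prefix=NEW_"
+// Output: NEW_report.txt
+```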
+
+## Race Conditions and Non-Deterministic Behavior
+
+Some transformations, such as `replace=old:new`, may introduce conflicts where multiple source files map to the same destination name.
+This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these.
+* If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic.
+* Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.
+
+* To minimize risks, users should:
+ * Carefully review transformations that may introduce conflicts.
+ * Use `--dry-run` to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
+ * Avoid transformations that cause multiple distinct source files to map to the same destination name.
+ * Consider disabling concurrency with `--transfers=1` if necessary.
+ * Certain transformations (e.g. `prefix`) will have a multiplying effect every time they are used. Avoid these when using `bisync`.
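+
+For example, two distinct source names collapsing onto one destination name (illustrative only):
+
+```
+rclone copy src: dst: --name-transform "file,trimprefix=draft_" --name-transform "file,trimprefix=final_"
+// Both draft_report.txt and final_report.txt become report.txt at the destination
+```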
+
+
+
+```
+rclone convmv dest:path --name-transform XXX [flags]
+```
+
+## Options
+
+```
+ --create-empty-src-dirs Create empty source dirs on destination after move
+ --delete-empty-src-dirs Delete empty source dirs after move
+ -h, --help help for convmv
+```
+
+Options shared with other commands are described next.
+See the [global flags page](https://rclone.org/flags/) for global options not listed here.
+
+### Copy Options
+
+Flags for anything which can copy a file
+
+```
+ --check-first Do all the checks before starting transfers
+ -c, --checksum Check for changes with size & checksum (if available, or fallback to size only)
+ --compare-dest stringArray Include additional server-side paths during comparison
+ --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
+ --ignore-case-sync Ignore case when synchronizing
+ --ignore-checksum Skip post copy check of checksums
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use modtime or checksum
+ -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
+ --immutable Do not modify files, fail if existing files have been modified
+ --inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --max-backlog int Maximum number of objects in sync or check backlog (default 10000)
+ --max-duration Duration Maximum duration rclone will transfer data for (default 0s)
+ --max-transfer SizeSuffix Maximum size of data to transfer (default off)
+ -M, --metadata If set, preserve metadata when copying objects
+ --modify-window Duration Max time diff to be considered the same (default 1ns)
+ --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
+ --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
+ --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
+ --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
+ --no-check-dest Don't check the destination, copy regardless
+ --no-traverse Don't traverse destination file system on copy
+ --no-update-dir-modtime Don't update directory modification times
+ --no-update-modtime Don't update destination modtime if files identical
+ --order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
+ --refresh-times Refresh the modtime of remote files
+ --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
+ --size-only Skip based on size only, not modtime or checksum
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
+ -u, --update Skip files that are newer on the destination
+```
+
+### Important Options
+
+Important flags useful for most commands
+
+```
+ -n, --dry-run Do a trial run with no permanent changes
+ -i, --interactive Enable interactive mode
+ -v, --verbose count Print lots more stuff (repeat for more)
+```
+
+### Filter Options
+
+Flags for filtering directory listings
+
+```
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+```
+
+### Listing Options
+
+Flags for listing directories
+
+```
+ --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+ --fast-list Use recursive list if available; uses more memory but fewer transactions
+```
+
+## See Also
+
+* [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.
+
# rclone copyto
Copy files from source to dest, skipping identical files.
@@ -4158,6 +4661,8 @@ This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. It doesn't delete files from
the destination.
+*If you are looking to copy just a byte range of a file, please see 'rclone cat --offset X --count Y'*
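+
+For example (a sketch using the documented `rclone cat` flags; values are arbitrary):
+
+```
+rclone cat remote:path/to/file --offset 1024 --count 4096 > slice.bin
+```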
+
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
@@ -4201,6 +4706,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -4237,6 +4743,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -4279,7 +4786,7 @@ Setting `--auto-filename` will attempt to automatically determine the
filename from the URL (after any redirections) and use it in the
destination path.
-With `--auto-filename-header` in addition, if a specific filename is
+With `--header-filename` in addition, if a specific filename is
set in HTTP headers, it will be used instead of the name from the URL.
With `--print-filename` in addition, the resulting file name will be
printed.
@@ -4290,7 +4797,7 @@ destination if there is one with the same name.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
-## Troublshooting
+## Troubleshooting
If you can't get `rclone copyurl` to work then here are some things you can try:
@@ -4429,6 +4936,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -4685,6 +5193,7 @@ Run without a hash to see the list of all supported hashes, e.g.
* whirlpool
* crc32
* sha256
+ * sha512
Then
@@ -4723,6 +5232,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -5016,6 +5526,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -5191,6 +5702,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -5787,11 +6299,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -6116,6 +6628,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](https://rclone.org/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
```
@@ -6167,6 +6718,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -6196,6 +6748,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -6294,6 +6847,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -6330,6 +6884,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -6453,6 +7008,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -7049,11 +7605,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -7378,6 +7934,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](https://rclone.org/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
```
@@ -7434,6 +8029,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -7463,6 +8059,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -7802,7 +8399,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--rc-user` and `--rc-pass` flags.
-If no static users are configured by either of the above methods, and client
+Alternatively, you can have the reverse proxy manage authentication and use the
+username provided in the configured header with `--rc-user-from-header` (e.g., `--rc-user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+
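+For example, behind a reverse proxy that authenticates users and sets an
+`X-Remote-User` header (an illustrative setup; the address is arbitrary):
+
+```
+rclone rcd --rc-user-from-header=x-remote-user --rc-addr=127.0.0.1:5572
+```
+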
+If neither of the above authentication methods is configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -7866,6 +8467,7 @@ Flags to control the Remote Control API
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
+ --rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
@@ -8177,11 +8779,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -8506,6 +9108,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](https://rclone.org/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
```
@@ -8543,6 +9184,7 @@ rclone serve dlna remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -8570,6 +9212,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -8732,11 +9375,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -9061,6 +9704,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](https://rclone.org/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
```
@@ -9117,6 +9799,7 @@ rclone serve docker [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -9146,6 +9829,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -9289,11 +9973,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -9618,6 +10302,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](https://rclone.org/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -9739,6 +10462,7 @@ rclone serve ftp remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -9766,6 +10490,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -9910,7 +10635,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
-If no static users are configured by either of the above methods, and client
+Alternatively, you can have the reverse proxy manage authentication and use the
+username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+
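+For example, to accept the username from a header set by a trusted
+reverse proxy in front of rclone (the header name here is illustrative):
+
+    rclone serve http remote:path --addr 127.0.0.1:8080 --user-from-header X-Remote-User
+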
+If none of the above authentication methods is configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -10027,11 +10756,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -10356,6 +11085,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](https://rclone.org/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -10446,19 +11214,19 @@ rclone serve http remote:path [flags]
## Options
```
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -10476,6 +11244,7 @@ rclone serve http remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -10486,6 +11255,7 @@ rclone serve http remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -10513,6 +11283,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -10581,7 +11352,7 @@ that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance. This sort of cache can't be backed
up and restored as the underlying handles will change. This is Linux
-only. It requres running rclone as root or with `CAP_DAC_READ_SEARCH`.
+only. It requires running rclone as root or with `CAP_DAC_READ_SEARCH`.
You can run rclone with this extra permission by doing this to the
rclone binary `sudo setcap cap_dac_read_search+ep /path/to/rclone`.
@@ -10605,6 +11376,12 @@ Where `$PORT` is the same port number used in the `serve nfs` command
and `$HOSTNAME` is the network address of the machine that `serve nfs`
was run on.
+If `--vfs-metadata-extension` is in use then for the `--nfs-cache-type disk`
+and `--nfs-cache-type cache` the metadata files will have the file
+handle of their parent file suffixed with `0x00, 0x00, 0x00, 0x01`.
+This means they can be looked up directly from the parent file handle
+if desired. For example (illustrative), a file with the handle
+`0x01, 0x02, 0x03, 0x04` would have a metadata file with the handle
+`0x01, 0x02, 0x03, 0x04, 0x00, 0x00, 0x00, 0x01`.
+
This command is only available on Unix platforms.
## VFS - Virtual File System
@@ -10704,11 +11481,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -11033,6 +11810,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](https://rclone.org/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
```
@@ -11069,6 +11885,7 @@ rclone serve nfs remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -11096,6 +11913,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -11274,7 +12092,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
-If no static users are configured by either of the above methods, and client
+Alternatively, you can have the reverse proxy manage authentication and use the
+username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+
+If none of the above authentication methods is configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -11303,16 +12125,16 @@ rclone serve restic remote:path [flags]
## Options
```
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
@@ -11323,6 +12145,7 @@ rclone serve restic remote:path [flags]
--server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--stdio Run an HTTP2 server on stdin/stdout
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
```
See the [global flags page](https://rclone.org/flags/) for global options not listed here.
@@ -11353,7 +12176,7 @@ docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
access.
Please note that some clients may require HTTPS endpoints. See [the
-SSL docs](#ssl-tls) for more information.
+SSL docs](#tls-ssl) for more information.
This command uses the [VFS directory cache](#vfs-virtual-file-system).
All the functionality will work with `--vfs-cache-mode off`. Using
@@ -11408,7 +12231,7 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
-Note that setting `disable_multipart_uploads = true` is to work around
+Note that setting `use_multipart_uploads = false` is to work around
[a bug](#bugs) which will be fixed in due course.
## Bugs
@@ -11480,7 +12303,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
-If no static users are configured by either of the above methods, and client
+Alternatively, you can have the reverse proxy manage authentication and use the
+username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+
+If none of the above authentication methods is configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -11660,11 +12487,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -11989,6 +12816,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](https://rclone.org/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
```
@@ -11998,22 +12864,22 @@ rclone serve s3 remote:path [flags]
## Options
```
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
--file-perms FileMode File permissions (default 666)
- --force-path-style If true use path style access if false use virtual hosted style (default true) (default true)
+ --force-path-style If true use path style access if false use virtual hosted style (default true)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for s3
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -12031,6 +12897,7 @@ rclone serve s3 remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -12041,6 +12908,7 @@ rclone serve s3 remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -12068,6 +12936,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -12254,11 +13123,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -12583,6 +13452,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](https://rclone.org/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -12704,6 +13612,7 @@ rclone serve sftp remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -12731,6 +13640,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -12918,7 +13828,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
-If no static users are configured by either of the above methods, and client
+Alternatively, you can have the reverse proxy manage authentication and use the
+username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+
+If none of the above authentication methods is configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -13035,11 +13949,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -13364,6 +14278,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](https://rclone.org/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -13454,12 +14407,12 @@ rclone serve webdav remote:path [flags]
## Options
```
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
@@ -13468,7 +14421,7 @@ rclone serve webdav remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -13486,6 +14439,7 @@ rclone serve webdav remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -13496,6 +14450,7 @@ rclone serve webdav remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -13523,6 +14478,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -13808,6 +14764,7 @@ unless `--no-create` or `--recursive` is provided.
If `--recursive` is used then rclone recursively sets the modification
time on all existing files that are found under the path. Filters are supported,
and you can test with the `--dry-run` or the `--interactive`/`-i` flag.
+This will touch `--transfers` files concurrently.
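+
+For example, to recursively touch up to eight files at a time,
+previewing the result first with `--dry-run`:
+
+    rclone touch remote:path --recursive --transfers 8 --dry-run
+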
If `--timestamp` is used then sets the modification time to that
time instead of the current time. Times may be specified as one of:
@@ -13860,6 +14817,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -13966,6 +14924,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -14444,6 +15403,11 @@ it to `false`. It is also possible to specify `--boolean=false` or
parsed as `--boolean` and the `false` is parsed as an extra command
line argument for rclone.
+Options documented to take a `stringArray` parameter accept multiple
+values. To pass more than one value, repeat the option; for example:
+`--include value1 --include value2`.
+
+
### Time or duration options {#time-option}
TIME or DURATION options can be specified as a duration string or a
@@ -14788,8 +15752,9 @@ on any OS, and the value is defined as following:
- On Unix: `$HOME` if defined, else by looking up current user in OS-specific user database
(e.g. passwd file), or else use the result from shell command `cd && pwd`.
-If you run `rclone config file` you will see where the default
-location is for you.
+If you run `rclone config file` you will see where the default location is for
+you. Running `rclone config touch` will ensure a configuration file exists,
+creating an empty one in the default location if there is none.
The fact that an existing file `rclone.conf` in the same directory
as the rclone executable is always preferred, means that it is easy
@@ -14800,7 +15765,13 @@ same directory.
If the location is set to empty string `""` or path to a file
with name `notfound`, or the os null device represented by value `NUL` on
Windows and `/dev/null` on Unix systems, then rclone will keep the
-config file in memory only.
+configuration file in memory only.
+
+You may see a log message "Config file not found - using defaults" if there is
+no configuration file. This can be suppressed, e.g. if you are using rclone
+entirely with [on the fly remotes](https://rclone.org/docs/#backend-path-to-dir), by using a
+memory-only configuration file or by creating an empty configuration file, as
+described above.
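+
+For example, to run rclone entirely without a configuration file using
+an on the fly remote (the path here is illustrative):
+
+    rclone --config "" lsf :local:/tmp
+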
The file format is basic [INI](https://en.wikipedia.org/wiki/INI_file#Format):
Sections of text, led by a `[section]` header and followed by
@@ -15268,6 +16239,19 @@ backends and the VFS. There are individual flags for just enabling it
for the VFS `--vfs-links` and the local backend `--local-links` if
required.
+### --list-cutoff N {#list-cutoff}
+
+When syncing, rclone needs to sort directory entries before comparing
+them. Below this threshold (1,000,000 by default), rclone will store
+the directory entries in memory; 1,000,000 entries will take approximately
+1GB of RAM to store. Above this threshold rclone will store directory
+entries on disk and sort them without using a lot of memory.
+
+Doing this is slightly less efficient than sorting them in memory and
+will only work well for the bucket-based backends (eg s3, b2,
+azureblob, swift), but these are the only backends likely to have
+millions of entries in a directory.
+
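+For example, to make a sync of a very large bucket use less memory by
+spilling directory listings to disk sooner (the value here is
+illustrative):
+
+    rclone sync --list-cutoff 100000 s3:src-bucket s3:dst-bucket
+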
### --log-file=FILE ###
Log all of rclone's output to FILE. This is not active by default.
@@ -15283,12 +16267,21 @@ have a signal to rotate logs.
### --log-format LIST ###
-Comma separated list of log format options. Accepted options are `date`,
-`time`, `microseconds`, `pid`, `longfile`, `shortfile`, `UTC`. Any other
-keywords will be silently ignored. `pid` will tag log messages with process
-identifier which useful with `rclone mount --daemon`. Other accepted
-options are explained in the [go documentation](https://pkg.go.dev/log#pkg-constants).
-The default log format is "`date`,`time`".
+Comma separated list of log format options. The accepted options are:
+
+- `date` - Add a date in the format YYYY/MM/DD to the log.
+- `time` - Add a time to the log in format HH:MM:SS.
+- `microseconds` - Add microseconds to the time in format HH:MM:SS.SSSSSS.
+- `UTC` - Make the logs in UTC not localtime.
+- `longfile` - Adds the full path of the source file and line number of the log statement.
+- `shortfile` - Adds the file name (without the path) and line number of the log statement.
+- `pid` - Add the process ID to the log - useful with `rclone mount --daemon`.
+- `nolevel` - Don't add the level to the log.
+- `json` - Equivalent to adding `--use-json-log`
+
+They are added to the log line in the order above.
+
+The default log format is `"date,time"`.
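+
+For example, to log with microsecond timestamps in UTC and tag each
+line with the process ID:
+
+    rclone copy source:path destination:path --log-format date,time,microseconds,UTC,pid
+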
### --log-level LEVEL ###
@@ -15306,10 +16299,90 @@ warnings and significant events.
`ERROR` is equivalent to `-q`. It only outputs error messages.
+### --windows-event-log LEVEL ###
+
+If this is configured (the default is `OFF`) then logs of this level
+and above will be logged to the Windows event log in **addition** to
+the normal logs. These will be logged in JSON format as described
+below regardless of what format the main logs are configured for.
+
+The Windows event log only has 3 levels of severity: `Info`, `Warning`
+and `Error`. If enabled, we map rclone levels like this:
+
+- `Error` ← `ERROR` (and above)
+- `Warning` ← `WARNING` (note that this level is defined but not currently used).
+- `Info` ← `NOTICE`, `INFO` and `DEBUG`.
+
+Rclone will declare its log source as "rclone" if it has enough
+permissions to create the registry key needed. If not then logs will
+appear as "Application". You can run `rclone version --windows-event-log DEBUG`
+once as administrator to create the registry key in advance.
+
+**Note** that the `--windows-event-log` level must be greater (more
+severe) than or equal to the `--log-level`. For example to log DEBUG
+to a log file but ERRORs to the event log you would use
+
+ --log-file rclone.log --log-level DEBUG --windows-event-log ERROR
+
+This option is only supported on Windows platforms.
+
### --use-json-log ###
-This switches the log format to JSON for rclone. The fields of json log
-are level, msg, source, time.
+This switches the log format to JSON for rclone. The fields of the
+JSON log are `level`, `msg`, `source` and `time`. The JSON logs will be
+printed on a single line, but are shown expanded here for clarity.
+
+```json
+{
+ "time": "2025-05-13T17:30:51.036237518+01:00",
+ "level": "debug",
+ "msg": "4 go routines active\n",
+ "source": "cmd/cmd.go:298"
+}
+```
+
+Completed data transfer logs will have extra `size` information. Logs
+which are about a particular object will have `object` and
+`objectType` fields also.
+
+```json
+{
+ "time": "2025-05-13T17:38:05.540846352+01:00",
+ "level": "info",
+ "msg": "Copied (new) to: file2.txt",
+ "size": 6,
+ "object": "file.txt",
+ "objectType": "*local.Object",
+ "source": "operations/copy.go:368"
+}
+```
+
+Stats logs will contain a `stats` field which is the same as
+returned from the rc call [core/stats](https://rclone.org/rc/#core-stats).
+
+```json
+{
+ "time": "2025-05-13T17:38:05.540912847+01:00",
+ "level": "info",
+ "msg": "...text version of the stats...",
+ "stats": {
+ "bytes": 6,
+ "checks": 0,
+ "deletedDirs": 0,
+ "deletes": 0,
+ "elapsedTime": 0.000904825,
+ ...truncated for clarity...
+ "totalBytes": 6,
+ "totalChecks": 0,
+ "totalTransfers": 1,
+ "transferTime": 0.000882794,
+ "transfers": 1
+ },
+ "source": "accounting/stats.go:569"
+}
+```
+
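+As each log line is a single JSON object, the output can be filtered
+with standard tools; for example, a sketch using `jq` to print only the
+error messages (rclone logs to stderr, hence the redirect):
+
+    rclone copy source:path destination:path --use-json-log 2>&1 | jq -r 'select(.level=="error") | .msg'
+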
+
### --low-level-retries NUMBER ###
@@ -15346,6 +16419,50 @@ of the remote which may be desirable.
Setting this to a negative number will make the backlog as large as
possible.
+### --max-buffer-memory=SIZE {#max-buffer-memory}
+
+If set, don't allocate more than SIZE amount of memory as buffers. If
+not set or set to `0` or `off` this will not limit the amount of memory
+in use.
+
+This includes memory used by buffers created by the `--buffer` flag
+and buffers used by multi-thread transfers.
+
+Most multi-thread transfers do not take additional memory, but some do
+depending on the backend (eg the s3 backend for uploads). This means
+there is a tension between setting `--transfers` as high as possible
+and total memory use.
+
+Setting `--max-buffer-memory` allows the buffer memory to be
+controlled so that it doesn't overwhelm the machine and allows
+`--transfers` to be set large.
+
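+For example, to run a large number of transfers while capping the
+total buffer memory (the values here are illustrative):
+
+    rclone copy source:path destination:path --transfers 16 --max-buffer-memory 1G
+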
+### --max-connections=N ###
+
+This sets the maximum number of concurrent calls to the backend API.
+It may not map 1:1 to TCP or HTTP connections depending on the backend
+in use and the use of HTTP1 vs HTTP2.
+
+When downloading files, backends only limit the initial opening of the
+stream. The bulk data download is not counted as a connection. This
+means that the `--max-connections` flag won't limit the total number
+of downloads.
+
+Note that it is possible to cause deadlocks with this setting so it
+should be used with care.
+
+If you are doing a sync or copy then make sure `--max-connections` is
+one more than the sum of `--transfers` and `--checkers`.
+
+If you use `--check-first` then `--max-connections` just needs to be
+one more than the maximum of `--checkers` and `--transfers`.
+
+So for `--max-connections 3` you'd use `--checkers 2 --transfers 2
+--check-first` or `--checkers 1 --transfers 1`.
+
+Setting this flag can be useful for backends which do multipart
+uploads to limit the number of simultaneous parts being transferred.
+
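+Putting the guidance above together, a sync limited to three
+concurrent backend calls might look like:
+
+    rclone sync source:path destination:path --max-connections 3 --checkers 2 --transfers 2 --check-first
+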
### --max-delete=N ###
This tells rclone not to delete more than N files. If that limit is
@@ -15605,6 +16722,14 @@ This will work with the `sync`/`copy`/`move` commands and friends
mount` and `rclone serve` if `--vfs-cache-mode` is set to `writes` or
above.
+Most multi-thread transfers do not take additional memory, but some do
+(for example uploading to s3). In the worst case, memory usage can
+reach `--transfers` * `--multi-thread-chunk-size` *
+`--multi-thread-streams`, or specifically for the s3 backend
+`--transfers` * `--s3-chunk-size` * `--s3-concurrency`. However you
+can use the [--max-buffer-memory](https://rclone.org/docs/#max-buffer-memory) flag
+to control the maximum memory used here.
+
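+For example (values illustrative), with `--transfers 8`,
+`--s3-chunk-size 16M` and `--s3-concurrency 4` the worst case is
+8 × 16M × 4 = 512M of buffer memory, which `--max-buffer-memory 256M`
+would cap at 256M (at the cost of some transfers waiting for buffers).
+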
**NB** that this **only** works with supported backends as the
destination but will work with any backend as the source.
@@ -15629,6 +16754,13 @@ If the backend has a `--backend-upload-concurrency` setting (eg
number of transfers instead if it is larger than the value of
`--multi-thread-streams` or `--multi-thread-streams` isn't set.
+### --name-transform COMMAND[=XXXX] ###
+
+`--name-transform` introduces path name transformations for
+`rclone copy`, `rclone sync`, and `rclone move`. These transformations
+enable modifications to source and destination file names by applying
+prefixes, suffixes, and other alterations during transfer operations.
+For detailed docs and examples, see [`convmv`](https://rclone.org/commands/rclone_convmv/).
+
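+For example, a sketch which uppercases every name as it is copied (see
+the convmv docs linked above for the full list of transformations):
+
+    rclone copy source:path destination:path --name-transform "all,uppercase"
+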
### --no-check-dest ###
The `--no-check-dest` can be used with `move` or `copy` and it causes
@@ -16628,6 +17760,7 @@ For the filtering options
* `--max-size`
* `--min-age`
* `--max-age`
+ * `--hash-filter`
* `--dump filters`
* `--metadata-include`
* `--metadata-include-from`
@@ -16755,7 +17888,7 @@ so they take exactly the same form.
The options set by environment variables can be seen with the `-vv` flag, e.g. `rclone version -vv`.
Options that can appear multiple times (type `stringArray`) are
-treated slighly differently as environment variables can only be
+treated slightly differently as environment variables can only be
defined once. In order to allow a simple mechanism for adding one or
many items, the input is treated as a [CSV encoded](https://godoc.org/encoding/csv)
string. For example
@@ -17679,6 +18812,98 @@ old or more.
See [the time option docs](https://rclone.org/docs/#time-option) for valid formats.
+### `--hash-filter` - Deterministically select a subset of files {#hash-filter}
+
+The `--hash-filter` flag enables selecting a deterministic subset of files, useful for:
+
+1. Running large sync operations across multiple machines.
+2. Checking a subset of files for bitrot.
+3. Any other operations where a sample of files is required.
+
+#### Syntax
+
+The flag takes two parameters expressed as a fraction:
+
+```
+--hash-filter K/N
+```
+
+- `N`: The total number of partitions (must be a positive integer).
+- `K`: The specific partition to select (an integer from `0` to `N`).
+
+For example:
+- `--hash-filter 1/3`: Selects the first third of the files.
+- `--hash-filter 2/3` and `--hash-filter 3/3`: Select the second and third partitions, respectively.
+
+Each partition is non-overlapping, ensuring all files are covered without duplication.
+
+#### Random Partition Selection
+
+Use `@` as `K` to randomly select a partition:
+
+```
+--hash-filter @/N
+```
+
+For example, `--hash-filter @/3` will randomly select a number between 0 and 2. This will stay constant across retries.
+
+#### How It Works
+
+- Rclone takes each file's full path, normalizes it to lowercase, and applies Unicode normalization.
+- It then hashes the normalized path into a 64-bit number.
+- The hash result is reduced modulo `N` to assign the file to a partition.
+- If the calculated partition does not match `K` the file is excluded.
+- Other filters may apply if the file is not excluded.
+
+**Important:** Rclone will traverse all directories to apply the filter.
+
+#### Usage Notes
+
+- Safe to use with `rclone sync`; source and destination selections will match.
+- **Do not** use with `--delete-excluded`, as this could delete unselected files.
+- Ignored if `--files-from` is used.
+
+#### Examples
+
+##### Dividing files into 4 partitions
+
+Assuming the current directory contains `file1.jpg` through `file9.jpg`:
+
+```
+$ rclone lsf --hash-filter 0/4 .
+file1.jpg
+file5.jpg
+
+$ rclone lsf --hash-filter 1/4 .
+file3.jpg
+file6.jpg
+file9.jpg
+
+$ rclone lsf --hash-filter 2/4 .
+file2.jpg
+file4.jpg
+
+$ rclone lsf --hash-filter 3/4 .
+file7.jpg
+file8.jpg
+
+$ rclone lsf --hash-filter 4/4 . # the same as --hash-filter 0/4
+file1.jpg
+file5.jpg
+```
+
+##### Syncing the first quarter of files
+
+```
+rclone sync --hash-filter 1/4 source:path destination:path
+```
+
+##### Checking a random 1% of files for integrity
+
+```
+rclone check --download --hash-filter @/100 source:path destination:path
+```
+
## Other flags
### `--delete-excluded` - Delete files on dest excluded from sync
@@ -18187,9 +19412,16 @@ $ rclone rc job/list
If you wish to set config (the equivalent of the global flags) for the
duration of an rc call only then pass in the `_config` parameter.
-This should be in the same format as the `config` key returned by
+This should be in the same format as the `main` key returned by
[options/get](#options-get).
+ rclone rc --loopback options/get blocks=main
+
+You can see more help on these options with this command (see [the
+options blocks section](#option-blocks) for more info).
+
+ rclone rc --loopback options/info blocks=main
+
For example, if you wished to run a sync with the `--checksum`
parameter, you would pass this parameter in your JSON blob.
@@ -18220,6 +19452,13 @@ pass in the `_filter` parameter.
This should be in the same format as the `filter` key returned by
[options/get](#options-get).
+ rclone rc --loopback options/get blocks=filter
+
+You can see more help on these options with this command (see [the
+options blocks section](#option-blocks) for more info).
+
+ rclone rc --loopback options/info blocks=filter
+
For example, if you wished to run a sync with these flags
--max-size 1M --max-age 42s --include "a" --include "b"
@@ -18301,7 +19540,7 @@ format. Each block describes a single option.
| Field | Type | Optional | Description |
|-------|------|----------|-------------|
| Name | string | N | name of the option in snake_case |
-| FieldName | string | N | name of the field used in the rc - if blank use Name |
+| FieldName | string | N | name of the field used in the rc - if blank use Name. May contain "." for nested fields. |
| Help | string | N | help, started with a single sentence on a single line |
| Groups | string | Y | groups this option belongs to - comma separated string for options classification |
| Provider | string | Y | set to filter on provider |
@@ -18507,6 +19746,7 @@ This takes the following parameters:
- opt - a dictionary of options to control the configuration
- obscure - declare passwords are plain and need obscuring
- noObscure - declare passwords are already obscured and don't need obscuring
+ - noOutput - don't print anything to stdout
- nonInteractive - don't interact with a user, return questions
- continue - continue the config process with an answer
- all - ask all the config questions not just the post config ones
@@ -18621,6 +19861,7 @@ This takes the following parameters:
- opt - a dictionary of options to control the configuration
- obscure - declare passwords are plain and need obscuring
- noObscure - declare passwords are already obscured and don't need obscuring
+ - noOutput - don't print anything to stdout
- nonInteractive - don't interact with a user, return questions
- continue - continue the config process with an answer
- all - ask all the config questions not just the post config ones
@@ -18805,7 +20046,8 @@ returned.
Parameters
-- group - name of the stats group (string)
+- group - name of the stats group (string, optional)
+- short - if true will not return the transferring and checking arrays (boolean, optional)
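+
+For example, to fetch a compact snapshot of the global stats:
+
+    rclone rc core/stats short=true
+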
Returns the following values:
@@ -18820,6 +20062,7 @@ Returns the following values:
"fatalError": boolean whether there has been at least one fatal error,
"lastError": last error string,
"renames" : number of files renamed,
+ "listed" : number of directory entries listed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
"serverSideCopies": number of server side copies done,
"serverSideCopyBytes": number bytes server side copied,
@@ -19786,6 +21029,141 @@ check that parameter passing is working properly.
**Authentication is required for this call.**
+### serve/list: Show running servers {#serve-list}
+
+Show running servers with IDs.
+
+This takes no parameters and returns
+
+- list: list of running serve commands
+
+Each list element will have
+
+- id: ID of the server
+- addr: address the server is running on
+- params: parameters used to start the server
+
+Eg
+
+ rclone rc serve/list
+
+Returns
+
+```json
+{
+ "list": [
+ {
+ "addr": "[::]:4321",
+ "id": "nfs-ffc2a4e5",
+ "params": {
+ "fs": "remote:",
+ "opt": {
+ "ListenAddr": ":4321"
+ },
+ "type": "nfs",
+ "vfsOpt": {
+ "CacheMode": "full"
+ }
+ }
+ }
+ ]
+}
+```
+
+**Authentication is required for this call.**
+
+### serve/start: Create a new server {#serve-start}
+
+Create a new server with the specified parameters.
+
+This takes the following parameters:
+
+- `type` - type of server: `http`, `webdav`, `ftp`, `sftp`, `nfs`, etc.
+- `fs` - remote storage path to serve
+- `addr` - the ip:port to run the server on, eg ":1234" or "localhost:1234"
+
+Other parameters are as described in the documentation for the
+relevant [rclone serve](https://rclone.org/commands/rclone_serve/) command line options.
+To translate a command line option to an rc parameter, remove the
+leading `--` and replace `-` with `_`, so `--vfs-cache-mode` becomes
+`vfs_cache_mode`. Note that global parameters must be set with
+`_config` and `_filter` as described above.
+
+Examples:
+
+ rclone rc serve/start type=nfs fs=remote: addr=:4321 vfs_cache_mode=full
+ rclone rc serve/start --json '{"type":"nfs","fs":"remote:","addr":":1234","vfs_cache_mode":"full"}'
+
+This will give the reply
+
+```json
+{
+ "addr": "[::]:4321", // Address the server was started on
+ "id": "nfs-ecfc6852" // Unique identifier for the server instance
+}
+```
+
+Or an error if it failed to start.
+
+Stop the server with `serve/stop` and list the running servers with `serve/list`.
+
+**Authentication is required for this call.**
+
+### serve/stop: Unserve selected active serve {#serve-stop}
+
+Stops a running `serve` instance by ID.
+
+This takes the following parameters:
+
+- id: as returned by serve/start
+
+This will give an empty response if successful or an error if not.
+
+Example:
+
+ rclone rc serve/stop id=12345
+
+**Authentication is required for this call.**
+
+### serve/stopall: Stop all active servers {#serve-stopall}
+
+Stop all active servers.
+
+This takes no parameters and will stop all active servers.
+
+ rclone rc serve/stopall
+
+**Authentication is required for this call.**
+
+### serve/types: Show all possible serve types {#serve-types}
+
+This shows all possible serve types and returns them as a list.
+
+This takes no parameters and returns
+
+- types: list of serve types, eg "nfs", "sftp", etc
+
+The serve types are strings like "http", "sftp", "nfs" and can
+be passed to serve/start as the `type` parameter.
+
+Eg
+
+ rclone rc serve/types
+
+Returns
+
+```json
+{
+ "types": [
+ "http",
+ "sftp",
+ "nfs"
+ ]
+}
+```
+
+**Authentication is required for this call.**
+
### sync/bisync: Perform bidirectional synchronization between two paths. {#sync-bisync}
This takes the following parameters
@@ -19937,7 +21315,7 @@ the `--vfs-cache-mode` is off, it will return an empty result.
],
}
-The `expiry` time is the time until the file is elegible for being
+The `expiry` time is the time until the file is eligible for being
uploaded in floating point seconds. This may go negative. As rclone
only transfers `--transfers` files at once, only the lowest
`--transfers` expiry times will have `uploading` as `true`. So there
@@ -20285,6 +21663,7 @@ Here is an overview of the major features of each cloud storage system.
| Cloudinary | MD5 | R | No | Yes | - | - |
| Dropbox | DBHASH ¹ | R | Yes | No | - | - |
| Enterprise File Fabric | - | R/W | Yes | No | R/W | - |
+| FileLu Cloud Storage | MD5 | R/W | No | Yes | R | - |
| Files.com | MD5, CRC32 | DR/W | Yes | No | R | - |
| FTP | - | R/W ¹⁰ | No | No | - | - |
| Gofile | MD5 | DR/W | No | Yes | R | - |
@@ -20939,6 +22318,7 @@ Flags for anything which can copy a file.
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -20964,6 +22344,7 @@ Flags used for sync commands.
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
+ --list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
@@ -21012,13 +22393,14 @@ Flags for general networking and HTTP stuff.
--header stringArray Set HTTP header for all transactions
--header-download stringArray Set HTTP header for download transactions
--header-upload stringArray Set HTTP header for upload transactions
+ --max-connections int Maximum number of simultaneous backend API connections, 0 for unlimited
--no-check-certificate Do not verify the server SSL certificate (insecure)
--no-gzip-encoding Don't set Accept-Encoding: gzip
--timeout Duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.70.0")
```
@@ -21053,6 +22435,7 @@ Flags for general configuration of rclone.
-i, --interactive Enable interactive mode
--kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s)
--low-level-retries int Number of low level retries to do (default 10)
+ --max-buffer-memory SizeSuffix If set, don't allocate more than this amount of memory as buffers (default off)
--no-console Hide console window (supported on Windows only)
--no-unicode-normalization Don't normalize unicode characters in filenames
--password-command SpaceSepList Command for supplying password for encrypted configuration
@@ -21090,6 +22473,7 @@ Flags for filtering directory listings.
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -21123,7 +22507,7 @@ Flags for logging and statistics.
```
--log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
+ --log-format Bits Comma separated list of log format options (default date,time)
--log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
@@ -21190,6 +22574,7 @@ Flags to control the Remote Control API.
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
+ --rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
@@ -21219,6 +22604,7 @@ Flags to control the Metrics HTTP endpoint..
--metrics-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--metrics-template string User-specified template
--metrics-user string User name for authentication
+ --metrics-user-from-header string User name from a defined HTTP header
--rc-enable-metrics Enable the Prometheus metrics path at the remote control server
```
@@ -21239,6 +22625,8 @@ Backend-only flags (these can be set in the config file also).
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
+ --azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
@@ -21262,6 +22650,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-az Use Azure CLI tool az for authentication
+ --azureblob-use-copy-blob Whether to use the Copy Blob API when copying to the same storage account (default true)
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -21274,6 +22663,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
--azurefiles-connection-string string Azure Files Connection String
--azurefiles-description string Description of the remote
+ --azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
--azurefiles-endpoint string Endpoint for the service
--azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -21288,6 +22678,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --azurefiles-use-az Use Azure CLI tool az for authentication
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -21350,12 +22741,14 @@ Backend-only flags (these can be set in the config file also).
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
+ --cloudinary-adjust-media-files-extensions Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems (default true)
--cloudinary-api-key string Cloudinary API Key
--cloudinary-api-secret string Cloudinary API Secret
--cloudinary-cloud-name string Cloudinary Environment Name
--cloudinary-description string Description of the remote
--cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-media-extensions stringArray Cloudinary supported media extensions (default 3ds,3g2,3gp,ai,arw,avi,avif,bmp,bw,cr2,cr3,djvu,dng,eps3,fbx,flif,flv,gif,glb,gltf,hdp,heic,heif,ico,indd,jp2,jpe,jpeg,jpg,jxl,jxr,m2ts,mov,mp4,mpeg,mts,mxf,obj,ogv,pdf,ply,png,psd,svg,tga,tif,tiff,ts,u3ma,usdz,wdp,webm,webp,wmv)
--cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
--cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--combine-description string Description of the remote
@@ -21379,6 +22772,10 @@ Backend-only flags (these can be set in the config file also).
--crypt-show-mapping For all files listed show how the names encrypt
--crypt-strict-names If set, this will raise an error when crypt comes across a filename that can't be decrypted
--crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin")
+ --doi-description string Description of the remote
+ --doi-doi string The DOI or the doi.org URL
+ --doi-doi-resolver-api-url string The URL of the DOI resolver API to use
+ --doi-provider string DOI provider
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -21430,7 +22827,6 @@ Backend-only flags (these can be set in the config file also).
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
- --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -21440,11 +22836,14 @@ Backend-only flags (these can be set in the config file also).
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-export-formats CommaSepList Comma separated list of preferred formats for exporting files (default html,md)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
+ --dropbox-show-all-exports Show all exportable files in listings
+ --dropbox-skip-exports Skip exportable files in all listings
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
@@ -21462,6 +22861,9 @@ Backend-only flags (these can be set in the config file also).
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
+ --filelu-description string Description of the remote
+ --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
+ --filelu-key string Your FileLu Rclone key from My Account
--filescom-api-key string The API key used to authenticate with Files.com
--filescom-description string Description of the remote
--filescom-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -21521,7 +22923,6 @@ Backend-only flags (these can be set in the config file also).
--gofile-list-chunk int Number of items to list in each call (default 1000)
--gofile-root-folder-id string ID of the root folder
--gphotos-auth-url string Auth server URL
- --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -21589,6 +22990,8 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
+ --internetarchive-item-derive Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload (default true)
+ --internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
@@ -21684,6 +23087,7 @@ Backend-only flags (these can be set in the config file also).
--onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
+ --onedrive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default off)
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--oos-compartment string Specify compartment OCID, if you need to list buckets
@@ -21709,6 +23113,7 @@ Backend-only flags (these can be set in the config file also).
--oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard")
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --opendrive-access string Files and folders will be uploaded with this access permission (default private) (default "private")
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-description string Description of the remote
--opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -21805,6 +23210,8 @@ Backend-only flags (these can be set in the config file also).
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
+ --s3-ibm-api-key string IBM API Key to be used to obtain IAM token
+ --s3-ibm-resource-instance-id string IBM service instance id
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
@@ -21825,6 +23232,7 @@ Backend-only flags (these can be set in the config file also).
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
+ --s3-sign-accept-encoding Tristate Set if rclone should include Accept-Encoding as part of the signature (default unset)
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
@@ -21841,6 +23249,7 @@ Backend-only flags (these can be set in the config file also).
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-use-unsigned-payload Tristate Whether to use an unsigned payload in PutObject (default unset)
+ --s3-use-x-id Tristate Set if rclone should add x-id URL parameters (default unset)
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-version-deleted Show deleted file markers when using versions
@@ -21866,6 +23275,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-host-key-algorithms SpaceSepList Space separated list of host key algorithms, ordered by preference
+ --sftp-http-proxy string URL for HTTP CONNECT proxy
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-exchange SpaceSepList Space separated list of key exchange algorithms, ordered by preference
--sftp-key-file string Path to PEM-encoded private key file
@@ -21920,6 +23330,7 @@ Backend-only flags (these can be set in the config file also).
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
--smb-spn string Service principal name
+ --smb-use-kerberos Use Kerberos authentication
--smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
@@ -22062,7 +23473,7 @@ on the host.
The _FUSE_ driver is a prerequisite for rclone mounting and should be
installed on host:
```
-sudo apt-get -y install fuse
+sudo apt-get -y install fuse3
```
Create two directories required by rclone docker plugin:
@@ -23066,7 +24477,7 @@ See the [bisync filters](#filtering) section and generic
[--filter-from](https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file)
documentation.
An [example filters file](#example-filters-file) contains filters for
-non-allowed files for synching with Dropbox.
+non-allowed files for syncing with Dropbox.
If you make changes to your filters file then bisync requires a run
with `--resync`. This is a safety feature, which prevents existing files
@@ -23243,7 +24654,7 @@ Using `--check-sync=false` will disable it and may significantly reduce the
sync run times for very large numbers of files.
The check may be run manually with `--check-sync=only`. It runs only the
-integrity check and terminates without actually synching.
+integrity check and terminates without actually syncing.
Note that currently, `--check-sync` **only checks listing snapshots and NOT the
actual files on the remotes.** Note also that the listing snapshots will not
@@ -23720,7 +25131,7 @@ The `--include*`, `--exclude*`, and `--filter` flags are also supported.
### How to filter directories
-Filtering portions of the directory tree is a critical feature for synching.
+Filtering portions of the directory tree is a critical feature for syncing.
Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync:
@@ -23829,7 +25240,7 @@ quashed by adding `--quiet` to the bisync command line.
## Example exclude-style filters files for use with Dropbox {#exclude-filters}
-- Dropbox disallows synching the listed temporary and configuration/data files.
+- Dropbox disallows syncing the listed temporary and configuration/data files.
The `- ` filters exclude these files wherever they may occur
in the sync tree. Consider adding similar exclusions for file types
you don't need to sync, such as core dump and software build files.
@@ -24163,7 +25574,7 @@ test command flags can be equally prefixed by a single `-` or double dash.
- `go test . -case basic -remote local -remote2 local`
runs the `test_basic` test case using only the local filesystem,
- synching one local directory with another local directory.
+ syncing one local directory with another local directory.
Test script output is to the console, while commands within scenario.txt
have their output sent to the `.../workdir/test.log` file,
which is finally compared to the golden copy.
@@ -24394,6 +25805,9 @@ about _Unison_ and synchronization in general.
## Changelog
+### `v1.69.1`
+* Fixed an issue causing listings to not capture concurrent modifications under certain conditions
+
### `v1.68`
* Fixed an issue affecting backends that round modtimes to a lower precision.
@@ -24956,9 +26370,11 @@ The S3 backend can be used with a number of different providers:
- Liara Object Storage
- Linode Object Storage
- Magalu Object Storage
+- MEGA S4 Object Storage
- Minio
- Outscale
- Petabox
+- Pure Storage FlashBlade
- Qiniu Cloud Object Storage (Kodo)
- RackCorp Object Storage
- Rclone Serve S3
@@ -25680,7 +27096,7 @@ Notes on above:
that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket's objects.
-3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exsits, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
+3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.
@@ -25701,7 +27117,8 @@ tries to access data from the glacier storage class you will see an error like b
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
-the object(s) in question before using rclone.
+the object(s) in question before accessing object contents.
+The [restore](#restore) section below shows how to do this with rclone.
Note that rclone only speaks the S3 API it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.
@@ -25719,7 +27136,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
### Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-provider
@@ -25748,6 +27165,10 @@ Properties:
- DigitalOcean Spaces
- "Dreamhost"
- Dreamhost DreamObjects
+ - "Exaba"
+ - Exaba Object Storage
+ - "FlashBlade"
+ - Pure Storage FlashBlade Object Storage
- "GCS"
- Google Cloud Storage
- "HuaweiOBS"
@@ -25768,6 +27189,8 @@ Properties:
- Linode Object Storage
- "Magalu"
- Magalu Object Storage
+ - "Mega"
+ - MEGA S4 Object Storage
- "Minio"
- Minio Object Storage
- "Netease"
@@ -26037,7 +27460,7 @@ Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
-- Provider: !Storj,Selectel,Synology,Cloudflare
+- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade,Mega
- Type: string
- Required: false
- Examples:
@@ -26149,9 +27572,33 @@ Properties:
- "GLACIER_IR"
- Glacier Instant Retrieval storage class
+#### --s3-ibm-api-key
+
+IBM API Key to be used to obtain IAM token
+
+Properties:
+
+- Config: ibm_api_key
+- Env Var: RCLONE_S3_IBM_API_KEY
+- Provider: IBMCOS
+- Type: string
+- Required: false
+
+#### --s3-ibm-resource-instance-id
+
+IBM service instance id
+
+Properties:
+
+- Config: ibm_resource_instance_id
+- Env Var: RCLONE_S3_IBM_RESOURCE_INSTANCE_ID
+- Provider: IBMCOS
+- Type: string
+- Required: false
+
### Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-bucket-acl
@@ -26170,6 +27617,7 @@ Properties:
- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
+- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade
- Type: string
- Required: false
- Examples:
@@ -26985,6 +28433,46 @@ Properties:
- Type: Tristate
- Default: unset
+#### --s3-use-x-id
+
+Set if rclone should add x-id URL parameters.
+
+You can change this if you want to disable the AWS SDK from
+adding x-id URL parameters.
+
+This shouldn't be necessary in normal operation.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+
+Properties:
+
+- Config: use_x_id
+- Env Var: RCLONE_S3_USE_X_ID
+- Type: Tristate
+- Default: unset
+
+#### --s3-sign-accept-encoding
+
+Set if rclone should include Accept-Encoding as part of the signature.
+
+You can change this if you want to stop rclone including
+Accept-Encoding as part of the signature.
+
+This shouldn't be necessary in normal operation.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+
+Properties:
+
+- Config: sign_accept_encoding
+- Env Var: RCLONE_S3_SIGN_ACCEPT_ENCODING
+- Type: Tristate
+- Default: unset
+
#### --s3-directory-bucket
Set to use AWS Directory Buckets
@@ -27104,7 +28592,7 @@ or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Fre
Usage Examples:
- rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
+ rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
@@ -27888,7 +29376,7 @@ Choose a number from below, or type in your own value
location_constraint>1
```
-9. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
+8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
```
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
@@ -27904,8 +29392,7 @@ Choose a number from below, or type in your own value
acl> 1
```
-
-12. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this
+9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this
```
[xxx]
type = s3
@@ -27917,7 +29404,7 @@ acl> 1
acl = private
```
-13. Execute rclone commands
+10. Execute rclone commands
```
1) Create a bucket.
rclone mkdir IBM-COS-XREGION:newbucket
@@ -27936,6 +29423,35 @@ acl> 1
rclone delete IBM-COS-XREGION:newbucket/file.txt
```
+#### IBM IAM authentication
+
+If using IBM IAM authentication with an IBM API key you need to fill in these additional parameters:
+1. Select false for env_auth
+2. Leave `access_key_id` and `secret_access_key` blank
+3. Paste your `ibm_api_key`
+```
+Option ibm_api_key.
+IBM API Key to be used to obtain IAM token
+Enter a value of type string. Press Enter for the default (1).
+ibm_api_key>
+```
+4. Paste your `ibm_resource_instance_id`
+```
+Option ibm_resource_instance_id.
+IBM service instance id
+Enter a value of type string. Press Enter for the default (2).
+ibm_resource_instance_id>
+```
+5. In advanced settings type true for `v2_auth`
+```
+Option v2_auth.
+If true use v2 authentication.
+If this is false (the default) then rclone will use v4 authentication.
+If it is set then rclone will use v2 authentication.
+Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
+Enter a boolean value (true or false). Press Enter for the default (true).
+v2_auth>
+```
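+
+After these steps the relevant part of the resulting config might look
+something like this (a sketch; the remote name and values are
+illustrative):
+
+```
+[ibmcosiam]
+type = s3
+provider = IBMCOS
+env_auth = false
+endpoint = s3.us-east.cloud-object-storage.appdomain.cloud
+ibm_api_key = xxxxxxxxxxxxxxxxxxxxxxxx
+ibm_resource_instance_id = xxxxxxxxxxxxxxxxxxxxxxxx
+v2_auth = true
+```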
+
### IDrive e2 {#idrive-e2}
Here is an example of making an [IDrive e2](https://www.idrive.com/e2/)
@@ -28658,7 +30174,7 @@ location_constraint = au-nsw
### Rclone Serve S3 {#rclone}
Rclone can serve any remote over the S3 protocol. For details see the
-[rclone serve s3](https://rclone.org/commands/rclone_serve_http/) documentation.
+[rclone serve s3](https://rclone.org/commands/rclone_serve_s3/) documentation.
For example, to serve `remote:path` over s3, run the server like this:
@@ -28678,8 +30194,8 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
-Note that setting `disable_multipart_uploads = true` is to work around
-[a bug](https://rclone.org/commands/rclone_serve_http/#bugs) which will be fixed in due course.
+Note that setting `use_multipart_uploads = false` is to work around
+[a bug](https://rclone.org/commands/rclone_serve_s3/#bugs) which will be fixed in due course.
### Scaleway
@@ -28795,20 +30311,17 @@ Press Enter to leave empty.
region>
```
-Choose an endpoint from the list
+Enter your Lyve Cloud endpoint. This field cannot be left empty.
```
-Endpoint for S3 API.
+Endpoint for Lyve Cloud S3 API.
Required when using an S3 clone.
-Choose a number from below, or type in your own value.
-Press Enter to leave empty.
- 1 / Seagate Lyve Cloud US East 1 (Virginia)
- \ (s3.us-east-1.lyvecloud.seagate.com)
- 2 / Seagate Lyve Cloud US West 1 (California)
- \ (s3.us-west-1.lyvecloud.seagate.com)
- 3 / Seagate Lyve Cloud AP Southeast 1 (Singapore)
- \ (s3.ap-southeast-1.lyvecloud.seagate.com)
-endpoint> 1
+Please type in your LyveCloud endpoint.
+Examples:
+- s3.us-west-1.{account_name}.lyve.seagate.com (US West 1 - California)
+- s3.eu-west-1.{account_name}.lyve.seagate.com (EU West 1 - Ireland)
+Enter a value.
+endpoint> s3.us-west-1.global.lyve.seagate.com
```
Leave location constraint blank
@@ -29775,27 +31288,49 @@ Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
- 1 / Atlanta, GA (USA), us-southeast-1
+ 1 / Amsterdam (Netherlands), nl-ams-1
+ \ (nl-ams-1.linodeobjects.com)
+ 2 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
- 2 / Chicago, IL (USA), us-ord-1
+ 3 / Chennai (India), in-maa-1
+ \ (in-maa-1.linodeobjects.com)
+ 4 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
- 3 / Frankfurt (Germany), eu-central-1
+ 5 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
- 4 / Milan (Italy), it-mil-1
+ 6 / Jakarta (Indonesia), id-cgk-1
+ \ (id-cgk-1.linodeobjects.com)
+ 7 / London 2 (Great Britain), gb-lon-1
+ \ (gb-lon-1.linodeobjects.com)
+ 8 / Los Angeles, CA (USA), us-lax-1
+ \ (us-lax-1.linodeobjects.com)
+ 9 / Madrid (Spain), es-mad-1
+ \ (es-mad-1.linodeobjects.com)
+10 / Melbourne (Australia), au-mel-1
+ \ (au-mel-1.linodeobjects.com)
+11 / Miami, FL (USA), us-mia-1
+ \ (us-mia-1.linodeobjects.com)
+12 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
- 5 / Newark, NJ (USA), us-east-1
+13 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
- 6 / Paris (France), fr-par-1
+14 / Osaka (Japan), jp-osa-1
+ \ (jp-osa-1.linodeobjects.com)
+15 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
- 7 / Seattle, WA (USA), us-sea-1
+16 / São Paulo (Brazil), br-gru-1
+ \ (br-gru-1.linodeobjects.com)
+17 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
- 8 / Singapore ap-south-1
+18 / Singapore, ap-south-1
\ (ap-south-1.linodeobjects.com)
- 9 / Stockholm (Sweden), se-sto-1
+19 / Singapore 2, sg-sin-1
+ \ (sg-sin-1.linodeobjects.com)
+20 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
-10 / Washington, DC, (USA), us-iad-1
+21 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
-endpoint> 3
+endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@@ -29960,6 +31495,116 @@ secret_access_key = SECRET_ACCESS_KEY
endpoint = br-ne1.magaluobjects.com
```
+### MEGA S4 {#mega}
+
+[MEGA S4 Object Storage](https://mega.io/objectstorage) is an S3
+compatible object storage system. It has a single pricing tier with no
+additional charges for data transfers or API requests and it is
+included in existing Pro plans.
+
+Here is an example of making a configuration. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process.
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> megas4
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including AWS,... Mega, ...
+ \ (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / MEGA S4 Object Storage
+ \ (Mega)
+[snip]
+provider> Mega
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth>
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> XXX
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> XXX
+
+Option endpoint.
+Endpoint for S3 API.
+Required when using an S3 clone.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Mega S4 eu-central-1 (Amsterdam)
+ \ (s3.eu-central-1.s4.mega.io)
+ 2 / Mega S4 eu-central-2 (Bettembourg)
+ \ (s3.eu-central-2.s4.mega.io)
+ 3 / Mega S4 ca-central-1 (Montreal)
+ \ (s3.ca-central-1.s4.mega.io)
+ 4 / Mega S4 ca-west-1 (Vancouver)
+ \ (s3.ca-west-1.s4.mega.io)
+endpoint> 1
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: Mega
+- access_key_id: XXX
+- secret_access_key: XXX
+- endpoint: s3.eu-central-1.s4.mega.io
+Keep this "megas4" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+This will leave the config file looking like this.
+
+```
+[megas4]
+type = s3
+provider = Mega
+access_key_id = XXX
+secret_access_key = XXX
+endpoint = s3.eu-central-1.s4.mega.io
+```
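+
+Once configured you can use it like any other S3 remote, e.g. (the
+bucket name is illustrative):
+
+    rclone mkdir megas4:bucket
+    rclone copy /path/to/files megas4:bucket
+    rclone ls megas4:bucket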
+
### ArvanCloud {#arvan-cloud}
[ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) ArvanCloud Object Storage goes beyond the limited traditional file storage.
@@ -30361,6 +32006,119 @@ region = us-east-1
endpoint = s3.petabox.io
```
+### Pure Storage FlashBlade
+
+[Pure Storage FlashBlade](https://www.purestorage.com/products/unstructured-data-storage.html) is a high performance S3-compatible object store.
+
+FlashBlade supports most modern S3 features including:
+
+- ListObjectsV2
+- Multipart uploads with AWS-compatible ETags
+- Advanced checksum algorithms (SHA256, CRC32, CRC32C) with trailer support (Purity//FB 4.4.2+)
+- Object versioning and lifecycle management
+- Virtual hosted-style requests (requires DNS configuration)
+
+To configure rclone for Pure Storage FlashBlade:
+
+First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> flashblade
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+ 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
+ \ (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+ 9 / Pure Storage FlashBlade Object Storage
+ \ (FlashBlade)
+[snip]
+provider> FlashBlade
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth> 1
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY_ID
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option endpoint.
+Endpoint for S3 API.
+Required when using an S3 clone.
+Enter a value. Press Enter to leave empty.
+endpoint> https://s3.flashblade.example.com
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: FlashBlade
+- access_key_id: ACCESS_KEY_ID
+- secret_access_key: SECRET_ACCESS_KEY
+- endpoint: https://s3.flashblade.example.com
+Keep this "flashblade" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+This results in the following configuration being stored in `~/.config/rclone/rclone.conf`:
+
+```
+[flashblade]
+type = s3
+provider = FlashBlade
+access_key_id = ACCESS_KEY_ID
+secret_access_key = SECRET_ACCESS_KEY
+endpoint = https://s3.flashblade.example.com
+```
+
+Note: The FlashBlade endpoint should be the S3 data VIP. For virtual-hosted style requests,
+ensure proper DNS configuration: subdomains of the endpoint hostname should resolve to a
+FlashBlade data VIP. For example, if your endpoint is `https://s3.flashblade.example.com`,
+then `bucket-name.s3.flashblade.example.com` should also resolve to the data VIP.
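+
+Once configured, usage follows the standard S3 pattern, e.g. (the
+bucket name is illustrative):
+
+    rclone lsd flashblade:
+    rclone mkdir flashblade:my-bucket
+    rclone sync --interactive /path/to/files flashblade:my-bucket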
+
### Storj
Storj is a decentralized cloud storage which can be used through its
@@ -30430,7 +32188,7 @@ source).
This has the following consequences:
-- Using `rclone rcat` will fail as the medatada doesn't match after upload
+- Using `rclone rcat` will fail as the metadata doesn't match after upload
- Uploading files with `rclone mount` will fail for the same reason
  - This can be worked around by using `--vfs-cache-mode writes` or `--vfs-cache-mode full` or setting `--s3-upload-cutoff` large
- Files uploaded via a multipart upload won't have their modtimes
@@ -31488,7 +33246,7 @@ machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from Box. This only runs from the moment it opens
your browser to the moment you get back the verification code. This
-is on `http://127.0.0.1:53682/` and this it may require you to unblock
+is on `http://127.0.0.1:53682/` and this may require you to unblock
it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this,
@@ -33327,6 +35085,28 @@ Properties:
- Type: Duration
- Default: 0s
+#### --cloudinary-adjust-media-files-extensions
+
+Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems
+
+Properties:
+
+- Config: adjust_media_files_extensions
+- Env Var: RCLONE_CLOUDINARY_ADJUST_MEDIA_FILES_EXTENSIONS
+- Type: bool
+- Default: true
+
+#### --cloudinary-media-extensions
+
+Cloudinary supported media extensions
+
+Properties:
+
+- Config: media_extensions
+- Env Var: RCLONE_CLOUDINARY_MEDIA_EXTENSIONS
+- Type: stringArray
+- Default: [3ds 3g2 3gp ai arw avi avif bmp bw cr2 cr3 djvu dng eps3 fbx flif flv gif glb gltf hdp heic heif ico indd jp2 jpe jpeg jpg jxl jxr m2ts mov mp4 mpeg mts mxf obj ogv pdf ply png psd svg tga tif tiff ts u3ma usdz wdp webm webp wmv]
+
#### --cloudinary-description
Description of the remote.
@@ -34415,7 +36195,7 @@ strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
The chance of a nonce being reused is minuscule. If you wrote an
exabyte of data (10¹⁸ bytes) you would have a probability of
-approximately 2×10⁻³² of re-using a nonce.
+approximately 2×10⁻³² of reusing a nonce.
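+
+As a rough sketch of where that figure comes from (assuming the
+64 KiB chunks and 24 byte / 192 bit nonce described in this section):
+
+    n ≈ 10¹⁸ / 65536 ≈ 1.5×10¹³ chunks
+    p ≈ n² / (2 × 2¹⁹²) ≈ 2.3×10²⁶ / 1.3×10⁵⁸ ≈ 2×10⁻³²
+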
#### Chunk
@@ -34847,6 +36627,188 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+# DOI
+
+The DOI remote is a read only remote for reading files from digital object identifiers (DOI).
+
+Currently, the DOI backend supports DOIs hosted with:
+- [InvenioRDM](https://inveniosoftware.org/products/rdm/)
+ - [Zenodo](https://zenodo.org)
+ - [CaltechDATA](https://data.caltech.edu)
+ - [Other InvenioRDM repositories](https://inveniosoftware.org/showcase/)
+- [Dataverse](https://dataverse.org)
+ - [Harvard Dataverse](https://dataverse.harvard.edu)
+ - [Other Dataverse repositories](https://dataverse.org/installations)
+
+Paths are specified as `remote:path`
+
+Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
+
+## Configuration
+
+Here is an example of how to make a remote called `remote`. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+Enter name for new remote.
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / DOI datasets
+ \ (doi)
+[snip]
+Storage> doi
+Option doi.
+The DOI or the doi.org URL.
+Enter a value.
+doi> 10.5281/zenodo.5876941
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Configuration complete.
+Options:
+- type: doi
+- doi: 10.5281/zenodo.5876941
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
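+
+Since the DOI remote is read only, typical usage is listing and
+copying files down, e.g. (the local path is illustrative):
+
+    rclone ls remote:
+    rclone copy remote: /tmp/doi-dataset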
+
+
+### Standard options
+
+Here are the Standard options specific to doi (DOI datasets).
+
+#### --doi-doi
+
+The DOI or the doi.org URL.
+
+Properties:
+
+- Config: doi
+- Env Var: RCLONE_DOI_DOI
+- Type: string
+- Required: true
+
+### Advanced options
+
+Here are the Advanced options specific to doi (DOI datasets).
+
+#### --doi-provider
+
+DOI provider.
+
+The DOI provider can be set when rclone does not automatically recognize a supported DOI provider.
+
+Properties:
+
+- Config: provider
+- Env Var: RCLONE_DOI_PROVIDER
+- Type: string
+- Required: false
+- Examples:
+ - "auto"
+ - Auto-detect provider
+ - "zenodo"
+ - Zenodo
+ - "dataverse"
+ - Dataverse
+ - "invenio"
+ - Invenio
+
+#### --doi-doi-resolver-api-url
+
+The URL of the DOI resolver API to use.
+
+The DOI resolver can be set for testing or for cases when the canonical DOI resolver API cannot be used.
+
+Defaults to "https://doi.org/api".
+
+Properties:
+
+- Config: doi_resolver_api_url
+- Env Var: RCLONE_DOI_DOI_RESOLVER_API_URL
+- Type: string
+- Required: false
+
+#### --doi-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_DOI_DESCRIPTION
+- Type: string
+- Required: false
+
+## Backend commands
+
+Here are the commands specific to the doi backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](https://rclone.org/rc/#backend-command).
+
+### metadata
+
+Show metadata about the DOI.
+
+ rclone backend metadata remote: [options] [+]
+
+This command returns a JSON object with some information about the DOI.
+
+    rclone backend metadata doi:
+
+It returns a JSON object representing metadata about the DOI.
+
+
+### set
+
+Set command for updating the config parameters.
+
+ rclone backend set remote: [options] [+]
+
+This set command can be used to update the config parameters
+for a running doi backend.
+
+Usage Examples:
+
+ rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI
+
+The option keys are named as they are in the config file.
+
+This rebuilds the connection to the doi backend when it is called with
+the new parameters. Only new parameters need be passed as the values
+will default to those currently in use.
+
+It doesn't return anything.
+
+
+
+
# Dropbox
Paths are specified as `remote:path`
@@ -35033,6 +36995,42 @@ with `--dropbox-batch-mode async` then do a final transfer with
Note that there may be a pause when quitting rclone while rclone
finishes up the last batch using this mode.
+### Exporting files
+
+Certain files in Dropbox are "exportable", such as Dropbox Paper
+documents. These files need to be converted to another format in
+order to be downloaded. Often multiple formats are available for
+conversion.
+
+When rclone downloads an exportable file, it chooses the format to
+download based on the `--dropbox-export-formats` setting. By
+default, the export formats are `html,md`, which are sensible
+defaults for Dropbox Paper.
+
+Rclone chooses the first format ID in the export formats list that
+Dropbox supports for a given file. If no format in the list is
+usable, rclone will choose the default format that Dropbox suggests.
+
+Rclone will change the extension to correspond to the export format.
+Here are some examples of how extensions are mapped:
+
+| File type | Filename in Dropbox | Filename in rclone |
+|----------------|---------------------|--------------------|
+| Paper | mydoc.paper | mydoc.html |
+| Paper template | mydoc.papert | mydoc.papert.html |
+| other | mydoc | mydoc.html |
+
+_Importing_ exportable files is not yet supported by rclone.
+
+Here are the supported export extensions known by rclone. Note that
+rclone does not currently support other formats not on this list,
+even if Dropbox supports them. Also, Dropbox could change the list
+of supported formats at any time.
+
+| Format ID | Name | Description |
+|-----------|----------|----------------------|
+| html | HTML | HTML document |
+| md | Markdown | Markdown text format |
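+
+For example, to download Paper documents as Markdown rather than HTML
+you could run something like this (the paths are illustrative):
+
+    rclone copy --dropbox-export-formats md dropbox:work /tmp/work
+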
### Standard options
@@ -35238,6 +37236,56 @@ Properties:
- Type: string
- Required: false
+#### --dropbox-export-formats
+
+Comma separated list of preferred formats for exporting files
+
+Certain Dropbox files can only be accessed by exporting them to another format.
+These include Dropbox Paper documents.
+
+For each such file, rclone will choose the first format on this list that Dropbox
+considers valid. If none is valid, it will choose Dropbox's default format.
+
+Known formats include: "html", "md" (markdown)
+
+Properties:
+
+- Config: export_formats
+- Env Var: RCLONE_DROPBOX_EXPORT_FORMATS
+- Type: CommaSepList
+- Default: html,md
+
+#### --dropbox-skip-exports
+
+Skip exportable files in all listings.
+
+If given, exportable files practically become invisible to rclone.
+
+Properties:
+
+- Config: skip_exports
+- Env Var: RCLONE_DROPBOX_SKIP_EXPORTS
+- Type: bool
+- Default: false
+
+#### --dropbox-show-all-exports
+
+Show all exportable files in listings.
+
+Adding this flag will allow all exportable files to be server side copied.
+Note that rclone doesn't add extensions to the exportable file names in this mode.
+
+Do **not** use this flag when trying to download exportable files - rclone
+will fail to download them.
+
+
+Properties:
+
+- Config: show_all_exports
+- Env Var: RCLONE_DROPBOX_SHOW_ALL_EXPORTS
+- Type: bool
+- Default: false
+
#### --dropbox-batch-mode
Upload file batching sync|async|off.
@@ -35315,7 +37363,7 @@ Properties:
#### --dropbox-batch-commit-timeout
-Max time to wait for a batch to finish committing
+Max time to wait for a batch to finish committing. (no longer used)
Properties:
@@ -35365,6 +37413,11 @@ non-personal account otherwise the visibility may not be correct.
[forum discussion](https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211) and the
[dropbox SDK issue](https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75).
+Modification times for Dropbox Paper documents are not exact, and
+may not change for some period after the document is edited.
+To make sure you get recent changes in a sync, either wait an hour
+or so, or use `--ignore-times` to force a full sync.
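+
+For example, something like this would force a full sync of recently
+edited Paper documents (the paths are illustrative):
+
+    rclone sync --ignore-times dropbox:Papers /local/papers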
+
## Get your own Dropbox App ID
When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.
@@ -35671,6 +37724,225 @@ Properties:
+# FileLu
+
+[FileLu](https://filelu.com/) is a cloud storage provider offering
+secure file uploads and downloads, flexible storage options, and
+sharing capabilities. With support for high storage limits and
+integration with rclone, FileLu makes managing files in the cloud
+easy. Its cross-platform file backup services let you upload and back
+up files from any internet-connected device.
+
+## Configuration
+
+Here is an example of how to make a remote called `filelu`. First, run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> filelu
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+xx / FileLu Cloud Storage
+ \ "filelu"
+[snip]
+Storage> filelu
+Enter your FileLu Rclone Key:
+key> YOUR_FILELU_RCLONE_KEY RC_xxxxxxxxxxxxxxxxxxxxxxxx
+Configuration complete.
+
+Keep this "filelu" remote?
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+### Paths
+
+A path without an initial `/` will operate in the `Rclone` directory.
+
+A path with an initial `/` will operate at the root where you can see
+the `Rclone` directory.
+
+```
+$ rclone lsf TestFileLu:/
+CCTV/
+Camera/
+Documents/
+Music/
+Photos/
+Rclone/
+Vault/
+Videos/
+```
+
+### Example Commands
+
+Create a new folder named `foldername` in the `Rclone` directory:
+
+ rclone mkdir filelu:foldername
+
+Delete a folder on FileLu:
+
+ rclone rmdir filelu:/folder/path/
+
+Delete a file on FileLu:
+
+ rclone delete filelu:/hello.txt
+
+List files from your FileLu account:
+
+ rclone ls filelu:
+
+List all folders:
+
+ rclone lsd filelu:
+
+Copy a specific file to the FileLu root:
+
+ rclone copy D:\\hello.txt filelu:
+
+Copy files from a local directory to a FileLu directory:
+
+ rclone copy D:/local-folder filelu:/remote-folder/path/
+
+Download a file from FileLu into a local directory:
+
+ rclone copy filelu:/file-path/hello.txt D:/local-folder
+
+Move files from a local directory to a FileLu directory:
+
+ rclone move D:\\local-folder filelu:/remote-path/
+
+Sync files from a local directory to a FileLu directory:
+
+ rclone sync --interactive D:/local-folder filelu:/remote-path/
+
+Mount the remote on Linux:
+
+ rclone mount filelu: /root/mnt --vfs-cache-mode full
+
+Mount the remote on Windows:
+
+ rclone mount filelu: D:/local_mnt --vfs-cache-mode full
+
+Get storage info about the FileLu account:
+
+ rclone about filelu:
+
+All the other rclone commands are supported by this backend.
+
+### FolderID instead of folder path
+
+We use the FolderID instead of the folder name to prevent errors when
+users have identical folder names or paths. For example, if a user has
+two or three folders named "test_folders", the system cannot tell
+which folder to move. In large storage systems, where some clients
+have hundreds of thousands of folders and a few million files,
+duplicate folder names or paths are quite common.
+
+### Modification Times and Hashes
+
+FileLu supports both modification times and MD5 hashes.
+
+FileLu only supports filenames and folder names up to 255 characters in length, where a
+character is a Unicode character.
+
+### Duplicated Files
+
+When uploading and syncing via Rclone, FileLu does not allow uploading
+duplicate files within the same directory. However, you can upload
+duplicate files, provided they are in different directories (folders).
+
+### Failure to Log In / Invalid Credentials or Key
+
+Ensure that you have the correct Rclone key, which can be found in [My
+Account](https://filelu.com/account/). Every time you toggle Rclone
+OFF and ON in My Account, a new RC_xxxxxxxxxxxxxxxxxxxx key is
+generated. Be sure to update your Rclone configuration with the new
+key.
+
+If you are connecting to your FileLu remote for the first time and
+encounter an error such as:
+
+```
+Failed to create file system for "my-filelu-remote:": couldn't login: Invalid credentials
+```
+
+Ensure your Rclone Key is correct.
+
+### Process `killed`
+
+Accounts with large files or extensive metadata may experience
+significant memory usage during list/sync operations. Ensure the
+system running `rclone` has sufficient memory and CPU to handle these
+operations.
+
+
+### Standard options
+
+Here are the Standard options specific to filelu (FileLu Cloud Storage).
+
+#### --filelu-key
+
+Your FileLu Rclone key from My Account
+
+Properties:
+
+- Config: key
+- Env Var: RCLONE_FILELU_KEY
+- Type: string
+- Required: true
+
+### Advanced options
+
+Here are the Advanced options specific to filelu (FileLu Cloud Storage).
+
+#### --filelu-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_FILELU_ENCODING
+- Type: Encoding
+- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation
+
+#### --filelu-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_FILELU_DESCRIPTION
+- Type: string
+- Required: false
+
+
+
+## Limitations
+
+This backend uses a custom library implementing the FileLu API. While
+it supports file transfers, some advanced features may not yet be
+available. Please report any issues to the [rclone forum](https://forum.rclone.org/)
+for troubleshooting and updates.
+
+For further information, visit [FileLu's website](https://filelu.com/).
+
# Files.com
[Files.com](https://www.files.com/) is a cloud storage service that provides a
@@ -39074,6 +41346,32 @@ attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
+### moveid
+
+Move files by ID
+
+ rclone backend moveid remote: [options] [+]
+
+This command moves files by ID
+
+Usage:
+
+ rclone backend moveid drive: ID path
+ rclone backend moveid drive: ID1 path1 ID2 path2
+
+It moves the drive file with the given ID to the path (an rclone path
+which will be passed internally to rclone moveto).
+
+The path should end with a / to indicate that the file should be
+moved, keeping its name, into this directory. If it doesn't end with
+a / then the last path component will be used as the file name.
+
+If the destination is a drive backend then server-side moving will be
+attempted if possible.
+
+Use the --interactive/-i or --dry-run flag to see what would be moved beforehand.
+
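+For example, to move a file into the `backup` directory keeping its
+name (the ID shown is a placeholder):
+
+    rclone backend moveid drive: 1abcDEFghiJKLmnopQRstuVWxyz backup/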
+
### exportformats
Dump the export formats for debug purposes
@@ -39331,6 +41629,11 @@ Google Photos.
limitations, so please read the [limitations section](#limitations)
carefully to make sure it is suitable for your use.
+**NB** From March 31, 2025 rclone can only download photos it
+uploaded. This limitation is due to policy changes at Google. You may
+need to run `rclone config reconnect remote:` to make rclone work
+again after upgrading to rclone v1.70.
+
## Configuration
The initial setup for google cloud storage involves getting a token from Google Photos
@@ -39701,7 +42004,7 @@ Use the gphotosdl proxy for downloading the full resolution images
The Google API will deliver images and video which aren't full
resolution, and/or have EXIF data missing.
-However if you ue the gphotosdl proxy tnen you can download original,
+However if you use the gphotosdl proxy then you can download original,
unchanged images.
This runs a headless browser in the background.
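
For example, assuming gphotosdl is running and listening on its
default address of localhost:8282 (the remote and album names are
hypothetical), you might use:

    rclone copy --gphotos-proxy "http://localhost:8282" gphotos:album/holiday /tmp/holiday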
@@ -39816,7 +42119,7 @@ Properties:
#### --gphotos-batch-commit-timeout
-Max time to wait for a batch to finish committing
+Max time to wait for a batch to finish committing. (no longer used)
Properties:
@@ -39845,6 +42148,11 @@ videos or images or formats that Google Photos doesn't understand,
rclone will upload the file, then Google Photos will give an error
when it is turned into a media item.
+**NB** From March 31, 2025 rclone can only download photos it
+uploaded. This limitation is due to policy changes at Google. You may
+need to run `rclone config reconnect remote:` to make rclone work
+again after upgrading to rclone v1.70.
+
Note that all media items uploaded to Google Photos through the API
are stored in full resolution at "original quality" and **will** count
towards your storage quota in your Google Account. The API does
@@ -41561,7 +43869,7 @@ Enter a value.
config_2fa> 2FACODE
Remote config
--------------------
-[koofr]
+[iclouddrive]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -41578,6 +43886,20 @@ y/e/d> y
ADP is currently unsupported and needs to be disabled
+On iPhone, Settings `>` Apple Account `>` iCloud `>` 'Access iCloud Data on the Web' must be ON, and 'Advanced Data Protection' OFF.
+
+## Troubleshooting
+
+### Missing PCS cookies from the request
+
+This means you have Advanced Data Protection (ADP) turned on, which rclone does not currently support. If you want to use rclone you will have to turn ADP off - see above for instructions.
+
+You will need to clear the `cookies` and the `trust_token` fields in the config. Or you can delete the remote config and start again.
+
+You should then run `rclone config reconnect remote:`.
+
+Note that changing the ADP setting may not take effect immediately - you may need to wait a few hours or a day before rclone will work - keep clearing the config entry and running `rclone config reconnect remote:` until rclone functions properly.
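+
+A minimal sketch of the recovery steps, assuming the remote is named
+`iclouddrive`:
+
+```
+rclone config file        # shows where rclone.conf lives
+# edit rclone.conf and delete the "cookies = ..." and "trust_token = ..." lines
+rclone config reconnect iclouddrive:
+```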
+
### Standard options
@@ -41858,6 +44180,19 @@ Properties:
- Type: string
- Required: false
+#### --internetarchive-item-derive
+
+Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload.
+The derive process produces a number of secondary files from an upload to make an upload more usable on the web.
+Setting this to false is useful for uploading files that are already in a format that IA can display, or to reduce the burden on IA's infrastructure.
+
+Properties:
+
+- Config: item_derive
+- Env Var: RCLONE_INTERNETARCHIVE_ITEM_DERIVE
+- Type: bool
+- Default: true
+
### Advanced options
Here are the Advanced options specific to internetarchive (Internet Archive).
@@ -41888,6 +44223,18 @@ Properties:
- Type: string
- Default: "https://archive.org"
+#### --internetarchive-item-metadata
+
+Metadata to be set on the IA item. This is different from file-level metadata that can be set using --metadata-set.
+The format is key=value and the 'x-archive-meta-' prefix is added automatically.
+
+Properties:
+
+- Config: item_metadata
+- Env Var: RCLONE_INTERNETARCHIVE_ITEM_METADATA
+- Type: stringArray
+- Default: []
+
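+For example, to set two hypothetical item metadata values on upload:
+
+    rclone copy photo.jpg internetarchive:my-item --internetarchive-item-metadata title=Photos --internetarchive-item-metadata collection=test_collection
+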
#### --internetarchive-disable-checksum
Don't ask the server to test against MD5 checksum calculated by rclone.
@@ -42961,6 +45308,7 @@ give an error like `oauth2: server response missing access_token`.
- Go to Security / "Пароль и безопасность"
- Click password for apps / "Пароли для внешних приложений"
- Add the password - give it a name - eg "rclone"
+- Select the permissions level. For some reason just "Full access to Cloud" (WebDAV) doesn't work for rclone currently. You have to select "Full access to Mail, Cloud and Calendar" (all protocols). ([thread on forum.rclone.org](https://forum.rclone.org/t/failed-to-create-file-system-for-mailru-failed-to-authorize-oauth2-invalid-username-or-password-username-or-password-is-incorrect/49298))
- Copy the password and use this password below - your normal login password won't work.
Now run
@@ -44756,6 +47104,65 @@ Properties:
- Type: int
- Default: 16
+#### --azureblob-copy-cutoff
+
+Cutoff for switching to multipart copy.
+
+Any files larger than this that need to be server-side copied will be
+copied in chunks of chunk_size using the put block list API.
+
+Files smaller than this limit will be copied with the Copy Blob API.
+
+Properties:
+
+- Config: copy_cutoff
+- Env Var: RCLONE_AZUREBLOB_COPY_CUTOFF
+- Type: SizeSuffix
+- Default: 8Mi
+
+#### --azureblob-copy-concurrency
+
+Concurrency for multipart copy.
+
+This is the number of chunks of the same file that are copied
+concurrently.
+
+These chunks are not buffered in memory and Microsoft recommends
+setting this value to greater than 1000 in the azcopy documentation.
+
+https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-optimize#increase-concurrency
+
+In tests, copy speed increases almost linearly with copy
+concurrency.
+
+Properties:
+
+- Config: copy_concurrency
+- Env Var: RCLONE_AZUREBLOB_COPY_CONCURRENCY
+- Type: int
+- Default: 512
+
+#### --azureblob-use-copy-blob
+
+Whether to use the Copy Blob API when copying to the same storage account.
+
+If true (the default) then rclone will use the Copy Blob API for
+copies to the same storage account even when the size is above the
+copy_cutoff.
+
+Rclone assumes that the same storage account means the same config
+and does not check for the same storage account in different configs.
+
+There should be no need to change this value.
+
+
+Properties:
+
+- Config: use_copy_blob
+- Env Var: RCLONE_AZUREBLOB_USE_COPY_BLOB
+- Type: bool
+- Default: true
+
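+For example, to make large server-side copies use bigger chunks and
+less concurrency than the defaults (the remote name `azblob:` is an
+assumption):
+
+    rclone copy azblob:src-container/dir azblob:dst-container/dir --azureblob-copy-cutoff 32Mi --azureblob-copy-concurrency 64
+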
#### --azureblob-list-chunk
Size of blob list.
@@ -44975,8 +47382,9 @@ You can set custom upload headers with the `--header-upload` flag.
- Content-Encoding
- Content-Language
- Content-Type
+- X-MS-Tags
-Eg `--header-upload "Content-Type: text/potato"`
+Eg `--header-upload "Content-Type: text/potato"` or `--header-upload "X-MS-Tags: foo=bar"`
## Limitations
@@ -45209,6 +47617,13 @@ If the resource has multiple user-assigned identities you will need to
unset `env_auth` and set `use_msi` instead. See the [`use_msi`
section](#use_msi).
+If you are operating in disconnected clouds, or private clouds such as
+Azure Stack, you may want to set `disable_instance_discovery = true`.
+This determines whether rclone requests Microsoft Entra instance
+metadata from `https://login.microsoft.com/` before authenticating.
+Setting this to `true` will skip this request, making you responsible
+for ensuring the configured authority is valid and trustworthy.
+
##### Env Auth: 3. Azure CLI credentials (as used by the az tool)
Credentials created with the `az` tool can be picked up using `env_auth`.
@@ -45291,6 +47706,13 @@ be explicitly specified using exactly one of the `msi_object_id`,
If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is
set, this is equivalent to using `env_auth`.
+
+#### Azure CLI tool `az` {#use_az}
+
+Set to use the [Azure CLI tool `az`](https://learn.microsoft.com/en-us/cli/azure/)
+as the sole means of authentication.
+Setting this can be useful if you wish to use the `az` CLI on a host with
+a System Managed Identity that you do not want to use.
+Don't set `env_auth` at the same time.
### Standard options
@@ -45604,6 +48026,42 @@ Properties:
- Type: string
- Required: false
+#### --azurefiles-disable-instance-discovery
+
+Skip requesting Microsoft Entra instance metadata.
+
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+It determines whether rclone requests Microsoft Entra instance
+metadata from `https://login.microsoft.com/` before
+authenticating.
+Setting this to true will skip this request, making you responsible
+for ensuring the configured authority is valid and trustworthy.
+
+
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
+#### --azurefiles-use-az
+
+Use Azure CLI tool az for authentication.
+
+Set to use the [Azure CLI tool az](https://learn.microsoft.com/en-us/cli/azure/)
+as the sole means of authentication.
+Setting this can be useful if you wish to use the az CLI on a host with
+a System Managed Identity that you do not want to use.
+Don't set env_auth at the same time.
+
+
+Properties:
+
+- Config: use_az
+- Env Var: RCLONE_AZUREFILES_USE_AZ
+- Type: bool
+- Default: false
+
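+For example, after logging in with the az tool, listing should work
+without any other credentials configured (the remote name is
+hypothetical):
+
+    az login
+    rclone lsd azfiles: --azurefiles-use-az
+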
#### --azurefiles-endpoint
Endpoint for the service.
@@ -46035,7 +48493,7 @@ Properties:
- "us"
- Microsoft Cloud for US Government
- "de"
- - Microsoft Cloud Germany
+ - Microsoft Cloud Germany (deprecated - try global region first).
- "cn"
- Azure and Office 365 operated by Vnet Group in China
@@ -46108,6 +48566,27 @@ Properties:
- Type: bool
- Default: false
+#### --onedrive-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+Any files larger than this will be uploaded in chunks of chunk_size.
+
+This is disabled by default as uploading using single part uploads
+causes rclone to use twice the storage on OneDrive Business: when
+rclone sets the modification time after the upload, OneDrive creates
+a new version.
+
+See: https://github.com/rclone/rclone/issues/1716
+
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_ONEDRIVE_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: off
+
#### --onedrive-chunk-size
Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
@@ -46652,6 +49131,28 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+### Impersonate other users as Admin
+
+Unlike Google Drive, where a service account can impersonate any domain user, OneDrive requires you to authenticate as an admin account and manually set up a remote for each user you wish to impersonate.
+
+1. In the [Microsoft 365 Admin Center](https://admin.microsoft.com), open each user you need to "impersonate" and go to the OneDrive section. Under the heading "Get access to files", click to create the link. This creates a link of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/` but also changes the permissions so that your admin user has access.
+2. Then in PowerShell run the following commands:
+```console
+Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
+Import-Module Microsoft.Graph.Files
+Connect-MgGraph -Scopes "Files.ReadWrite.All"
+# Follow the steps to allow access to your admin user
+# Then run this for each user you want to impersonate to get the Drive ID
+Get-MgUserDefaultDrive -UserId '{emailaddress}'
+# This will give you output of the format:
+# Name Id DriveType CreatedDateTime
+# ---- -- --------- ---------------
+# OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm
+
+```
+3. Then in rclone add a OneDrive remote and choose the `Type in driveID` option, using the Drive ID you got in the previous step - one remote per user. Rclone will then confirm the drive ID and should give you a message of `Found drive "root" of type "business"`, including a URL of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents`. A config sketch follows below.
+
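+A resulting config entry might look something like this sketch (the
+remote name, drive ID and token are placeholders):
+
+```
+[user1]
+type = onedrive
+drive_id = b!XYZ123
+drive_type = business
+token = {"access_token":"..."}
+```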
+
## Limitations
If you don't use rclone for 90 days the refresh token will
@@ -47040,6 +49541,24 @@ Properties:
- Type: SizeSuffix
- Default: 10Mi
+#### --opendrive-access
+
+Files and folders will be uploaded with this access permission (default private)
+
+Properties:
+
+- Config: access
+- Env Var: RCLONE_OPENDRIVE_ACCESS
+- Type: string
+- Default: "private"
+- Examples:
+    - "private"
+        - The file or folder can be accessed only by selected users, allowing them to view, read or write what is essential for them.
+    - "public"
+        - The file or folder can be downloaded by anyone from a web browser. The link can be shared in any way.
+    - "hidden"
+        - The file or folder has the same restrictions as public, but can only be accessed by users who know the URL of the file or folder link.
+
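+For example, to upload a file that anyone can download from a web
+browser (the paths are hypothetical):
+
+    rclone copy report.pdf opendrive:shared --opendrive-access public
+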
#### --opendrive-description
Description of the remote.
@@ -52930,6 +55449,20 @@ Properties:
- Type: string
- Required: false
+#### --sftp-http-proxy
+
+URL for HTTP CONNECT proxy
+
+Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
+
+
+Properties:
+
+- Config: http_proxy
+- Env Var: RCLONE_SFTP_HTTP_PROXY
+- Type: string
+- Required: false
+
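+For example (the proxy URL and remote name are hypothetical):
+
+    rclone lsd mysftp: --sftp-http-proxy http://proxy.example.com:3128
+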
#### --sftp-copy-is-hardlink
Set to enable server side copies using hardlinks.
@@ -52973,7 +55506,8 @@ Properties:
On some SFTP servers (e.g. Synology) the paths are different
for SSH and SFTP so the hashes can't be calculated properly.
-For them using `disable_hashcheck` is a good idea.
+You can either use [`--sftp-path-override`](#--sftp-path-override)
+or [`disable_hashcheck`](#--sftp-disable-hashcheck).
The only ssh agent supported under Windows is Putty's pageant.
@@ -53190,6 +55724,23 @@ Properties:
- Type: string
- Required: false
+#### --smb-use-kerberos
+
+Use Kerberos authentication.
+
+If set, rclone will use Kerberos authentication instead of NTLM. This
+requires a valid Kerberos configuration and credentials cache to be
+available, either in the default locations or as specified by the
+KRB5_CONFIG and KRB5CCNAME environment variables.
+
+
+Properties:
+
+- Config: use_kerberos
+- Env Var: RCLONE_SMB_USE_KERBEROS
+- Type: bool
+- Default: false
+
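+For example, assuming a ticket has already been obtained with kinit
+(the principal and remote name are hypothetical):
+
+    kinit user@EXAMPLE.COM
+    rclone lsd mysmb: --smb-use-kerberos
+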
### Advanced options
Here are the Advanced options specific to smb (SMB / CIFS).
@@ -54823,11 +57374,11 @@ To copy a local directory to an WebDAV directory called backup
### Modification times and hashes
Plain WebDAV does not support modified times. However when used with
-Fastmail Files, Owncloud or Nextcloud rclone will support modified times.
+Fastmail Files, ownCloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes, however when used with
-Fastmail Files, Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes.
-Depending on the exact version of Owncloud or Nextcloud hashes may
+Fastmail Files, ownCloud or Nextcloud rclone will support SHA1 and MD5 hashes.
+Depending on the exact version of ownCloud or Nextcloud hashes may
appear on all objects, or only on objects which had a hash uploaded
with them.
@@ -54865,7 +57416,9 @@ Properties:
- "nextcloud"
- Nextcloud
- "owncloud"
- - Owncloud
+    - ownCloud 10 PHP based WebDAV server
+ - "infinitescale"
+ - ownCloud Infinite Scale
- "sharepoint"
- Sharepoint Online, authenticated by Microsoft account
- "sharepoint-ntlm"
@@ -55074,19 +57627,28 @@ this as the password.
Fastmail supports modified times using the `X-OC-Mtime` header.
-### Owncloud
+### ownCloud
Click on the settings cog in the bottom right of the page and this
will show the WebDAV URL that rclone needs in the config step. It
will look something like `https://example.com/remote.php/webdav/`.
-Owncloud supports modified times using the `X-OC-Mtime` header.
+ownCloud supports modified times using the `X-OC-Mtime` header.
### Nextcloud
-This is configured in an identical way to Owncloud. Note that
+This is configured in an identical way to ownCloud. Note that
Nextcloud initially did not support streaming of files (`rcat`) whereas
-Owncloud did, but [this](https://github.com/nextcloud/nextcloud-snap/issues/365) seems to be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud Server v19).
+ownCloud did, but [this](https://github.com/nextcloud/nextcloud-snap/issues/365) seems to be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud Server v19).
+
+### ownCloud Infinite Scale
+
+The WebDAV URL for Infinite Scale can be found in the details panel of
+any space in Infinite Scale, provided the user has enabled its display
+via the corresponding checkbox in their personal settings.
+
+Infinite Scale works with the chunking [tus](https://tus.io) upload protocol.
+The chunk size is currently fixed at 10 MB.
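+
+A config entry might look something like this sketch (the URL is
+hypothetical):
+
+```
+[ocis]
+type = webdav
+url = https://ocis.example.com/dav/spaces
+vendor = infinitescale
+user = demo
+pass = *** ENCRYPTED ***
+```
+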
### Sharepoint Online
@@ -56509,6 +59071,200 @@ Options:
# Changelog
+## v1.70.0 - 2025-06-17
+
+[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.70.0)
+
+* New backends
+ * [DOI](https://rclone.org/doi/) (Flora Thiebaut)
+ * [FileLu](https://rclone.org/filelu/) (kingston125)
+ * New S3 providers:
+ * [MEGA S4](https://rclone.org/s3/#mega) (Nick Craig-Wood)
+ * [Pure Storage FlashBlade](https://rclone.org/s3/#pure-storage-flashblade) (Jeremy Daer)
+* New commands
+ * [convmv](https://rclone.org/commands/rclone_convmv/): for moving and transforming files (nielash)
+* New Features
+ * Add [`--max-connections`](https://rclone.org/docs/#max-connections-n) to control maximum backend concurrency (Nick Craig-Wood)
+ * Add [`--max-buffer-memory`](https://rclone.org/docs/#max-buffer-memory) to limit total buffer memory usage (Nick Craig-Wood)
+ * Add transform library and [`--name-transform`](https://rclone.org/docs/#name-transform-command-xxxx) flag (nielash)
+ * sync: Implement [`--list-cutoff`](https://rclone.org/docs/#list-cutoff) to allow on disk sorting for reduced memory use (Nick Craig-Wood)
+ * accounting: Add listed stat for number of directory entries listed (Nick Craig-Wood)
+ * backend: Skip hash calculation when the hashType is None (Oleksiy Stashok)
+ * build
+ * Update to go1.24 and make go1.22 the minimum required version (Nick Craig-Wood)
+ * Disable docker builds on PRs & add missing dockerfile changes (Anagh Kumar Baranwal)
+ * Modernize Go usage (Nick Craig-Wood)
+ * Update all dependencies (Nick Craig-Wood)
+ * cmd/authorize: Show required arguments in help text (simwai)
+ * cmd/config: add `--no-output` option (Jess)
+ * cmd/gitannex
+ * Tweak parsing of "rcloneremotename" config (Dan McArdle)
+ * Permit remotes with options (Dan McArdle)
+ * Reject unknown layout modes in INITREMOTE (Dan McArdle)
+ * docker image: Add label org.opencontainers.image.source for release notes in Renovate dependency updates (Robin Schneider)
+ * doc fixes (albertony, Andrew Kreimer, Ben Boeckel, Christoph Berger, Danny Garside, Dimitri Papadopoulos, eccoisle, Ed Craig-Wood, Fernando Fernández, jack, Jeff Geerling, Jugal Kishore, kingston125, luzpaz, Markus Gerstel, Matt Ickstadt, Michael Kebe, Nick Craig-Wood, PrathameshLakawade, Ser-Bul, simonmcnair, Tim White, Zachary Vorhies)
+ * filter:
+ * Add `--hash-filter` to deterministically select a subset of files (Nick Craig-Wood)
+ * Show `--min-size` and `--max-size` in `--dump` filters (Nick Craig-Wood)
+ * hash: Add SHA512 support for file hashes (Enduriel)
+ * http servers: Add `--user-from-header` to use for authentication (Moises Lima)
+ * lib/batcher: Deprecate unused option: batch_commit_timeout (Dan McArdle)
+ * log:
+ * Remove github.com/sirupsen/logrus and replace with log/slog (Nick Craig-Wood)
+ * Add `--windows-event-log-level` to support Windows Event Log (Nick Craig-Wood)
+ * rc
+        * Add `short` parameter to `core/stats` to not return transferring and checking (Nick Craig-Wood)
+ * In `options/info` make FieldName contain a "." if it should be nested (Nick Craig-Wood)
+ * Add rc control for serve commands (Nick Craig-Wood)
+ * rcserver: Improve content-type check (Jonathan Giannuzzi)
+ * serve nfs
+ * Update docs to note Windows is not supported (Zachary Vorhies)
+ * Change the format of `--nfs-cache-type symlink` file handles (Nick Craig-Wood)
+ * Make metadata files have special file handles (Nick Craig-Wood)
+ * touch: Make touch obey `--transfers` (Nick Craig-Wood)
+ * version: Add `--deps` flag to show dependencies and other build info (Nick Craig-Wood)
+* Bug Fixes
+ * serve s3:
+ * Fix ListObjectsV2 response (fhuber)
+ * Remove redundant handler initialization (Tho Neyugn)
+ * stats: Fix goroutine leak and improve stats accounting process (Nathanael Demacon)
+* VFS
+ * Add `--vfs-metadata-extension` to expose metadata sidecar files (Nick Craig-Wood)
+* Azure Blob
+ * Add support for `x-ms-tags` header (Trevor Starick)
+ * Cleanup uncommitted blocks on upload errors (Nick Craig-Wood)
+ * Speed up server side copies for small files (Nick Craig-Wood)
+ * Implement multipart server side copy (Nick Craig-Wood)
+ * Remove uncommitted blocks on InvalidBlobOrBlock error (Nick Craig-Wood)
+ * Fix handling of objects with // in (Nick Craig-Wood)
+ * Handle retry error codes more carefully (Nick Craig-Wood)
+ * Fix errors not being retried when doing single part copy (Nick Craig-Wood)
+ * Fix multipart server side copies of 0 sized files (Nick Craig-Wood)
+* Azurefiles
+ * Add `--azurefiles-use-az` and `--azurefiles-disable-instance-discovery` (b-wimmer)
+* B2
+ * Add SkipDestructive handling to backend commands (Pat Patterson)
+ * Use file id from listing when not presented in headers (ahxxm)
+* Cloudinary
+ * Automatically add/remove known media files extensions (yuval-cloudinary)
+ * Var naming convention (yuval-cloudinary)
+* Drive
+ * Added `backend moveid` command (Spencer McCullough)
+* Dropbox
+ * Support Dropbox Paper (Dave Vasilevsky)
+* FTP
+ * Add `--ftp-http-proxy` to connect via HTTP CONNECT proxy
+* Gofile
+ * Update to use new direct upload endpoint (wbulot)
+* Googlephotos
+ * Update read only and read write scopes to meet Google's requirements. (Germán Casares)
+* Iclouddrive
+ * Fix panic and files potentially downloaded twice (Clément Wehrung)
+* Internetarchive
+    * Add `--internetarchive-item-metadata="key=value"` for setting item metadata (Corentin Barreau)
+* Onedrive
+ * Fix "The upload session was not found" errors (Nick Craig-Wood)
+ * Re-add `--onedrive-upload-cutoff` flag (Nick Craig-Wood)
+ * Fix crash if no metadata was updated (Nick Craig-Wood)
+* Opendrive
+ * Added `--opendrive-access` flag to handle permissions (Joel K Biju)
+* Pcloud
+ * Fix "Access denied. You do not have permissions to perform this operation" on large uploads (Nick Craig-Wood)
+* S3
+ * Fix handling of objects with // in (Nick Craig-Wood)
+ * Add IBM IAM signer (Alexander Minbaev)
+ * Split the GCS quirks into `--s3-use-x-id` and `--s3-sign-accept-encoding` (Nick Craig-Wood)
+ * Implement paged listing interface ListP (Nick Craig-Wood)
+ * Add Pure Storage FlashBlade provider support (Jeremy Daer)
+ * Require custom endpoint for Lyve Cloud v2 support (PrathameshLakawade)
+ * MEGA S4 support (Nick Craig-Wood)
+* SFTP
+ * Add `--sftp-http-proxy` to connect via HTTP CONNECT proxy (Nick Craig-Wood)
+* Smb
+ * Add support for kerberos authentication (Jonathan Giannuzzi)
+ * Improve connection pooling efficiency (Jonathan Giannuzzi)
+* WebDAV
+ * Retry propfind on 425 status (Jörn Friedrich Dreyer)
+ * Add an ownCloud Infinite Scale vendor that enables tus chunked upload support (Klaas Freitag)
+
+## v1.69.3 - 2025-05-21
+
+[See commits](https://github.com/rclone/rclone/compare/v1.69.2...v1.69.3)
+
+* Bug Fixes
+ * build: Reapply update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix CVE-2025-30204 (dependabot[bot])
+ * build: Update github.com/ebitengine/purego to work around bug in go1.24.3 (Nick Craig-Wood)
+
+## v1.69.2 - 2025-05-01
+
+[See commits](https://github.com/rclone/rclone/compare/v1.69.1...v1.69.2)
+
+* Bug fixes
+    * accounting: Fix percentDiff calculation (Anagh Kumar Baranwal)
+ * build
+ * Update github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2 to fix CVE-2025-30204 (dependabot[bot])
+ * Update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix CVE-2025-30204 (dependabot[bot])
+ * Update golang.org/x/crypto to v0.35.0 to fix CVE-2025-22869 (Nick Craig-Wood)
+ * Update golang.org/x/net from 0.36.0 to 0.38.0 to fix CVE-2025-22870 (dependabot[bot])
+        * Update golang.org/x/net to 0.36.0 to fix CVE-2025-22869 (dependabot[bot])
+ * Stop building with go < go1.23 as security updates forbade it (Nick Craig-Wood)
+ * Fix docker plugin build (Anagh Kumar Baranwal)
+ * cmd: Fix crash if rclone is invoked without any arguments (Janne Hellsten)
+ * config: Read configuration passwords from stdin even when terminated with EOF (Samantha Bowen)
+ * doc fixes (Andrew Kreimer, Danny Garside, eccoisle, Ed Craig-Wood, emyarod, jack, Jugal Kishore, Markus Gerstel, Michael Kebe, Nick Craig-Wood, simonmcnair, simwai, Zachary Vorhies)
+ * fs: Fix corruption of SizeSuffix with "B" suffix in config (eg --min-size) (Nick Craig-Wood)
+ * lib/http: Fix race between Serve() and Shutdown() (Nick Craig-Wood)
+ * object: Fix memory object out of bounds Seek (Nick Craig-Wood)
+ * operations: Fix call fmt.Errorf with wrong err (alingse)
+ * rc
+ * Disable the metrics server when running `rclone rc` (hiddenmarten)
+ * Fix debug/* commands not being available over unix sockets (Nick Craig-Wood)
+ * serve nfs: Fix unlikely crash (Nick Craig-Wood)
+ * stats: Fix the speed not getting updated after a pause in the processing (Anagh Kumar Baranwal)
+ * sync
+ * Fix cpu spinning when empty directory finding with leading slashes (Nick Craig-Wood)
+ * Copy dir modtimes even when copyEmptySrcDirs is false (ll3006)
+* VFS
+ * Fix directory cache serving stale data (Lorenz Brun)
+ * Fix inefficient directory caching when directory reads are slow (huanghaojun)
+ * Fix integration test failures (Nick Craig-Wood)
+* Drive
+ * Metadata: fix error when setting copy-requires-writer-permission on a folder (Nick Craig-Wood)
+* Dropbox
+ * Retry link without expiry (Dave Vasilevsky)
+* HTTP
+ * Correct root if definitely pointing to a file (nielash)
+* Iclouddrive
+ * Fix so created files are writable (Ben Alex)
+* Onedrive
+ * Fix metadata ordering in permissions (Nick Craig-Wood)
+
+## v1.69.1 - 2025-02-14
+
+[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.69.1)
+
+* Bug Fixes
+ * lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
+ * bisync: Fix listings missing concurrent modifications (nielash)
+ * serve s3: Fix list objects encoding-type (Nick Craig-Wood)
+ * fs: Fix confusing "didn't find section in config file" error (Nick Craig-Wood)
+ * doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
+ * build: Added parallel docker builds and caching for go build in the container (Anagh Kumar Baranwal)
+* VFS
+ * Fix the cache failing to upload symlinks when `--links` was specified (Nick Craig-Wood)
+ * Fix race detected by race detector (Nick Craig-Wood)
+ * Close the change notify channel on Shutdown (izouxv)
+* B2
+ * Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
+* Iclouddrive
+ * Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
+* Onedrive
+ * Mark German (de) region as deprecated (Nick Craig-Wood)
+* S3
+ * Added new storage class to magalu provider (Bruno Fernandes)
+ * Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
+ * Add latest Linode Object Storage endpoints (jbagwell-akamai)
+
## v1.69.0 - 2025-01-12
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)
@@ -56538,7 +59294,7 @@ Options:
* fs: Make `--links` flag global and add new `--local-links` and `--vfs-links` flags (Nick Craig-Wood)
* http servers: Disable automatic authentication skipping for unix sockets in http servers (Moises Lima)
    * This was making it impossible to use unix sockets with a proxy
- * This might now cause rclone to need authenticaton where it didn't before
+ * This might now cause rclone to need authentication where it didn't before
* oauthutil: add support for OAuth client credential flow (Martin Hassack, Nick Craig-Wood)
* operations: make log messages consistent for mkdir/rmdir at INFO level (Nick Craig-Wood)
* rc: Add `relative` to [vfs/queue-set-expiry](https://rclone.org/rc/#vfs-queue-set-expiry) (Nick Craig-Wood)
@@ -57216,7 +59972,7 @@ instead of of `--size-only`, when `check` is not available.
* Update all dependencies (Nick Craig-Wood)
* Refactor version info and icon resource handling on windows (albertony)
* doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri Papadopoulos, Herby Gillot, Joda Stößer, Manoj Ghosh, Nick Craig-Wood)
- * Implement `--metadata-mapper` to transform metatadata with a user supplied program (Nick Craig-Wood)
+ * Implement `--metadata-mapper` to transform metadata with a user supplied program (Nick Craig-Wood)
* Add `ChunkWriterDoesntSeek` feature flag and set it for b2 (Nick Craig-Wood)
* lib/http: Export basic go string functions for use in `--template` (Gabriel Espinoza)
* makefile: Use POSIX compatible install arguments (Mina Galić)
@@ -57331,7 +60087,7 @@ instead of of `--size-only`, when `check` is not available.
* Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
* B2
* Fix multipart upload: corrupted on transfer: sizes differ XXX vs 0 (Nick Craig-Wood)
- * Fix locking window when getting mutipart upload URL (Nick Craig-Wood)
+ * Fix locking window when getting multipart upload URL (Nick Craig-Wood)
* Fix server side copies greater than 4GB (Nick Craig-Wood)
* Fix chunked streaming uploads (Nick Craig-Wood)
* Reduce default `--b2-upload-concurrency` to 4 to reduce memory usage (Nick Craig-Wood)
@@ -62170,6 +64926,20 @@ See the [remote setup docs](https://rclone.org/remote_setup/) for more info.
This has now been documented in its own [remote setup page](https://rclone.org/remote_setup/).
+### How can I get rid of the "Config file not found" notice?
+
+If you see a notice like 'NOTICE: Config file "rclone.conf" not found', this
+means you have not configured any remotes.
+
+If you need to configure a remote, see the [config help docs](https://rclone.org/docs/#configure).
+
+If you are using rclone entirely with [on the fly remotes](https://rclone.org/docs/#backend-path-to-dir),
+you can create an empty config file to get rid of this notice, for example:
+
+```
+rclone config touch
+```
+
### Can rclone sync directly from drive to s3 ###
Rclone can sync between two remote cloud storage systems just fine.
@@ -62381,12 +65151,22 @@ value, say `export GOGC=20`. This will make the garbage collector
work harder, reducing memory size at the expense of CPU usage.
The most common cause of rclone using lots of memory is a single
-directory with millions of files in. Rclone has to load this entirely
-into memory as rclone objects. Each rclone object takes 0.5k-1k of
-memory. There is
+directory with millions of files in.
+
+Before rclone v1.70, rclone had to load this entirely into memory as
+rclone objects. Each rclone object takes 0.5k-1k of memory. There is
[a workaround for this](https://github.com/rclone/rclone/wiki/Big-syncs-with-millions-of-files)
which involves a bit of scripting.
+However, with rclone v1.70 and later, rclone will automatically save
+directory entries to disk when a directory with more than
+[`--list-cutoff`](https://rclone.org/docs/#list-cutoff) (1,000,000 by default) entries
+is detected.
+
+From v1.70 rclone also has the [--max-buffer-memory](https://rclone.org/docs/#max-buffer-memory)
+flag which helps particularly when multi-thread transfers are using
+too much memory.
+
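+For example, to spill directory listings to disk sooner on a
+memory-constrained machine (the remote names are hypothetical):
+
+```
+rclone sync --list-cutoff 100000 bigremote:dir backup:dir
+```
+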
### Rclone changes fullwidth Unicode punctuation marks in file names
For example: On a Windows system, you have a file with name `Test:1.jpg`,
@@ -63238,7 +66018,6 @@ put them back in again.` >}}
* ben-ba
* Eli Orzitzer
* Anthony Metzidis
- * emyarod
* keongalvin
* rarspace01
* Paul Stern
@@ -63354,6 +66133,64 @@ put them back in again.` >}}
* ToM
* TAKEI Yuya <853320+takei-yuya@users.noreply.github.com>
* Francesco Frassinelli
+ * Matt Ickstadt
+ * Spencer McCullough
+ * Jonathan Giannuzzi
+ * Christoph Berger
+ * Tim White
+ * Robin Schneider
+ * izouxv
+ * Moises Lima
+ * Bruno Fernandes
+ * Corentin Barreau
+ * hiddenmarten
+ * Trevor Starick
+ * b-wimmer <132347192+b-wimmer@users.noreply.github.com>
+ * Jess
+ * Zachary Vorhies
+ * Alexander Minbaev
+ * Joel K Biju
+ * ll3006
+ * jbagwell-akamai <113531113+jbagwell-akamai@users.noreply.github.com>
+ * Michael Kebe
+ * Lorenz Brun
+ * Dave Vasilevsky
+ * luzpaz
+ * jack <9480542+jackusm@users.noreply.github.com>
+ * Jörn Friedrich Dreyer
+ * alingse
+ * Fernando Fernández
+ * eccoisle <167755281+eccoisle@users.noreply.github.com>
+ * Klaas Freitag
+ * Danny Garside
+ * Samantha Bowen
+ * simonmcnair <101189766+simonmcnair@users.noreply.github.com>
+ * huanghaojun
+ * Enduriel
+ * Markus Gerstel
+ * simwai <16225108+simwai@users.noreply.github.com>
+ * Ben Alex
+ * Klaas Freitag
+ * Andrew Kreimer
+ * Ed Craig-Wood <138211970+edc-w@users.noreply.github.com>
+ * Christian Richter <1058116+dragonchaser@users.noreply.github.com>
+ * Ralf Haferkamp
+ * Jugal Kishore
+ * Tho Neyugn
+ * Ben Boeckel
+ * Clément Wehrung
+ * Jeff Geerling
+ * Germán Casares
+ * fhuber
+ * wbulot
+ * Jeremy Daer
+ * Oleksiy Stashok
+ * PrathameshLakawade
+ * Nathanael Demacon <7271496+quantumsheep@users.noreply.github.com>
+ * ahxxm
+ * Flora Thiebaut
+ * kingston125
+ * Ser-Bul <30335009+Ser-Bul@users.noreply.github.com>
# Contact the rclone project
diff --git a/MANUAL.txt b/MANUAL.txt
index 9247df8dc..f302efda6 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,76 @@
rclone(1) User Manual
Nick Craig-Wood
-Jan 12, 2025
+Jun 17, 2025
+
+NAME
+
+rclone - manage files on cloud storage
+
+SYNOPSIS
+
+ Usage:
+ rclone [flags]
+ rclone [command]
+
+ Available commands:
+ about Get quota information from the remote.
+ authorize Remote authorization.
+ backend Run a backend-specific command.
+ bisync Perform bidirectional synchronization between two paths.
+ cat Concatenates any files and sends them to stdout.
+ check Checks the files in the source and destination match.
+ checksum Checks the files in the destination against a SUM file.
+ cleanup Clean up the remote if possible.
+ completion Output completion script for a given shell.
+ config Enter an interactive configuration session.
+ convmv Convert file and directory names in place.
+ copy Copy files from source to dest, skipping identical files.
+ copyto Copy files from source to dest, skipping identical files.
+ copyurl Copy the contents of the URL supplied content to dest:path.
+ cryptcheck Cryptcheck checks the integrity of an encrypted remote.
+ cryptdecode Cryptdecode returns unencrypted file names.
+ dedupe Interactively find duplicate filenames and delete/rename them.
+ delete Remove the files in path.
+ deletefile Remove a single file from remote.
+ gendocs Output markdown docs for rclone to the directory supplied.
+ gitannex Speaks with git-annex over stdin/stdout.
+ hashsum Produces a hashsum file for all the objects in the path.
+ help Show help for rclone commands, flags and backends.
+ link Generate public link to file/folder.
+ listremotes List all the remotes in the config file and defined in environment variables.
+ ls List the objects in the path with size and path.
+ lsd List all directories/containers/buckets in the path.
+ lsf List directories and objects in remote:path formatted for parsing.
+ lsjson List directories and objects in the path in JSON format.
+ lsl List the objects in path with modification time, size and path.
+ md5sum Produces an md5sum file for all the objects in the path.
+ mkdir Make the path if it doesn't already exist.
+ mount Mount the remote as file system on a mountpoint.
+ move Move files from source to dest.
+ moveto Move file or directory from source to dest.
+ ncdu Explore a remote with a text based user interface.
+ nfsmount Mount the remote as file system on a mountpoint.
+ obscure Obscure password for use in the rclone config file.
+ purge Remove the path and all of its contents.
+ rc Run a command against a running rclone.
+ rcat Copies standard input to file on remote.
+ rcd Run rclone listening to remote control commands only.
+ rmdir Remove the empty directory at path.
+ rmdirs Remove empty directories under the path.
+ selfupdate Update the rclone binary.
+ serve Serve a remote over a protocol.
+ settier Changes storage class/tier of objects in remote.
+ sha1sum Produces an sha1sum file for all the objects in the path.
+ size Prints the total size and number of objects in remote:path.
+ sync Make source and dest identical, modifying destination only.
+ test Run a test command
+ touch Create new file or change file modification time.
+ tree List the contents of the remote in a tree like fashion.
+ version Show the version number.
+
+ Use "rclone [command] --help" for more information about a command.
+    Use "rclone help flags" to see the global flags.
+ Use "rclone help backends" for a list of supported services.
Rclone syncs your files to cloud storage
@@ -110,6 +180,8 @@ S3, that work out of the box.)
- Enterprise File Fabric
- Fastmail Files
- Files.com
+- FileLu Cloud Storage
+- FlashBlade
- FTP
- Gofile
- Google Cloud Storage
@@ -134,7 +206,8 @@ S3, that work out of the box.)
- Magalu
- Mail.ru Cloud
- Memset Memstore
-- Mega
+- MEGA
+- MEGA S4
- Memory
- Microsoft Azure Blob Storage
- Microsoft Azure Files Storage
@@ -528,7 +601,7 @@ developers so it may be out of date. Its current version is as below.
Source installation
-Make sure you have git and Go installed. Go version 1.18 or newer is
+Make sure you have git and Go installed. Go version 1.22 or newer is
required, the latest release is recommended. You can get it from your
package manager, or download it from golang.org/dl. Then you can run the
following:
@@ -849,6 +922,7 @@ See the following for detailed instructions for
- Digi Storage
- Dropbox
- Enterprise File Fabric
+- FileLu Cloud Storage
- Files.com
- FTP
- Gofile
@@ -1088,6 +1162,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -1120,6 +1195,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1290,6 +1366,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -1312,6 +1389,7 @@ Flags used for sync commands
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
+ --list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
@@ -1339,6 +1417,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1442,6 +1521,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -1474,6 +1554,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1563,6 +1644,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1600,6 +1682,10 @@ include/exclude filters - everything will be removed. Use the delete
command if you want to selectively delete files. To delete empty
directories only, use command rmdir or rmdirs.
+The concurrency of this operation is controlled by the --checkers global
+flag. However, some backends will implement this command directly, in
+which case --checkers will be ignored.
+
Important: Since this can cause data loss, test first with the --dry-run
or the --interactive/-i flag.
@@ -1771,6 +1857,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1858,6 +1945,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1956,6 +2044,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2044,6 +2133,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2118,6 +2208,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2195,6 +2286,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2264,6 +2356,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2332,11 +2425,16 @@ Or
beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
+If you supply the --deps flag then rclone will print a list of all the
+packages it depends on and their versions along with some other
+information about the build.
+
rclone version [flags]
Options
--check Check for new version
+ --deps Show the Go dependencies
-h, --help help for version
See the global flags page for global options not listed here.
@@ -2595,6 +2693,11 @@ Synopsis
Remote authorization. Used to authorize a remote or headless rclone from
a machine with a browser - use as instructed by rclone config.
+The command requires 1-3 arguments:
+
+- fs name (e.g., "drive", "s3", etc.)
+- Either a base64 encoded JSON blob obtained from a previous rclone
+  config session
+- Or a client_id and client_secret pair obtained from the remote
+  service
+
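+For example, to authorize a Google Drive remote, optionally supplying
+your own client_id and client_secret (the values shown are
+placeholders):
+
+    rclone authorize "drive"
+    rclone authorize "drive" "my_client_id" "my_client_secret"
+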
Use --auth-no-open-browser to prevent rclone from opening the auth
link in the default browser automatically.
@@ -2602,7 +2705,7 @@ Use --template to generate HTML output via a custom Go template. If a
blank string is provided as an argument to this flag, the default
template is used.
- rclone authorize [flags]
+ rclone authorize [base64_json_blob | client_id client_secret] [flags]
Options
@@ -2751,6 +2854,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -2783,6 +2887,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2866,6 +2971,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2970,6 +3076,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3278,6 +3385,7 @@ Options
--continue Continue the configuration process with an answer
-h, --help help for create
--no-obscure Force any passwords not to be obscured
+ --no-output Don't provide any output
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
@@ -3467,12 +3575,12 @@ re-encrypt the config.
When --password-command is called to change the password then the
environment variable RCLONE_PASSWORD_CHANGE=1 will be set. So if
-changing passwords programatically you can use the environment variable
+changing passwords programmatically you can use the environment variable
to distinguish which password you must supply.
Alternatively you can remove the password first (with
rclone config encryption remove), then set it again with this command
-which may be easier if you don't mind the unecrypted config file being
+which may be easier if you don't mind the unencrypted config file being
on the disk briefly.
rclone config encryption set [flags]
@@ -3766,6 +3874,7 @@ Options
--continue Continue the configuration process with an answer
-h, --help help for update
--no-obscure Force any passwords not to be obscured
+ --no-output Don't provide any output
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
@@ -3799,6 +3908,429 @@ See Also
- rclone config - Enter an interactive configuration session.
+rclone convmv
+
+Convert file and directory names in place.
+
+Synopsis
+
+convmv supports advanced path name transformations for converting and
+renaming files and directories by applying prefixes, suffixes, and other
+alterations.
+
+ ---------------------------------------------------------------------------------------------
+ Command Description
+ --------------------------------------------------------- -----------------------------------
+ --name-transform prefix=XXXX Prepends XXXX to the file name.
+
+ --name-transform suffix=XXXX Appends XXXX to the file name after
+ the extension.
+
+ --name-transform suffix_keep_extension=XXXX Appends XXXX to the file name while
+ preserving the original file
+ extension.
+
+ --name-transform trimprefix=XXXX Removes XXXX if it appears at the
+ start of the file name.
+
+ --name-transform trimsuffix=XXXX Removes XXXX if it appears at the
+ end of the file name.
+
+ --name-transform regex=/pattern/replacement/ Applies a regex-based
+ transformation.
+
+ --name-transform replace=old:new Replaces occurrences of old with
+ new in the file name.
+
+ --name-transform date={YYYYMMDD} Appends or prefixes the specified
+ date format.
+
+ --name-transform truncate=N Truncates the file name to a
+ maximum of N characters.
+
+ --name-transform base64encode Encodes the file name in Base64.
+
+ --name-transform base64decode Decodes a Base64-encoded file name.
+
+ --name-transform encoder=ENCODING Converts the file name to the
+ specified encoding (e.g.,
+ ISO-8859-1, Windows-1252,
+ Macintosh).
+
+ --name-transform decoder=ENCODING Decodes the file name from the
+ specified encoding.
+
+ --name-transform charmap=MAP Applies a character mapping
+ transformation.
+
+ --name-transform lowercase Converts the file name to
+ lowercase.
+
+ --name-transform uppercase Converts the file name to
+ UPPERCASE.
+
+ --name-transform titlecase Converts the file name to Title
+ Case.
+
+ --name-transform ascii Strips non-ASCII characters.
+
+ --name-transform url URL-encodes the file name.
+
+ --name-transform nfc Converts the file name to NFC
+ Unicode normalization form.
+
+ --name-transform nfd Converts the file name to NFD
+ Unicode normalization form.
+
+ --name-transform nfkc Converts the file name to NFKC
+ Unicode normalization form.
+
+ --name-transform nfkd Converts the file name to NFKD
+ Unicode normalization form.
+
+ --name-transform command=/path/to/my/program Executes an external program to
+ transform file names.
+ ---------------------------------------------------------------------------------------------
+
+Conversion modes:
+
+ none
+ nfc
+ nfd
+ nfkc
+ nfkd
+ replace
+ prefix
+ suffix
+ suffix_keep_extension
+ trimprefix
+ trimsuffix
+ index
+ date
+ truncate
+ base64encode
+ base64decode
+ encoder
+ decoder
+ ISO-8859-1
+ Windows-1252
+ Macintosh
+ charmap
+ lowercase
+ uppercase
+ titlecase
+ ascii
+ url
+ regex
+ command
+
+Char maps:
+
+
+ IBM-Code-Page-037
+ IBM-Code-Page-437
+ IBM-Code-Page-850
+ IBM-Code-Page-852
+ IBM-Code-Page-855
+ Windows-Code-Page-858
+ IBM-Code-Page-860
+ IBM-Code-Page-862
+ IBM-Code-Page-863
+ IBM-Code-Page-865
+ IBM-Code-Page-866
+ IBM-Code-Page-1047
+ IBM-Code-Page-1140
+ ISO-8859-1
+ ISO-8859-2
+ ISO-8859-3
+ ISO-8859-4
+ ISO-8859-5
+ ISO-8859-6
+ ISO-8859-7
+ ISO-8859-8
+ ISO-8859-9
+ ISO-8859-10
+ ISO-8859-13
+ ISO-8859-14
+ ISO-8859-15
+ ISO-8859-16
+ KOI8-R
+ KOI8-U
+ Macintosh
+ Macintosh-Cyrillic
+ Windows-874
+ Windows-1250
+ Windows-1251
+ Windows-1252
+ Windows-1253
+ Windows-1254
+ Windows-1255
+ Windows-1256
+ Windows-1257
+ Windows-1258
+ X-User-Defined
+
+Encoding masks:
+
+ Asterisk
+ BackQuote
+ BackSlash
+ Colon
+ CrLf
+ Ctl
+ Del
+ Dollar
+ Dot
+ DoubleQuote
+ Exclamation
+ Hash
+ InvalidUtf8
+ LeftCrLfHtVt
+ LeftPeriod
+ LeftSpace
+ LeftTilde
+ LtGt
+ None
+ Percent
+ Pipe
+ Question
+ Raw
+ RightCrLfHtVt
+ RightPeriod
+ RightSpace
+ Semicolon
+ SingleQuote
+ Slash
+ SquareBracket
+
+Examples:
+
+ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
+ // Output: STORIES/THE QUICK BROWN FOX!.TXT
+
+ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow"
+ // Output: stories/The Slow Brown Turtle!.txt
+
+ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode"
+ // Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0
+
+ rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode"
+ // Output: stories/The Quick Brown Fox!.txt
+
+ rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc"
+ // Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
+
+ rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd"
+ // Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
+
+ rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii"
+ // Output: stories/The Quick Brown Fox!.txt
+
+ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt"
+ // Output: stories/The Quick Brown Fox!
+
+ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_"
+ // Output: OLD_stories/OLD_The Quick Brown Fox!.txt
+
+ rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7"
+ // Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt
+
+ rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket"
+ // Output: stories/The Quick Brown Fox: A Memoir [draft].txt
+
+ rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21"
+ // Output: stories/The Quick Brown 🦊 Fox
+
+ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
+ // Output: stories/The Quick Brown Fox!.txt
+
+ rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
+ // Output: stories/The Quick Brown Fox!-20250617
+
+ rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
+ // Output: stories/The Quick Brown Fox!-2025-06-17 0551PM
+
+ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
+ // Output: ababababababab/ababab ababababab ababababab ababab!abababab
+
+Multiple transformations can be used in sequence, applied in the order
+they are specified on the command line.
+
+The --name-transform flag is also available in sync, copy, and move.
+
+Files vs Directories
+
+By default --name-transform will only apply to file names. This means
+only the leaf file name will be transformed. However some of the
+transforms would be better applied to the whole path or just to
+directories. To choose which part of the file path is affected, some
+tags can be added to the --name-transform.
+
+ -----------------------------------------------------------------------
+ Tag Effect
+ ----------------------------------- -----------------------------------
+ file Only transform the leaf name of
+ files (DEFAULT)
+
+ dir Only transform name of directories
+ - these may appear anywhere in the
+ path
+
+ all Transform the entire path for files
+ and directories
+ -----------------------------------------------------------------------
+
+This is used by adding the tag into the transform name like this:
+--name-transform file,prefix=ABC or --name-transform dir,prefix=DEF.
+
+For some conversions, using all is more likely to be useful, for
+example --name-transform all,nfc.
+
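+As an illustrative sketch (the path is a placeholder), transforming
+only the directory part of a path might look like:
+
+ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "dir,uppercase"
+ // Output: STORIES/The Quick Brown Fox!.txt
+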
+Note that --name-transform must not add path separators / to the name.
+Doing so will cause an error.
+
+Ordering and Conflicts
+
+- Transformations will be applied in the order specified by the user.
+ - If the file tag is in use (the default) then only the leaf name
+ of files will be transformed.
+ - If the dir tag is in use then directories anywhere in the path
+ will be transformed
+ - If the all tag is in use then directories and files anywhere in
+ the path will be transformed
+ - Each transformation will be run one path segment at a time.
+ - If a transformation adds a / or ends up with an empty path
+ segment then that will be an error.
+- It is up to the user to put the transformations in a sensible order.
+ - Conflicting transformations, such as prefix followed by
+ trimprefix or nfc followed by nfd, are possible.
+ - Instead of enforcing mutual exclusivity, transformations are
+ applied in sequence as specified by the user, allowing for
+ intentional use cases (e.g., trimming one prefix before adding
+ another).
+ - Users should be aware that certain combinations may lead to
+ unexpected results and should verify transformations using
+ --dry-run before execution.
+
+Race Conditions and Non-Deterministic Behavior
+
+Some transformations, such as replace=old:new, may introduce conflicts
+where multiple source files map to the same destination name. This can
+lead to race conditions when performing concurrent transfers. It is up
+to the user to anticipate these.
+
+- If two files from the source are transformed into the same name at
+  the destination, the final state may be non-deterministic.
+- Running rclone check after a sync using such transformations may
+  erroneously report missing or differing files due to overwritten
+  results.
+
+To minimize risks, users should:
+
+- Carefully review transformations that may introduce conflicts.
+- Use --dry-run to inspect changes before executing a sync (but keep
+  in mind that it won't show the effect of non-deterministic
+  transformations).
+- Avoid transformations that cause multiple distinct source files to
+  map to the same destination name.
+- Consider disabling concurrency with --transfers=1 if necessary.
+- Be aware that certain transformations (e.g. prefix) have a
+  multiplying effect every time they are used; avoid these when using
+  bisync.
+
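+For example, to preview a potentially conflicting transformation before
+running it for real (remote:path is a placeholder):
+
+ rclone convmv remote:path --name-transform "file,replace=old:new" --dry-run
+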
+ rclone convmv dest:path --name-transform XXX [flags]
+
+Options
+
+ --create-empty-src-dirs Create empty source dirs on destination after move
+ --delete-empty-src-dirs Delete empty source dirs after move
+ -h, --help help for convmv
+
+Options shared with other commands are described next. See the global
+flags page for global options not listed here.
+
+Copy Options
+
+Flags for anything which can copy a file
+
+ --check-first Do all the checks before starting transfers
+ -c, --checksum Check for changes with size & checksum (if available, or fallback to size only)
+ --compare-dest stringArray Include additional server-side paths during comparison
+ --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
+ --ignore-case-sync Ignore case when synchronizing
+ --ignore-checksum Skip post copy check of checksums
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use modtime or checksum
+ -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
+ --immutable Do not modify files, fail if existing files have been modified
+ --inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --max-backlog int Maximum number of objects in sync or check backlog (default 10000)
+ --max-duration Duration Maximum duration rclone will transfer data for (default 0s)
+ --max-transfer SizeSuffix Maximum size of data to transfer (default off)
+ -M, --metadata If set, preserve metadata when copying objects
+ --modify-window Duration Max time diff to be considered the same (default 1ns)
+ --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
+ --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
+ --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
+ --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
+ --no-check-dest Don't check the destination, copy regardless
+ --no-traverse Don't traverse destination file system on copy
+ --no-update-dir-modtime Don't update directory modification times
+ --no-update-modtime Don't update destination modtime if files identical
+ --order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
+ --refresh-times Refresh the modtime of remote files
+ --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
+ --size-only Skip based on size only, not modtime or checksum
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
+ -u, --update Skip files that are newer on the destination
+
+Important Options
+
+Important flags useful for most commands
+
+ -n, --dry-run Do a trial run with no permanent changes
+ -i, --interactive Enable interactive mode
+ -v, --verbose count Print lots more stuff (repeat for more)
+
+Filter Options
+
+Flags for filtering directory listings
+
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+
+Listing Options
+
+Flags for listing directories
+
+ --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+ --fast-list Use recursive list if available; uses more memory but fewer transactions
+
+See Also
+
+- rclone - Show help for rclone commands, flags and backends.
+
rclone copyto
Copy files from source to dest, skipping identical files.
@@ -3831,6 +4363,9 @@ This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. It doesn't delete files from
the destination.
+If you are looking to copy just a byte range of a file, please see
+'rclone cat --offset X --count Y'
+
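+For example (an illustrative command; offsets and counts are in bytes):
+
+ rclone cat remote:path/to/file --offset 1024 --count 4096
+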
Note: Use the -P/--progress flag to view real-time transfer statistics
rclone copyto source:path dest:path [flags]
@@ -3868,6 +4403,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -3900,6 +4436,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3939,8 +4476,8 @@ Setting --auto-filename will attempt to automatically determine the
filename from the URL (after any redirections) and use it in the
destination path.
-With --auto-filename-header in addition, if a specific filename is set
-in HTTP headers, it will be used instead of the name from the URL. With
+With --header-filename in addition, if a specific filename is set in
+HTTP headers, it will be used instead of the name from the URL. With
--print-filename in addition, the resulting file name will be printed.
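+
+For example (an illustrative command; the URL and destination are
+placeholders):
+
+ rclone copyurl https://example.com/file.zip remote:dir --auto-filename --header-filename --print-filename
+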
Setting --no-clobber will prevent overwriting file on the destination if
@@ -3949,7 +4486,7 @@ there is one with the same name.
Setting --stdout or making the output file name - will cause the output
to be written to standard output.
-Troublshooting
+Troubleshooting
If you can't get rclone copyurl to work then here are some things you
can try:
@@ -4080,6 +4617,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -4298,6 +4836,7 @@ Run without a hash to see the list of all supported hashes, e.g.
* whirlpool
* crc32
* sha256
+ * sha512
Then
@@ -4331,6 +4870,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -4604,6 +5144,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -4774,6 +5315,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -5368,11 +5910,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
-cache may exceed these quotas for two reasons. Firstly because it is
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
---vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -5698,6 +6240,43 @@ result is accurate. However, this is very inefficient and may cost lots
of API calls resulting in extra charges. Use it as a last resort and
only with caching.
+VFS Metadata
+
+If you use the --vfs-metadata-extension flag you can get the VFS to
+expose files which contain the metadata as a JSON blob. These files
+will not appear in the directory listing, but can be stat-ed and
+opened, and once they have been, they will appear in directory listings
+until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+--metadata flag.
+
+For example, using rclone mount with
+--metadata --vfs-metadata-extension .metadata we get
+
+ $ ls -l /mnt/
+ total 1048577
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+ $ cat /mnt/1G.metadata
+ {
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+ }
+
+ $ ls -l /mnt/
+ total 1048578
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+ -rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+
+If the file has no metadata it will be returned as {} and if there is an
+error reading the metadata the error will be returned as
+{"error":"error string"}.
+
rclone mount remote:path /path/to/mountpoint [flags]
Options
@@ -5744,6 +6323,7 @@ Options
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5771,6 +6351,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -5862,6 +6443,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -5894,6 +6476,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -6006,6 +6589,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -6600,11 +7184,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
-cache may exceed these quotas for two reasons. Firstly because it is
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
---vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -6930,6 +7514,43 @@ result is accurate. However, this is very inefficient and may cost lots
of API calls resulting in extra charges. Use it as a last resort and
only with caching.
+VFS Metadata
+
+If you use the --vfs-metadata-extension flag you can get the VFS to
+expose files which contain the metadata as a JSON blob. These files
+will not appear in the directory listing, but can be stat-ed and
+opened, and once they have been, they will appear in directory listings
+until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+--metadata flag.
+
+For example, using rclone mount with
+--metadata --vfs-metadata-extension .metadata we get
+
+ $ ls -l /mnt/
+ total 1048577
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+ $ cat /mnt/1G.metadata
+ {
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+ }
+
+ $ ls -l /mnt/
+ total 1048578
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+ -rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+
+If the file has no metadata it will be returned as {} and if there is an
+error reading the metadata the error will be returned as
+{"error":"error string"}.
+
rclone nfsmount remote:path /path/to/mountpoint [flags]
Options
@@ -6981,6 +7602,7 @@ Options
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -7008,6 +7630,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -7368,7 +7991,13 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set
a single username and password with the --rc-user and --rc-pass flags.
-If no static users are configured by either of the above methods, and
+Alternatively, you can have the reverse proxy manage authentication and
+use the username provided in the configured header with
+--rc-user-from-header (e.g., --rc-user-from-header=x-remote-user).
+Ensure the proxy is trusted and headers cannot be spoofed, as
+misconfiguration may lead to unauthorized access.
+
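+For example (a sketch; use whatever header name your proxy actually
+sets):
+
+ rclone rcd --rc-user-from-header=x-remote-user
+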
+If either of the above authentication methods is not configured and
client certificates are required by the --client-ca flag passed to the
server, the client certificate common name will be considered as the
username.
@@ -7426,6 +8055,7 @@ Flags to control the Remote Control API
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
+ --rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
@@ -7716,11 +8346,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
-cache may exceed these quotas for two reasons. Firstly because it is
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
---vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -8046,6 +8676,43 @@ result is accurate. However, this is very inefficient and may cost lots
of API calls resulting in extra charges. Use it as a last resort and
only with caching.
+VFS Metadata
+
+If you use the --vfs-metadata-extension flag you can get the VFS to
+expose files which contain the metadata as a JSON blob. These files
+will not appear in the directory listing, but can be stat-ed and
+opened, and once they have been, they will appear in directory listings
+until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+--metadata flag.
+
+For example, using rclone mount with
+--metadata --vfs-metadata-extension .metadata we get
+
+ $ ls -l /mnt/
+ total 1048577
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+ $ cat /mnt/1G.metadata
+ {
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+ }
+
+ $ ls -l /mnt/
+ total 1048578
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+ -rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+
+If the file has no metadata it will be returned as {} and if there is an
+error reading the metadata the error will be returned as
+{"error":"error string"}.
+
rclone serve dlna remote:path [flags]
Options
@@ -8078,6 +8745,7 @@ Options
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -8103,6 +8771,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -8261,11 +8930,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
-cache may exceed these quotas for two reasons. Firstly because it is
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
---vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -8591,6 +9260,43 @@ result is accurate. However, this is very inefficient and may cost lots
of API calls resulting in extra charges. Use it as a last resort and
only with caching.
+VFS Metadata
+
+If you use the --vfs-metadata-extension flag you can get the VFS to
+expose files which contain the metadata as a JSON blob. These files
+will not appear in the directory listing, but can be stat-ed and
+opened, and once they have been, they will appear in directory listings
+until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+--metadata flag.
+
+For example, using rclone mount with
+--metadata --vfs-metadata-extension .metadata we get
+
+ $ ls -l /mnt/
+ total 1048577
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+ $ cat /mnt/1G.metadata
+ {
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+ }
+
+ $ ls -l /mnt/
+ total 1048578
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+ -rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+
+If the file has no metadata it will be returned as {} and if there is an
+error reading the metadata the error will be returned as
+{"error":"error string"}.
+
rclone serve docker [flags]
Options
@@ -8642,6 +9348,7 @@ Options
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -8669,6 +9376,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -8810,11 +9518,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
-cache may exceed these quotas for two reasons. Firstly because it is
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
---vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -9140,6 +9848,43 @@ result is accurate. However, this is very inefficient and may cost lots
of API calls resulting in extra charges. Use it as a last resort and
only with caching.
+VFS Metadata
+
+If you use the --vfs-metadata-extension flag you can get the VFS to
+expose files which contain the metadata as a JSON blob. These files
+will not appear in the directory listing, but can be stat-ed and
+opened, and once they have been, they will appear in directory listings
+until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+--metadata flag.
+
+For example, using rclone mount with
+--metadata --vfs-metadata-extension .metadata we get
+
+ $ ls -l /mnt/
+ total 1048577
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+ $ cat /mnt/1G.metadata
+ {
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+ }
+
+ $ ls -l /mnt/
+ total 1048578
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+ -rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+
+If the file has no metadata it will be returned as {} and if there is an
+error reading the metadata the error will be returned as
+{"error":"error string"}.
+
Auth Proxy
If you supply the parameter --auth-proxy /path/to/program then rclone
@@ -9246,6 +9991,7 @@ Options
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -9271,6 +10017,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -9451,7 +10198,13 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set
a single username and password with the --user and --pass flags.
-If no static users are configured by either of the above methods, and
+Alternatively, you can have the reverse proxy manage authentication and
+use the username provided in the configured header with
+--user-from-header (e.g., --user-from-header=x-remote-user). Ensure
+the proxy is trusted and headers cannot be spoofed, as misconfiguration
+may lead to unauthorized access.
+
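+For example (a sketch; the header name depends on your proxy):
+
+ rclone serve http remote:path --user-from-header=x-remote-user
+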
+If either of the above authentication methods is not configured and
client certificates are required by the --client-ca flag passed to the
server, the client certificate common name will be considered as the
username.
@@ -9567,11 +10320,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
-cache may exceed these quotas for two reasons. Firstly because it is
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
---vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -9897,6 +10650,43 @@ result is accurate. However, this is very inefficient and may cost lots
of API calls resulting in extra charges. Use it as a last resort and
only with caching.
+VFS Metadata
+
+If you use the --vfs-metadata-extension flag you can get the VFS to
+expose files which contain the metadata as a JSON blob. These files
+will not appear in the directory listing, but can be stat-ed and
+opened, and once they have been, they will appear in directory listings
+until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+--metadata flag.
+
+For example, using rclone mount with
+--metadata --vfs-metadata-extension .metadata we get
+
+ $ ls -l /mnt/
+ total 1048577
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+ $ cat /mnt/1G.metadata
+ {
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+ }
+
+ $ ls -l /mnt/
+ total 1048578
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+ -rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+
+If the file has no metadata it will be returned as {} and if there is an
+error reading the metadata the error will be returned as
+{"error":"error string"}.
+
Auth Proxy
If you supply the parameter --auth-proxy /path/to/program then rclone
@@ -9972,19 +10762,19 @@ that rclone supports.
Options
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -10002,6 +10792,7 @@ Options
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -10012,6 +10803,7 @@ Options
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -10037,6 +10829,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -10102,7 +10895,7 @@ uses an on disk cache, but the cache entries are held as symlinks.
Rclone will use the handle of the underlying file as the NFS handle
which improves performance. This sort of cache can't be backed up and
restored as the underlying handles will change. This is Linux only. It
-requres running rclone as root or with CAP_DAC_READ_SEARCH. You can run
+requires running rclone as root or with CAP_DAC_READ_SEARCH. You can run
rclone with this extra permission by doing this to the rclone binary
sudo setcap cap_dac_read_search+ep /path/to/rclone.
@@ -10126,6 +10919,11 @@ Where $PORT is the same port number used in the serve nfs command and
$HOSTNAME is the network address of the machine that serve nfs was run
on.
+If --vfs-metadata-extension is in use then with --nfs-cache-type disk
+or --nfs-cache-type cache the metadata files will have the file handle
+of their parent file suffixed with 0x00, 0x00, 0x00, 0x01. This means
+they can be looked up directly from the parent file handle if desired.
+
This command is only available on Unix platforms.
VFS - Virtual File System
@@ -10223,11 +11021,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
-cache may exceed these quotas for two reasons. Firstly because it is
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
---vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -10553,6 +11351,43 @@ result is accurate. However, this is very inefficient and may cost lots
of API calls resulting in extra charges. Use it as a last resort and
only with caching.
+VFS Metadata
+
+If you use the --vfs-metadata-extension flag you can get the VFS to
+expose files which contain the metadata as a JSON blob. These files
+will not appear in the directory listing, but can be stat-ed and
+opened, and once they have been, they will appear in directory listings
+until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+--metadata flag.
+
+For example, using rclone mount with
+--metadata --vfs-metadata-extension .metadata we get
+
+ $ ls -l /mnt/
+ total 1048577
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+ $ cat /mnt/1G.metadata
+ {
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+ }
+
+ $ ls -l /mnt/
+ total 1048578
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+ -rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+
+If the file has no metadata it will be returned as {} and if there is an
+error reading the metadata the error will be returned as
+{"error":"error string"}.
+
rclone serve nfs remote:path [flags]
Options
@@ -10584,6 +11419,7 @@ Options
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -10609,6 +11445,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -10784,7 +11621,13 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set
a single username and password with the --user and --pass flags.
-If no static users are configured by either of the above methods, and
+Alternatively, you can have the reverse proxy manage authentication and
+use the username provided in the configured header with
+--user-from-header (e.g., --user-from-header=x-remote-user). Ensure
+the proxy is trusted and headers cannot be spoofed, as misconfiguration
+may lead to unauthorized access.
+
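+For example, if a trusted reverse proxy authenticates users and passes
+the result on in the X-Remote-User header, an illustrative invocation
+would be:
+
+    rclone serve restic remote:backup --addr 127.0.0.1:8080 --user-from-header x-remote-user
+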
+If none of the above authentication methods is configured and
client certificates are required by the --client-ca flag passed to the
server, the client certificate common name will be considered as the
username.
@@ -10809,16 +11652,16 @@ Use --salt to change the password hashing salt from the default.
Options
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
@@ -10829,6 +11672,7 @@ Options
--server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--stdio Run an HTTP2 server on stdin/stdout
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
See the global flags page for global options not listed here.
@@ -10903,8 +11747,8 @@ which is defined like this:
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
-Note that setting disable_multipart_uploads = true is to work around a
-bug which will be fixed in due course.
+Note that setting use_multipart_uploads = false is to work around a bug
+which will be fixed in due course.
Bugs
@@ -10971,7 +11815,13 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set
a single username and password with the --user and --pass flags.
-If no static users are configured by either of the above methods, and
+Alternatively, you can have the reverse proxy manage authentication and
+use the username provided in the configured header with
+--user-from-header (e.g., --user-from-header=x-remote-user). Ensure
+the proxy is trusted and headers cannot be spoofed, as misconfiguration
+may lead to unauthorized access.
+
+If none of the above authentication methods is configured and
client certificates are required by the --client-ca flag passed to the
server, the client certificate common name will be considered as the
username.
@@ -11151,11 +12001,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
-cache may exceed these quotas for two reasons. Firstly because it is
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
---vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -11481,26 +12331,63 @@ result is accurate. However, this is very inefficient and may cost lots
of API calls resulting in extra charges. Use it as a last resort and
only with caching.
+VFS Metadata
+
+If you use the --vfs-metadata-extension flag you can get the VFS to
+expose files which contain the metadata as a JSON blob. These files will
+not appear in the directory listing, but can be stat-ed and opened;
+once they have been, they will appear in directory listings until the
+directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+--metadata flag.
+
+For example, using rclone mount with
+--metadata --vfs-metadata-extension .metadata we get
+
+ $ ls -l /mnt/
+ total 1048577
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+ $ cat /mnt/1G.metadata
+ {
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+ }
+
+ $ ls -l /mnt/
+ total 1048578
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+ -rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+
+If the file has no metadata it will be returned as {} and if there is an
+error reading the metadata the error will be returned as
+{"error":"error string"}.
+
rclone serve s3 remote:path [flags]
Options
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
--file-perms FileMode File permissions (default 666)
- --force-path-style If true use path style access if false use virtual hosted style (default true) (default true)
+ --force-path-style If true use path style access if false use virtual hosted style (default true)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for s3
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -11518,6 +12405,7 @@ Options
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -11528,6 +12416,7 @@ Options
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -11553,6 +12442,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -11738,11 +12628,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
-cache may exceed these quotas for two reasons. Firstly because it is
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
---vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -12068,6 +12958,43 @@ result is accurate. However, this is very inefficient and may cost lots
of API calls resulting in extra charges. Use it as a last resort and
only with caching.
+VFS Metadata
+
+If you use the --vfs-metadata-extension flag you can get the VFS to
+expose files which contain the metadata as a JSON blob. These files will
+not appear in the directory listing, but can be stat-ed and opened;
+once they have been, they will appear in directory listings until the
+directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+--metadata flag.
+
+For example, using rclone mount with
+--metadata --vfs-metadata-extension .metadata we get
+
+ $ ls -l /mnt/
+ total 1048577
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+ $ cat /mnt/1G.metadata
+ {
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+ }
+
+ $ ls -l /mnt/
+ total 1048578
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+ -rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+
+If the file has no metadata it will be returned as {} and if there is an
+error reading the metadata the error will be returned as
+{"error":"error string"}.
+
Auth Proxy
If you supply the parameter --auth-proxy /path/to/program then rclone
@@ -12174,6 +13101,7 @@ Options
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -12199,6 +13127,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -12422,7 +13351,13 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set
a single username and password with the --user and --pass flags.
-If no static users are configured by either of the above methods, and
+Alternatively, you can have the reverse proxy manage authentication and
+use the username provided in the configured header with
+--user-from-header (e.g., --user-from-header=x-remote-user). Ensure
+the proxy is trusted and headers cannot be spoofed, as misconfiguration
+may lead to unauthorized access.
+
+If none of the above authentication methods is configured and
client certificates are required by the --client-ca flag passed to the
server, the client certificate common name will be considered as the
username.
@@ -12538,11 +13473,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
-cache may exceed these quotas for two reasons. Firstly because it is
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
---vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -12868,6 +13803,43 @@ result is accurate. However, this is very inefficient and may cost lots
of API calls resulting in extra charges. Use it as a last resort and
only with caching.
+VFS Metadata
+
+If you use the --vfs-metadata-extension flag you can get the VFS to
+expose files which contain the metadata as a JSON blob. These files will
+not appear in the directory listing, but can be stat-ed and opened;
+once they have been, they will appear in directory listings until the
+directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+--metadata flag.
+
+For example, using rclone mount with
+--metadata --vfs-metadata-extension .metadata we get
+
+ $ ls -l /mnt/
+ total 1048577
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+ $ cat /mnt/1G.metadata
+ {
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+ }
+
+ $ ls -l /mnt/
+ total 1048578
+ -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+ -rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+
+If the file has no metadata it will be returned as {} and if there is an
+error reading the metadata the error will be returned as
+{"error":"error string"}.
+
Auth Proxy
If you supply the parameter --auth-proxy /path/to/program then rclone
@@ -12943,12 +13915,12 @@ that rclone supports.
Options
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
@@ -12957,7 +13929,7 @@ Options
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -12975,6 +13947,7 @@ Options
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -12985,6 +13958,7 @@ Options
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -13010,6 +13984,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -13264,7 +14239,8 @@ unless --no-create or --recursive is provided.
If --recursive is used then recursively sets the modification time on
all existing files that is found under the path. Filters are supported,
-and you can test with the --dry-run or the --interactive/-i flag.
+and you can test with the --dry-run or the --interactive/-i flag. This
+will touch --transfers files concurrently.
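+
+For example, an illustrative run touching up to 8 files at once:
+
+    rclone touch --recursive --transfers 8 remote:path
+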
If --timestamp is used then sets the modification time to that time
instead of the current time. Times may be specified as one of:
@@ -13309,6 +14285,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -13406,6 +14383,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -13895,6 +14873,10 @@ also possible to specify --boolean=false or --boolean=true. Note that
--boolean false is not valid - this is parsed as --boolean and the false
is parsed as an extra command line argument for rclone.
+Options documented to take a stringArray parameter accept multiple
+values. To pass more than one value, repeat the option; for example:
+--include value1 --include value2.
+
Time or duration options
TIME or DURATION options can be specified as a duration string or a time
@@ -14244,7 +15226,8 @@ user on any OS, and the value is defined as following:
from shell command cd && pwd.
If you run rclone config file you will see where the default location is
-for you.
+for you. Running rclone config touch will ensure a configuration file
+exists, creating an empty one in the default location if there is none.
The fact that an existing file rclone.conf in the same directory as the
rclone executable is always preferred, means that it is easy to run in
@@ -14253,8 +15236,14 @@ and then create an empty file rclone.conf in the same directory.
If the location is set to empty string "" or path to a file with name
notfound, or the os null device represented by value NUL on Windows and
-/dev/null on Unix systems, then rclone will keep the config file in
-memory only.
+/dev/null on Unix systems, then rclone will keep the configuration file
+in memory only.
+
+You may see a log message "Config file not found - using defaults" if
+there is no configuration file. This can be suppressed, e.g. if you are
+using rclone entirely with on the fly remotes, by using a memory-only
+configuration file or by creating an empty configuration file, as
+described above.
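+
+For example, an illustrative command listing an on the fly remote with
+a memory-only configuration file:
+
+    rclone --config "" lsd :local:/tmp
+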
The file format is basic INI: Sections of text, led by a [section]
header and followed by key=value entries on separate lines. In rclone
@@ -14712,6 +15701,19 @@ The --links / -l flag enables this feature for all supported backends
and the VFS. There are individual flags for just enabling it for the VFS
--vfs-links and the local backend --local-links if required.
+--list-cutoff N
+
+When syncing rclone needs to sort directory entries before comparing
+them. Below this threshold (1,000,000 by default) rclone will store the
+directory entries in memory. 1,000,000 entries will take approx 1GB of
+RAM to store. Above this threshold rclone will store directory entries
+on disk and sort them without using a lot of memory.
+
+Doing this is slightly less efficient than sorting them in memory and
+will only work well for the bucket based backends (eg s3, b2, azureblob,
+swift) but these are the only backends likely to have millions of
+entries in a directory.
+
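+For example, an illustrative sync which sorts directory listings on
+disk once a directory holds more than 100,000 entries:
+
+    rclone sync --list-cutoff 100000 s3:src-bucket s3:dst-bucket
+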
--log-file=FILE
Log all of rclone's output to FILE. This is not active by default. This
@@ -14726,12 +15728,25 @@ a signal to rotate logs.
--log-format LIST
-Comma separated list of log format options. Accepted options are date,
-time, microseconds, pid, longfile, shortfile, UTC. Any other keywords
-will be silently ignored. pid will tag log messages with process
-identifier which useful with rclone mount --daemon. Other accepted
-options are explained in the go documentation. The default log format is
-"date,time".
+Comma separated list of log format options. The accepted options are:
+
+- date - Add a date in the format YYYY/MM/DD to the log.
+- time - Add a time to the log in format HH:MM:SS.
+- microseconds - Add microseconds to the time in format
+ HH:MM:SS.SSSSSS.
+- UTC - Make the logs in UTC not localtime.
+- longfile - Adds the full source file path and line number of the log
+  statement.
+- shortfile - Adds the source file name and line number of the log
+  statement.
+- pid - Add the process ID to the log - useful with
+ rclone mount --daemon.
+- nolevel - Don't add the level to the log.
+- json - Equivalent to adding --use-json-log.
+
+They are added to the log line in the order above.
+
+The default log format is "date,time".
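+
+For example, an illustrative invocation adding microseconds and the
+process ID to the default format:
+
+    rclone copy --log-format date,time,microseconds,pid source:path dest:path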
--log-level LEVEL
@@ -14749,10 +15764,84 @@ warnings and significant events.
ERROR is equivalent to -q. It only outputs error messages.
+--windows-event-log LEVEL
+
+If this is configured (the default is OFF) then logs of this level and
+above will be logged to the Windows event log in addition to the normal
+logs. These will be logged in JSON format as described below regardless
+of what format the main logs are configured for.
+
+The Windows event log only has 3 levels of severity: Info, Warning and
+Error. If enabled we map rclone levels like this:
+
+- Error ← ERROR (and above)
+- Warning ← WARNING (note that this level is defined but not currently
+ used).
+- Info ← NOTICE, INFO and DEBUG.
+
+Rclone will declare its log source as "rclone" if it has enough
+permissions to create the registry key needed. If not then logs will
+appear as "Application". You can run
+rclone version --windows-event-log DEBUG once as administrator to create
+the registry key in advance.
+
+Note that the --windows-event-log level must be greater (more severe)
+than or equal to the --log-level. For example to log DEBUG to a log file
+but ERRORs to the event log you would use
+
+ --log-file rclone.log --log-level DEBUG --windows-event-log ERROR
+
+This option is only supported on Windows platforms.
+
--use-json-log
-This switches the log format to JSON for rclone. The fields of json log
-are level, msg, source, time.
+This switches the log format to JSON for rclone. The fields of the JSON
+log are level, msg, source and time. The JSON logs will be printed on a
+single line, but are shown expanded here for clarity.
+
+ {
+ "time": "2025-05-13T17:30:51.036237518+01:00",
+ "level": "debug",
+ "msg": "4 go routines active\n",
+ "source": "cmd/cmd.go:298"
+ }
+
+Completed data transfer logs will have extra size information. Logs
+which are about a particular object will have object and objectType
+fields also.
+
+ {
+ "time": "2025-05-13T17:38:05.540846352+01:00",
+ "level": "info",
+ "msg": "Copied (new) to: file2.txt",
+ "size": 6,
+ "object": "file.txt",
+ "objectType": "*local.Object",
+ "source": "operations/copy.go:368"
+ }
+
+Stats logs will contain a stats field which is the same as returned from
+the rc call core/stats.
+
+ {
+ "time": "2025-05-13T17:38:05.540912847+01:00",
+ "level": "info",
+ "msg": "...text version of the stats...",
+ "stats": {
+ "bytes": 6,
+ "checks": 0,
+ "deletedDirs": 0,
+ "deletes": 0,
+ "elapsedTime": 0.000904825,
+ ...truncated for clarity...
+ "totalBytes": 6,
+ "totalChecks": 0,
+ "totalTransfers": 1,
+ "transferTime": 0.000882794,
+ "transfers": 1
+ },
+ "source": "accounting/stats.go:569"
+ }
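+
+As each log event is a single line of JSON, the output is easy to
+process with standard tools. For example, to print only the error
+messages (an illustrative pipeline using jq; rclone logs to stderr):
+
+    rclone copy source:path dest:path --use-json-log 2>&1 | jq -r 'select(.level=="error") | .msg'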
--low-level-retries NUMBER
@@ -14788,6 +15877,50 @@ the remote which may be desirable.
Setting this to a negative number will make the backlog as large as
possible.
+--max-buffer-memory=SIZE
+
+If set, don't allocate more than SIZE amount of memory as buffers. If
+not set or set to 0 or off this will not limit the amount of memory in
+use.
+
+This includes memory used by buffers created by the --buffer-size flag
+and buffers used by multi-thread transfers.
+
+Most multi-thread transfers do not take additional memory, but some do
+depending on the backend (eg the s3 backend for uploads). This means
+there is a tension between setting --transfers as high as possible and
+total memory use.
+
+Setting --max-buffer-memory allows the buffer memory to be controlled so
+that it doesn't overwhelm the machine and allows --transfers to be set
+large.
+
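+For example, an illustrative copy running many transfers while capping
+buffer memory at 1 GiB:
+
+    rclone copy --transfers 32 --max-buffer-memory 1G source:path s3:dest-bucket
+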
+--max-connections=N
+
+This sets the maximum number of concurrent calls to the backend API. It
+may not map 1:1 to TCP or HTTP connections depending on the backend in
+use and the use of HTTP1 vs HTTP2.
+
+When downloading files, backends only limit the initial opening of the
+stream. The bulk data download is not counted as a connection. This
+means that the --max-connections flag won't limit the total number of
+downloads.
+
+Note that it is possible to cause deadlocks with this setting so it
+should be used with care.
+
+If you are doing a sync or copy then make sure --max-connections is one
+more than the sum of --transfers and --checkers.
+
+If you use --check-first then --max-connections just needs to be one
+more than the maximum of --checkers and --transfers.
+
+So for --max-connections 3 you'd use
+--checkers 2 --transfers 2 --check-first or --checkers 1 --transfers 1.
+
+Setting this flag can be useful for backends which do multipart uploads
+to limit the number of simultaneous parts being transferred.
+
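+For example, an illustrative sync following the guidance above:
+
+    rclone sync --max-connections 3 --check-first --checkers 2 --transfers 2 source:path dest:path
+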
--max-delete=N
This tells rclone not to delete more than N files. If that limit is
@@ -15036,6 +16169,13 @@ This will work with the sync/copy/move commands and friends
copyto/moveto. Multi thread transfers will be used with rclone mount and
rclone serve if --vfs-cache-mode is set to writes or above.
+Most multi-thread transfers do not take additional memory, but some do
+(for example uploading to s3). In the worst case memory usage can be at
+maximum --transfers * --multi-thread-chunk-size * --multi-thread-streams
+or specifically for the s3 backend --transfers * --s3-chunk-size *
+--s3-concurrency. However you can use the --max-buffer-memory flag
+to control the maximum memory used here.
+
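+For example, with --transfers 4, --multi-thread-streams 4 and a 64Mi
+chunk size, worst case buffer usage could reach 4 * 4 * 64Mi = 1Gi, so
+an illustrative cap would be --max-buffer-memory 512Mi.
+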
NB that this only works with supported backends as the destination but
will work with any backend as the source.
@@ -15058,6 +16198,14 @@ If the backend has a --backend-upload-concurrency setting (eg
transfers instead if it is larger than the value of
--multi-thread-streams or --multi-thread-streams isn't set.
+--name-transform COMMAND[=XXXX]
+
+--name-transform introduces path name transformations for rclone copy,
+rclone sync, and rclone move. These transformations enable modifications
+to source and destination file names by applying prefixes, suffixes, and
+other alterations during transfer operations. For detailed docs and
+examples, see convmv.
+
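+For example, an illustrative copy which upper cases all file names in
+transit (see convmv for the authoritative syntax and the full list of
+transformations):
+
+    rclone copy source:path dest:path --name-transform "all,uppercase"
+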
--no-check-dest
The --no-check-dest can be used with move or copy and it causes rclone
@@ -16043,6 +17191,7 @@ For the filtering options
- --max-size
- --min-age
- --max-age
+- --hash-filter
- --dump filters
- --metadata-include
- --metadata-include-from
@@ -16177,7 +17326,7 @@ The options set by environment variables can be seen with the -vv flag,
e.g. rclone version -vv.
Options that can appear multiple times (type stringArray) are treated
-slighly differently as environment variables can only be defined once.
+slightly differently as environment variables can only be defined once.
In order to allow a simple mechanism for adding one or many items, the
input is treated as a CSV encoded string. For example
@@ -17106,6 +18255,95 @@ or more.
See the time option docs for valid formats.
+--hash-filter - Deterministically select a subset of files
+
+The --hash-filter flag enables selecting a deterministic subset of
+files, useful for:
+
+1. Running large sync operations across multiple machines.
+2. Checking a subset of files for bitrot.
+3. Any other operations where a sample of files is required.
+
+Syntax
+
+The flag takes two parameters expressed as a fraction:
+
+ --hash-filter K/N
+
+- N: The total number of partitions (must be a positive integer).
+- K: The specific partition to select (an integer from 0 to N).
+
+For example:
+
+- --hash-filter 1/3: Selects the first third of the files.
+- --hash-filter 2/3 and --hash-filter 3/3: Select the second and third
+  partitions, respectively.
+
+Each partition is non-overlapping, ensuring all files are covered
+without duplication.
+
+Random Partition Selection
+
+Use @ as K to randomly select a partition:
+
+    --hash-filter @/N
+
+For example, --hash-filter @/3 will randomly select a number between 0
+and 2. This will stay constant across retries.
+
+How It Works
+
+- Rclone takes each file's full path, normalizes it to lowercase, and
+ applies Unicode normalization.
+- It then hashes the normalized path into a 64 bit number.
+- The hash result is reduced modulo N to assign the file to a
+ partition.
+- If the calculated partition does not match K the file is excluded.
+- Other filters may apply if the file is not excluded.
+
+Important: Rclone will traverse all directories to apply the filter.
+
+Usage Notes
+
+- Safe to use with rclone sync; source and destination selections will
+ match.
+- Do not use with --delete-excluded, as this could delete unselected
+ files.
+- Ignored if --files-from is used.
+
+Examples
+
+Dividing files into 4 partitions
+
+Assuming the current directory contains file1.jpg through file9.jpg:
+
+ $ rclone lsf --hash-filter 0/4 .
+ file1.jpg
+ file5.jpg
+
+ $ rclone lsf --hash-filter 1/4 .
+ file3.jpg
+ file6.jpg
+ file9.jpg
+
+ $ rclone lsf --hash-filter 2/4 .
+ file2.jpg
+ file4.jpg
+
+ $ rclone lsf --hash-filter 3/4 .
+ file7.jpg
+ file8.jpg
+
+ $ rclone lsf --hash-filter 4/4 . # the same as --hash-filter 0/4
+ file1.jpg
+ file5.jpg
+
+Syncing the first quarter of files
+
+ rclone sync --hash-filter 1/4 source:path destination:path
+
+Checking a random 1% of files for integrity
+
+ rclone check --download --hash-filter @/100 source:path destination:path
+
Other flags
--delete-excluded - Delete files on dest excluded from sync
@@ -17608,9 +18846,16 @@ Setting config flags with _config
If you wish to set config (the equivalent of the global flags) for the
duration of an rc call only then pass in the _config parameter.
-This should be in the same format as the config key returned by
+This should be in the same format as the main key returned by
options/get.
+ rclone rc --loopback options/get blocks=main
+
+You can see more help on these options with this command (see the
+options blocks section for more info).
+
+ rclone rc --loopback options/info blocks=main
+
For example, if you wished to run a sync with the --checksum parameter,
you would pass this parameter in your JSON blob.
@@ -17641,6 +18886,13 @@ in the _filter parameter.
This should be in the same format as the filter key returned by
options/get.
+ rclone rc --loopback options/get blocks=filter
+
+You can see more help on these options with this command (see the
+options blocks section for more info).
+
+ rclone rc --loopback options/info blocks=filter
+
For example, if you wished to run a sync with these flags
--max-size 1M --max-age 42s --include "a" --include "b"
@@ -17722,7 +18974,8 @@ format. Each block describes a single option.
FieldName string N name of the field used in
the rc - if blank use
- Name
+ Name. May contain "." for
+ nested fields.
Help string N help, started with a
single sentence on a
@@ -17957,6 +19210,7 @@ This takes the following parameters:
- obscure - declare passwords are plain and need obscuring
- noObscure - declare passwords are already obscured and don't
need obscuring
+ - noOutput - don't print anything to stdout
- nonInteractive - don't interact with a user, return questions
- continue - continue the config process with an answer
- all - ask all the config questions not just the post config ones
@@ -18065,6 +19319,7 @@ This takes the following parameters:
- obscure - declare passwords are plain and need obscuring
- noObscure - declare passwords are already obscured and don't
need obscuring
+ - noOutput - don't print anything to stdout
- nonInteractive - don't interact with a user, return questions
- continue - continue the config process with an answer
- all - ask all the config questions not just the post config ones
@@ -18245,7 +19500,9 @@ returned.
Parameters
-- group - name of the stats group (string)
+- group - name of the stats group (string, optional)
+- short - if true will not return the transferring and checking arrays
+ (boolean, optional)
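+
+For example, an illustrative call:
+
+    rclone rc core/stats short=true
+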
Returns the following values:
@@ -18259,6 +19516,7 @@ Returns the following values:
"fatalError": boolean whether there has been at least one fatal error,
"lastError": last error string,
"renames" : number of files renamed,
+ "listed" : number of directory entries listed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
"serverSideCopies": number of server side copies done,
"serverSideCopyBytes": number bytes server side copied,
@@ -19268,6 +20526,136 @@ check that parameter passing is working properly.
Authentication is required for this call.
+serve/list: Show running servers
+
+Show running servers with IDs.
+
+This takes no parameters and returns
+
+- list: list of running serve commands
+
+Each list element will have
+
+- id: ID of the server
+- addr: address the server is running on
+- params: parameters used to start the server
+
+Eg
+
+ rclone rc serve/list
+
+Returns
+
+ {
+ "list": [
+ {
+ "addr": "[::]:4321",
+ "id": "nfs-ffc2a4e5",
+ "params": {
+ "fs": "remote:",
+ "opt": {
+ "ListenAddr": ":4321"
+ },
+ "type": "nfs",
+ "vfsOpt": {
+ "CacheMode": "full"
+ }
+ }
+ }
+ ]
+ }
+
+Authentication is required for this call.
+
+serve/start: Create a new server
+
+Create a new server with the specified parameters.
+
+This takes the following parameters:
+
+- type - type of server: http, webdav, ftp, sftp, nfs, etc.
+- fs - remote storage path to serve
+- addr - the ip:port to run the server on, eg ":1234" or
+ "localhost:1234"
+
+Other parameters are as described in the documentation for the relevant
+rclone serve command line options. To translate a command line option to
+an rc parameter, remove the leading -- and replace - with _, so
+--vfs-cache-mode becomes vfs_cache_mode. Note that global parameters
+must be set with _config and _filter as described above.
+
+Examples:
+
+ rclone rc serve/start type=nfs fs=remote: addr=:4321 vfs_cache_mode=full
+ rclone rc serve/start --json '{"type":"nfs","fs":"remote:","addr":":1234","vfs_cache_mode":"full"}'
+
+This will give the reply
+
+ {
+ "addr": "[::]:4321", // Address the server was started on
+ "id": "nfs-ecfc6852" // Unique identifier for the server instance
+ }
+
+Or an error if it failed to start.
+
+Stop the server with serve/stop and list the running servers with
+serve/list.
+
+Authentication is required for this call.
+
+serve/stop: Unserve selected active serve
+
+Stops a running serve instance by ID.
+
+This takes the following parameters:
+
+- id: as returned by serve/start
+
+This will give an empty response if successful or an error if not.
+
+Example:
+
+ rclone rc serve/stop id=12345
+
+Authentication is required for this call.
+
+serve/stopall: Stop all active servers
+
+Stop all active servers.
+
+This will stop all active servers.
+
+ rclone rc serve/stopall
+
+Authentication is required for this call.
+
+serve/types: Show all possible serve types
+
+This shows all possible serve types and returns them as a list.
+
+This takes no parameters and returns
+
+- types: list of serve types, eg "nfs", "sftp", etc
+
+The serve types are strings like "http", "sftp", "nfs" and can be
+passed to serve/start as the type parameter.
+
+Eg
+
+ rclone rc serve/types
+
+Returns
+
+ {
+ "types": [
+ "http",
+ "sftp",
+ "nfs"
+ ]
+ }
+
+Authentication is required for this call.
+
sync/bisync: Perform bidirectional synchronization between two paths.
This takes the following parameters
@@ -19420,7 +20808,7 @@ This is only useful if --vfs-cache-mode > off. If you call it when the
],
}
-The expiry time is the time until the file is elegible for being
+The expiry time is the time until the file is eligible for being
uploaded in floating point seconds. This may go negative. As rclone only
transfers --transfers files at once, only the lowest --transfers expiry
times will have uploading as true. So there may be files with negative
@@ -19734,6 +21122,7 @@ Here is an overview of the major features of each cloud storage system.
Cloudinary MD5 R No Yes - -
Dropbox DBHASH ¹ R Yes No - -
Enterprise File Fabric - R/W Yes No R/W -
+ FileLu Cloud Storage MD5 R/W No Yes R -
Files.com MD5, CRC32 DR/W Yes No R -
FTP - R/W ¹⁰ No No - -
Gofile MD5 DR/W No Yes R -
@@ -20502,6 +21891,7 @@ Flags for anything which can copy a file.
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -20524,6 +21914,7 @@ Flags used for sync commands.
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
+ --list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
@@ -20563,13 +21954,14 @@ Flags for general networking and HTTP stuff.
--header stringArray Set HTTP header for all transactions
--header-download stringArray Set HTTP header for download transactions
--header-upload stringArray Set HTTP header for upload transactions
+ --max-connections int Maximum number of simultaneous backend API connections, 0 for unlimited
--no-check-certificate Do not verify the server SSL certificate (insecure)
--no-gzip-encoding Don't set Accept-Encoding: gzip
--timeout Duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.70.0")
Performance
@@ -20598,6 +21990,7 @@ Flags for general configuration of rclone.
-i, --interactive Enable interactive mode
--kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s)
--low-level-retries int Number of low level retries to do (default 10)
+ --max-buffer-memory SizeSuffix If set, don't allocate more than this amount of memory as buffers (default off)
--no-console Hide console window (supported on Windows only)
--no-unicode-normalization Don't normalize unicode characters in filenames
--password-command SpaceSepList Command for supplying password for encrypted configuration
@@ -20629,6 +22022,7 @@ Flags for filtering directory listings.
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -20656,7 +22050,7 @@ Logging
Flags for logging and statistics.
--log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
+ --log-format Bits Comma separated list of log format options (default date,time)
--log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
@@ -20717,6 +22111,7 @@ Flags to control the Remote Control API.
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
+ --rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
@@ -20743,6 +22138,7 @@ Flags to control the Metrics HTTP endpoint..
--metrics-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--metrics-template string User-specified template
--metrics-user string User name for authentication
+ --metrics-user-from-header string User name from a defined HTTP header
--rc-enable-metrics Enable the Prometheus metrics path at the remote control server
Backend
@@ -20760,6 +22156,8 @@ Backend-only flags (these can be set in the config file also).
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
+ --azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
@@ -20783,6 +22181,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-az Use Azure CLI tool az for authentication
+ --azureblob-use-copy-blob Whether to use the Copy Blob API when copying to the same storage account (default true)
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -20795,6 +22194,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
--azurefiles-connection-string string Azure Files Connection String
--azurefiles-description string Description of the remote
+ --azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
--azurefiles-endpoint string Endpoint for the service
--azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -20809,6 +22209,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --azurefiles-use-az Use Azure CLI tool az for authentication
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -20871,12 +22272,14 @@ Backend-only flags (these can be set in the config file also).
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
+ --cloudinary-adjust-media-files-extensions Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems (default true)
--cloudinary-api-key string Cloudinary API Key
--cloudinary-api-secret string Cloudinary API Secret
--cloudinary-cloud-name string Cloudinary Environment Name
--cloudinary-description string Description of the remote
--cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-media-extensions stringArray Cloudinary supported media extensions (default 3ds,3g2,3gp,ai,arw,avi,avif,bmp,bw,cr2,cr3,djvu,dng,eps3,fbx,flif,flv,gif,glb,gltf,hdp,heic,heif,ico,indd,jp2,jpe,jpeg,jpg,jxl,jxr,m2ts,mov,mp4,mpeg,mts,mxf,obj,ogv,pdf,ply,png,psd,svg,tga,tif,tiff,ts,u3ma,usdz,wdp,webm,webp,wmv)
--cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
--cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--combine-description string Description of the remote
@@ -20900,6 +22303,10 @@ Backend-only flags (these can be set in the config file also).
--crypt-show-mapping For all files listed show how the names encrypt
--crypt-strict-names If set, this will raise an error when crypt comes across a filename that can't be decrypted
--crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin")
+ --doi-description string Description of the remote
+ --doi-doi string The DOI or the doi.org URL
+ --doi-doi-resolver-api-url string The URL of the DOI resolver API to use
+ --doi-provider string DOI provider
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -20951,7 +22358,6 @@ Backend-only flags (these can be set in the config file also).
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
- --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -20961,11 +22367,14 @@ Backend-only flags (these can be set in the config file also).
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-export-formats CommaSepList Comma separated list of preferred formats for exporting files (default html,md)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
+ --dropbox-show-all-exports Show all exportable files in listings
+ --dropbox-skip-exports Skip exportable files in all listings
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
@@ -20983,6 +22392,9 @@ Backend-only flags (these can be set in the config file also).
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
+ --filelu-description string Description of the remote
+ --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
+ --filelu-key string Your FileLu Rclone key from My Account
--filescom-api-key string The API key used to authenticate with Files.com
--filescom-description string Description of the remote
--filescom-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -21042,7 +22454,6 @@ Backend-only flags (these can be set in the config file also).
--gofile-list-chunk int Number of items to list in each call (default 1000)
--gofile-root-folder-id string ID of the root folder
--gphotos-auth-url string Auth server URL
- --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -21110,6 +22521,8 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
+ --internetarchive-item-derive Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload (default true)
+ --internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
@@ -21205,6 +22618,7 @@ Backend-only flags (these can be set in the config file also).
--onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
+ --onedrive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default off)
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--oos-compartment string Specify compartment OCID, if you need to list buckets
@@ -21230,6 +22644,7 @@ Backend-only flags (these can be set in the config file also).
--oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard")
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --opendrive-access string Files and folders will be uploaded with this access permission (default private) (default "private")
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-description string Description of the remote
--opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -21326,6 +22741,8 @@ Backend-only flags (these can be set in the config file also).
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
+ --s3-ibm-api-key string IBM API Key to be used to obtain IAM token
+ --s3-ibm-resource-instance-id string IBM service instance id
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
@@ -21346,6 +22763,7 @@ Backend-only flags (these can be set in the config file also).
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
+ --s3-sign-accept-encoding Tristate Set if rclone should include Accept-Encoding as part of the signature (default unset)
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
@@ -21362,6 +22780,7 @@ Backend-only flags (these can be set in the config file also).
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-use-unsigned-payload Tristate Whether to use an unsigned payload in PutObject (default unset)
+ --s3-use-x-id Tristate Set if rclone should add x-id URL parameters (default unset)
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-version-deleted Show deleted file markers when using versions
@@ -21387,6 +22806,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-host-key-algorithms SpaceSepList Space separated list of host key algorithms, ordered by preference
+ --sftp-http-proxy string URL for HTTP CONNECT proxy
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-exchange SpaceSepList Space separated list of key exchange algorithms, ordered by preference
--sftp-key-file string Path to PEM-encoded private key file
@@ -21441,6 +22861,7 @@ Backend-only flags (these can be set in the config file also).
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
--smb-spn string Service principal name
+ --smb-use-kerberos Use Kerberos authentication
--smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
@@ -21574,7 +22995,7 @@ Start from installing Docker on the host.
The FUSE driver is a prerequisite for rclone mounting and should be
installed on host:
- sudo apt-get -y install fuse
+ sudo apt-get -y install fuse3
Create two directories required by rclone docker plugin:
@@ -22531,7 +23952,7 @@ Also see the all files changed check.
By using rclone filter features you can exclude file types or directory
sub-trees from the sync. See the bisync filters section and generic
--filter-from documentation. An example filters file contains filters
-for non-allowed files for synching with Dropbox.
+for non-allowed files for syncing with Dropbox.
If you make changes to your filters file then bisync requires a run with
--resync. This is a safety feature, which prevents existing files on the
@@ -22704,7 +24125,7 @@ of a sync. Using --check-sync=false will disable it and may
significantly reduce the sync run times for very large numbers of files.
The check may be run manually with --check-sync=only. It runs only the
-integrity check and terminates without actually synching.
+integrity check and terminates without actually syncing.
Note that currently, --check-sync only checks listing snapshots and NOT
the actual files on the remotes. Note also that the listing snapshots
@@ -23237,7 +24658,7 @@ supported.
How to filter directories
Filtering portions of the directory tree is a critical feature for
-synching.
+syncing.
Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync: - Directory trees containing
@@ -23348,7 +24769,7 @@ This noise can be quashed by adding --quiet to the bisync command line.
Example exclude-style filters files for use with Dropbox
-- Dropbox disallows synching the listed temporary and
+- Dropbox disallows syncing the listed temporary and
configuration/data files. The `- ` filters exclude these files where
ever they may occur in the sync tree. Consider adding similar
exclusions for file types you don't need to sync, such as core dump
@@ -23668,7 +25089,7 @@ dash.
Running tests
- go test . -case basic -remote local -remote2 local runs the
- test_basic test case using only the local filesystem, synching one
+ test_basic test case using only the local filesystem, syncing one
local directory with another local directory. Test script output is
to the console, while commands within scenario.txt have their output
sent to the .../workdir/test.log file, which is finally compared to
@@ -23901,6 +25322,11 @@ Unison and synchronization in general.
Changelog
+v1.69.1
+
+- Fixed an issue causing listings to not capture concurrent
+ modifications under certain conditions
+
v1.68
- Fixed an issue affecting backends that round modtimes to a lower
@@ -24473,9 +25899,11 @@ The S3 backend can be used with a number of different providers:
- Liara Object Storage
- Linode Object Storage
- Magalu Object Storage
+- MEGA S4 Object Storage
- Minio
- Outscale
- Petabox
+- Pure Storage FlashBlade
- Qiniu Cloud Object Storage (Kodo)
- RackCorp Object Storage
- Rclone Serve S3
@@ -25192,7 +26620,7 @@ Notes on above:
that USER_NAME has been created.
2. The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket's objects.
-3. When using s3-no-check-bucket and the bucket already exsits, the
+3. When using s3-no-check-bucket and the bucket already exists, the
"arn:aws:s3:::BUCKET_NAME" doesn't have to be included.
For reference, here's an Ansible script that will generate one or more
@@ -25214,8 +26642,9 @@ glacier storage class you will see an error like below.
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
-In this case you need to restore the object(s) in question before using
-rclone.
+In this case you need to restore the object(s) in question before
+accessing object contents. The restore section below shows how to do
+this with rclone.
Note that rclone only speaks the S3 API it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.
@@ -25236,10 +26665,11 @@ Standard options
Here are the Standard options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
-Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
-IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease,
-Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel,
-StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS,
+IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega,
+Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway,
+SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi,
+Qiniu and others).
--s3-provider
@@ -25268,6 +26698,10 @@ Properties:
- DigitalOcean Spaces
- "Dreamhost"
- Dreamhost DreamObjects
+ - "Exaba"
+ - Exaba Object Storage
+ - "FlashBlade"
+ - Pure Storage FlashBlade Object Storage
- "GCS"
- Google Cloud Storage
- "HuaweiOBS"
@@ -25288,6 +26722,8 @@ Properties:
- Linode Object Storage
- "Magalu"
- Magalu Object Storage
+ - "Mega"
+ - MEGA S4 Object Storage
- "Minio"
- Minio Object Storage
- "Netease"
@@ -25559,7 +26995,7 @@ Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
-- Provider: !Storj,Selectel,Synology,Cloudflare
+- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade,Mega
- Type: string
- Required: false
- Examples:
@@ -25679,14 +27115,39 @@ Properties:
- "GLACIER_IR"
- Glacier Instant Retrieval storage class
+--s3-ibm-api-key
+
+IBM API Key to be used to obtain IAM token
+
+Properties:
+
+- Config: ibm_api_key
+- Env Var: RCLONE_S3_IBM_API_KEY
+- Provider: IBMCOS
+- Type: string
+- Required: false
+
+--s3-ibm-resource-instance-id
+
+IBM service instance id
+
+Properties:
+
+- Config: ibm_resource_instance_id
+- Env Var: RCLONE_S3_IBM_RESOURCE_INSTANCE_ID
+- Provider: IBMCOS
+- Type: string
+- Required: false
+
Advanced options
Here are the Advanced options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
-Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
-IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease,
-Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel,
-StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS,
+IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega,
+Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway,
+SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi,
+Qiniu and others).
--s3-bucket-acl
@@ -25705,6 +27166,7 @@ Properties:
- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
+- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade
- Type: string
- Required: false
- Examples:
@@ -26510,6 +27972,44 @@ Properties:
- Type: Tristate
- Default: unset
+--s3-use-x-id
+
+Set if rclone should add x-id URL parameters.
+
+You can change this if you want to disable the AWS SDK from adding x-id
+URL parameters.
+
+This shouldn't be necessary in normal operation.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+Properties:
+
+- Config: use_x_id
+- Env Var: RCLONE_S3_USE_X_ID
+- Type: Tristate
+- Default: unset
+
+--s3-sign-accept-encoding
+
+Set if rclone should include Accept-Encoding as part of the signature.
+
+You can change this if you want to stop rclone including Accept-Encoding
+as part of the signature.
+
+This shouldn't be necessary in normal operation.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+Properties:
+
+- Config: sign_accept_encoding
+- Env Var: RCLONE_S3_SIGN_ACCEPT_ENCODING
+- Type: Tristate
+- Default: unset
+
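+For example, if you needed to override both of these quirks when
+talking to a particular S3 clone, something like this should work (the
+remote name is a placeholder):
+
+ rclone lsd clone-s3: --s3-use-x-id=false --s3-sign-accept-encoding=false
+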
--s3-directory-bucket
Set to use AWS Directory Buckets
@@ -26646,7 +28146,7 @@ Access tier to the Frequent Access tier.
Usage Examples:
- rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
+ rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
@@ -27410,7 +28910,7 @@ To configure access to IBM COS S3, follow the steps below:
\ "tor01-flex"
location_constraint>1
-9. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and
+8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and
"private". IBM Cloud(Infra) supports all the canned ACLs. On-Premise
COS supports all the canned ACLs.
@@ -27427,7 +28927,7 @@ To configure access to IBM COS S3, follow the steps below:
\ "authenticated-read"
acl> 1
-12. Review the displayed configuration and accept to save the "remote"
+9. Review the displayed configuration and accept to save the "remote"
then quit. The config file should look like this
[xxx]
@@ -27439,7 +28939,7 @@ To configure access to IBM COS S3, follow the steps below:
location_constraint = us-standard
acl = private
-13. Execute rclone commands
+10. Execute rclone commands
1) Create a bucket.
rclone mkdir IBM-COS-XREGION:newbucket
@@ -27457,6 +28957,34 @@ To configure access to IBM COS S3, follow the steps below:
6) Delete a file on remote.
rclone delete IBM-COS-XREGION:newbucket/file.txt
+IBM IAM authentication
+
+If using IBM IAM authentication with an IBM API key you need to fill
+in these additional parameters:
+
+1. Select false for env_auth
+2. Leave access_key_id and secret_access_key blank
+3. Paste your ibm_api_key
+
+ Option ibm_api_key.
+ IBM API Key to be used to obtain IAM token
+ Enter a value of type string. Press Enter for the default (1).
+ ibm_api_key>
+
+4. Paste your ibm_resource_instance_id
+
+ Option ibm_resource_instance_id.
+ IBM service instance id
+ Enter a value of type string. Press Enter for the default (2).
+ ibm_resource_instance_id>
+
+5. In advanced settings type true for v2_auth
+
+ Option v2_auth.
+ If true use v2 authentication.
+ If this is false (the default) then rclone will use v4 authentication.
+ If it is set then rclone will use v2 authentication.
+ Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
+ Enter a boolean value (true or false). Press Enter for the default (true).
+ v2_auth>
+
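+As a rough sketch, a config built with the steps above might end up
+looking like this (the endpoint, key and instance id are
+placeholders):
+
+ [ibmcos-iam]
+ type = s3
+ provider = IBMCOS
+ env_auth = false
+ endpoint = s3.us-south.cloud-object-storage.appdomain.cloud
+ ibm_api_key = APIKEY
+ ibm_resource_instance_id = RESOURCE_INSTANCE_ID
+ v2_auth = true
+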
IDrive e2
Here is an example of making an IDrive e2 configuration. First run:
@@ -28155,8 +29683,8 @@ this:
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
-Note that setting disable_multipart_uploads = true is to work around a
-bug which will be fixed in due course.
+Note that setting use_multipart_uploads = false is to work around a bug
+which will be fixed in due course.
Scaleway
@@ -28265,19 +29793,16 @@ Leave region blank
\ (other-v2-signature)
region>
-Choose an endpoint from the list
+Enter your Lyve Cloud endpoint. This field cannot be kept empty.
- Endpoint for S3 API.
+ Endpoint for Lyve Cloud S3 API.
Required when using an S3 clone.
- Choose a number from below, or type in your own value.
- Press Enter to leave empty.
- 1 / Seagate Lyve Cloud US East 1 (Virginia)
- \ (s3.us-east-1.lyvecloud.seagate.com)
- 2 / Seagate Lyve Cloud US West 1 (California)
- \ (s3.us-west-1.lyvecloud.seagate.com)
- 3 / Seagate Lyve Cloud AP Southeast 1 (Singapore)
- \ (s3.ap-southeast-1.lyvecloud.seagate.com)
- endpoint> 1
+ Please type in your LyveCloud endpoint.
+ Examples:
+ - s3.us-west-1.{account_name}.lyve.seagate.com (US West 1 - California)
+ - s3.eu-west-1.{account_name}.lyve.seagate.com (EU West 1 - Ireland)
+ Enter a value.
+ endpoint> s3.us-west-1.global.lyve.seagate.com
Leave location constraint blank
@@ -29203,27 +30728,49 @@ This will guide you through an interactive setup process.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
- 1 / Atlanta, GA (USA), us-southeast-1
+ 1 / Amsterdam (Netherlands), nl-ams-1
+ \ (nl-ams-1.linodeobjects.com)
+ 2 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
- 2 / Chicago, IL (USA), us-ord-1
+ 3 / Chennai (India), in-maa-1
+ \ (in-maa-1.linodeobjects.com)
+ 4 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
- 3 / Frankfurt (Germany), eu-central-1
+ 5 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
- 4 / Milan (Italy), it-mil-1
+ 6 / Jakarta (Indonesia), id-cgk-1
+ \ (id-cgk-1.linodeobjects.com)
+ 7 / London 2 (Great Britain), gb-lon-1
+ \ (gb-lon-1.linodeobjects.com)
+ 8 / Los Angeles, CA (USA), us-lax-1
+ \ (us-lax-1.linodeobjects.com)
+ 9 / Madrid (Spain), es-mad-1
+ \ (es-mad-1.linodeobjects.com)
+ 10 / Melbourne (Australia), au-mel-1
+ \ (au-mel-1.linodeobjects.com)
+ 11 / Miami, FL (USA), us-mia-1
+ \ (us-mia-1.linodeobjects.com)
+ 12 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
- 5 / Newark, NJ (USA), us-east-1
+ 13 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
- 6 / Paris (France), fr-par-1
+ 14 / Osaka (Japan), jp-osa-1
+ \ (jp-osa-1.linodeobjects.com)
+ 15 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
- 7 / Seattle, WA (USA), us-sea-1
+ 16 / São Paulo (Brazil), br-gru-1
+ \ (br-gru-1.linodeobjects.com)
+ 17 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
- 8 / Singapore ap-south-1
+ 18 / Singapore, ap-south-1
\ (ap-south-1.linodeobjects.com)
- 9 / Stockholm (Sweden), se-sto-1
+ 19 / Singapore 2, sg-sin-1
+ \ (sg-sin-1.linodeobjects.com)
+ 20 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
- 10 / Washington, DC, (USA), us-iad-1
+ 21 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
- endpoint> 3
+ endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@@ -29381,6 +30928,111 @@ This will leave the config file looking like this.
secret_access_key = SECRET_ACCESS_KEY
endpoint = br-ne1.magaluobjects.com
+MEGA S4
+
+MEGA S4 Object Storage is an S3 compatible object storage system. It has
+a single pricing tier with no additional charges for data transfers or
+API requests and it is included in existing Pro plans.
+
+Here is an example of making a configuration. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process.
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+ Enter name for new remote.
+ name> megas4
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ XX / Amazon S3 Compliant Storage Providers including AWS,... Mega, ...
+ \ (s3)
+ [snip]
+ Storage> s3
+
+ Option provider.
+ Choose your S3 provider.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ [snip]
+ XX / MEGA S4 Object Storage
+ \ (Mega)
+ [snip]
+ provider> Mega
+
+ Option env_auth.
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ Only applies if access_key_id and secret_access_key is blank.
+ Choose a number from below, or type in your own boolean value (true or false).
+ Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+ env_auth>
+
+ Option access_key_id.
+ AWS Access Key ID.
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ access_key_id> XXX
+
+ Option secret_access_key.
+ AWS Secret Access Key (password).
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ secret_access_key> XXX
+
+ Option endpoint.
+ Endpoint for S3 API.
+ Required when using an S3 clone.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / Mega S4 eu-central-1 (Amsterdam)
+ \ (s3.eu-central-1.s4.mega.io)
+ 2 / Mega S4 eu-central-2 (Bettembourg)
+ \ (s3.eu-central-2.s4.mega.io)
+ 3 / Mega S4 ca-central-1 (Montreal)
+ \ (s3.ca-central-1.s4.mega.io)
+ 4 / Mega S4 ca-west-1 (Vancouver)
+ \ (s3.ca-west-1.s4.mega.io)
+ endpoint> 1
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: s3
+ - provider: Mega
+ - access_key_id: XXX
+ - secret_access_key: XXX
+ - endpoint: s3.eu-central-1.s4.mega.io
+ Keep this "megas4" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+This will leave the config file looking like this.
+
+ [megas4]
+ type = s3
+ provider = Mega
+ access_key_id = XXX
+ secret_access_key = XXX
+ endpoint = s3.eu-central-1.s4.mega.io
+
ArvanCloud
ArvanCloud ArvanCloud Object Storage goes beyond the limited traditional
@@ -29763,6 +31415,120 @@ This will leave the config file looking like this.
region = us-east-1
endpoint = s3.petabox.io
+Pure Storage FlashBlade
+
+Pure Storage FlashBlade is a high performance S3-compatible object
+store.
+
+FlashBlade supports most modern S3 features including:
+
+- ListObjectsV2
+- Multipart uploads with AWS-compatible ETags
+- Advanced checksum algorithms (SHA256, CRC32, CRC32C) with trailer
+ support (Purity//FB 4.4.2+)
+- Object versioning and lifecycle management
+- Virtual hosted-style requests (requires DNS configuration)
+
+To configure rclone for Pure Storage FlashBlade:
+
+First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+ Enter name for new remote.
+ name> flashblade
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
+ \ (s3)
+ [snip]
+ Storage> s3
+
+ Option provider.
+ Choose your S3 provider.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ [snip]
+ 9 / Pure Storage FlashBlade Object Storage
+ \ (FlashBlade)
+ [snip]
+ provider> FlashBlade
+
+ Option env_auth.
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ Only applies if access_key_id and secret_access_key is blank.
+ Choose a number from below, or type in your own boolean value (true or false).
+ Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+ env_auth> 1
+
+ Option access_key_id.
+ AWS Access Key ID.
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ access_key_id> ACCESS_KEY_ID
+
+ Option secret_access_key.
+ AWS Secret Access Key (password).
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ secret_access_key> SECRET_ACCESS_KEY
+
+ Option endpoint.
+ Endpoint for S3 API.
+ Required when using an S3 clone.
+ Enter a value. Press Enter to leave empty.
+ endpoint> https://s3.flashblade.example.com
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: s3
+ - provider: FlashBlade
+ - access_key_id: ACCESS_KEY_ID
+ - secret_access_key: SECRET_ACCESS_KEY
+ - endpoint: https://s3.flashblade.example.com
+ Keep this "flashblade" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+This results in the following configuration being stored in
+~/.config/rclone/rclone.conf:
+
+ [flashblade]
+ type = s3
+ provider = FlashBlade
+ access_key_id = ACCESS_KEY_ID
+ secret_access_key = SECRET_ACCESS_KEY
+ endpoint = https://s3.flashblade.example.com
+
+Note: The FlashBlade endpoint should be the S3 data VIP. For
+virtual-hosted style requests, ensure proper DNS configuration:
+subdomains of the endpoint hostname should resolve to a FlashBlade data
+VIP. For example, if your endpoint is https://s3.flashblade.example.com,
+then bucket-name.s3.flashblade.example.com should also resolve to the
+data VIP.
+
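+To sanity check the DNS setup for virtual hosted-style requests, you
+can resolve both names and confirm each returns the data VIP
+(hostnames are placeholders):
+
+ nslookup s3.flashblade.example.com
+ nslookup bucket-name.s3.flashblade.example.com
+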
Storj
Storj is a decentralized cloud storage which can be used through its
@@ -29828,7 +31594,7 @@ from the source).
This has the following consequences:
-- Using rclone rcat will fail as the medatada doesn't match after
+- Using rclone rcat will fail as the metadata doesn't match after
upload
- Uploading files with rclone mount will fail for the same reason
- This can worked around by using --vfs-cache-mode writes or
@@ -30848,7 +32614,7 @@ Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from Box. This only runs from the moment it opens your
browser to the moment you get back the verification code. This is on
-http://127.0.0.1:53682/ and this it may require you to unblock it
+http://127.0.0.1:53682/ and this may require you to unblock it
temporarily if you are running a host firewall.
Once configured you can then use rclone like this,
@@ -32697,6 +34463,32 @@ Properties:
- Type: Duration
- Default: 0s
+--cloudinary-adjust-media-files-extensions
+
+Cloudinary handles media formats as a file attribute and strips it from
+the name, which is unlike most other file systems
+
+Properties:
+
+- Config: adjust_media_files_extensions
+- Env Var: RCLONE_CLOUDINARY_ADJUST_MEDIA_FILES_EXTENSIONS
+- Type: bool
+- Default: true
+
+--cloudinary-media-extensions
+
+Cloudinary supported media extensions
+
+Properties:
+
+- Config: media_extensions
+- Env Var: RCLONE_CLOUDINARY_MEDIA_EXTENSIONS
+- Type: stringArray
+- Default: [3ds 3g2 3gp ai arw avi avif bmp bw cr2 cr3 djvu dng eps3
+ fbx flif flv gif glb gltf hdp heic heif ico indd jp2 jpe jpeg jpg
+ jxl jxr m2ts mov mp4 mpeg mts mxf obj ogv pdf ply png psd svg tga
+ tif tiff ts u3ma usdz wdp webm webp wmv]
+
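+For example, to download media without rclone adjusting the
+extensions, you could run something like this (the remote name and
+paths are placeholders):
+
+ rclone copy cloudinary:media /tmp/media --cloudinary-adjust-media-files-extensions=false
+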
--cloudinary-description
Description of the remote.
@@ -33757,7 +35549,7 @@ The initial nonce is generated from the operating systems crypto strong
random number generator. The nonce is incremented for each chunk read
making sure each nonce is unique for each block written. The chance of a
nonce being reused is minuscule. If you wrote an exabyte of data (10¹⁸
-bytes) you would have a probability of approximately 2×10⁻³² of re-using
+bytes) you would have a probability of approximately 2×10⁻³² of reusing
a nonce.
Chunk
@@ -34181,6 +35973,179 @@ Any metadata supported by the underlying remote is read and written.
See the metadata docs for more info.
+DOI
+
+The DOI remote is a read only remote for reading files from digital
+object identifiers (DOI).
+
+Currently, the DOI backend supports DOIs hosted with:
+
+- InvenioRDM
+ - Zenodo
+ - CaltechDATA
+ - Other InvenioRDM repositories
+- Dataverse
+ - Harvard Dataverse
+ - Other Dataverse repositories
+
+Paths are specified as remote:path
+
+Paths may be as deep as required, e.g. remote:directory/subdirectory.
+
+Configuration
+
+Here is an example of how to make a remote called remote. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ Enter name for new remote.
+ name> remote
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ [snip]
+ XX / DOI datasets
+ \ (doi)
+ [snip]
+ Storage> doi
+ Option doi.
+ The DOI or the doi.org URL.
+ Enter a value.
+ doi> 10.5281/zenodo.5876941
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+ Configuration complete.
+ Options:
+ - type: doi
+ - doi: 10.5281/zenodo.5876941
+ Keep this "remote" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
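+Once configured, the remote can be used like any other read only
+remote, e.g.
+
+ rclone lsf remote:
+ rclone copy remote: /tmp/dataset
+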
+Standard options
+
+Here are the Standard options specific to doi (DOI datasets).
+
+--doi-doi
+
+The DOI or the doi.org URL.
+
+Properties:
+
+- Config: doi
+- Env Var: RCLONE_DOI_DOI
+- Type: string
+- Required: true
+
+Advanced options
+
+Here are the Advanced options specific to doi (DOI datasets).
+
+--doi-provider
+
+DOI provider.
+
+The DOI provider can be set when rclone does not automatically recognize
+a supported DOI provider.
+
+Properties:
+
+- Config: provider
+- Env Var: RCLONE_DOI_PROVIDER
+- Type: string
+- Required: false
+- Examples:
+ - "auto"
+ - Auto-detect provider
+ - "zenodo"
+ - Zenodo
+ - "dataverse"
+ - Dataverse
+ - "invenio"
+ - Invenio
+
+--doi-doi-resolver-api-url
+
+The URL of the DOI resolver API to use.
+
+The DOI resolver can be set for testing or for cases when the
+canonical DOI resolver API cannot be used.
+
+Defaults to "https://doi.org/api".
+
+Properties:
+
+- Config: doi_resolver_api_url
+- Env Var: RCLONE_DOI_DOI_RESOLVER_API_URL
+- Type: string
+- Required: false
+
+--doi-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_DOI_DESCRIPTION
+- Type: string
+- Required: false
+
+Backend commands
+
+Here are the commands specific to the doi backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the backend command for more info on how to pass options and
+arguments.
+
+These can be run on a running backend using the rc command
+backend/command.
+
+metadata
+
+Show metadata about the DOI.
+
+ rclone backend metadata remote: [options] [+]
+
+This command returns a JSON object with some information about the DOI.
+
+ rclone backend metadata doi:
+
+It returns a JSON object representing metadata about the DOI.
+
+set
+
+Set command for updating the config parameters.
+
+ rclone backend set remote: [options] [+]
+
+This set command can be used to update the config parameters for a
+running doi backend.
+
+Usage Examples:
+
+ rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI
+
+The option keys are named as they are in the config file.
+
+This rebuilds the connection to the doi backend when it is called with
+the new parameters. Only new parameters need be passed as the values
+will default to those currently in use.
+
+It doesn't return anything.
+
Dropbox
Paths are specified as remote:path
@@ -34359,6 +36324,42 @@ completes is recommended. Or you could do an initial transfer with
Note that there may be a pause when quitting rclone while rclone
finishes up the last batch using this mode.
+Exporting files
+
+Certain files in Dropbox are "exportable", such as Dropbox Paper
+documents. These files need to be converted to another format in order
+to be downloaded. Often multiple formats are available for conversion.
+
+When rclone downloads an exportable file, it chooses the format to
+download based on the --dropbox-export-formats setting. By default, the
+export formats are html,md, which are sensible defaults for Dropbox
+Paper.
+
+Rclone chooses the first format ID in the export formats list that
+Dropbox supports for a given file. If no format in the list is usable,
+rclone will choose the default format that Dropbox suggests.
+
+Rclone will change the extension to correspond to the export format.
+Here are some examples of how extensions are mapped:
+
+ File type Filename in Dropbox Filename in rclone
+ ---------------- --------------------- --------------------
+ Paper mydoc.paper mydoc.html
+ Paper template mydoc.papert mydoc.papert.html
+ other mydoc mydoc.html
+
+Importing exportable files is not yet supported by rclone.
+
+Here are the supported export extensions known by rclone. Note that
+rclone does not currently support other formats not on this list, even
+if Dropbox supports them. Also, Dropbox could change the list of
+supported formats at any time.
+
+ Format ID Name Description
+ ----------- ---------- ----------------------
+ html HTML HTML document
+ md Markdown Markdown text format
+
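+For example, to prefer Markdown over HTML when downloading Paper
+documents, you could run something like this (the remote and paths are
+placeholders):
+
+ rclone copy dropbox:Papers /tmp/papers --dropbox-export-formats md,html
+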
Standard options
Here are the Standard options specific to dropbox (Dropbox).
@@ -34563,6 +36564,57 @@ Properties:
- Type: string
- Required: false
+--dropbox-export-formats
+
+Comma separated list of preferred formats for exporting files
+
+Certain Dropbox files can only be accessed by exporting them to another
+format. These include Dropbox Paper documents.
+
+For each such file, rclone will choose the first format on this list
+that Dropbox considers valid. If none is valid, it will choose Dropbox's
+default format.
+
+Known formats include: "html", "md" (markdown)
+
+Properties:
+
+- Config: export_formats
+- Env Var: RCLONE_DROPBOX_EXPORT_FORMATS
+- Type: CommaSepList
+- Default: html,md
+
+--dropbox-skip-exports
+
+Skip exportable files in all listings.
+
+If given, exportable files practically become invisible to rclone.
+
+Properties:
+
+- Config: skip_exports
+- Env Var: RCLONE_DROPBOX_SKIP_EXPORTS
+- Type: bool
+- Default: false
+
+--dropbox-show-all-exports
+
+Show all exportable files in listings.
+
+Adding this flag will allow all exportable files to be server side
+copied. Note that rclone doesn't add extensions to the exportable file
+names in this mode.
+
+Do not use this flag when trying to download exportable files - rclone
+will fail to download them.
+
+Properties:
+
+- Config: show_all_exports
+- Env Var: RCLONE_DROPBOX_SHOW_ALL_EXPORTS
+- Type: bool
+- Default: false
+
--dropbox-batch-mode
Upload file batching sync|async|off.
@@ -34638,7 +36690,7 @@ Properties:
--dropbox-batch-commit-timeout
-Max time to wait for a batch to finish committing
+Max time to wait for a batch to finish committing. (no longer used)
Properties:
@@ -34685,6 +36737,11 @@ non-personal account otherwise the visibility may not be correct. (Note
that --expire isn't supported on personal accounts). See the forum
discussion and the dropbox SDK issue.
+Modification times for Dropbox Paper documents are not exact, and may
+not change for some period after the document is edited. To make sure
+you get recent changes in a sync, either wait an hour or so, or use
+--ignore-times to force a full sync.
+
Get your own Dropbox App ID
When you use rclone with Dropbox in its default configuration you are
@@ -34994,6 +37051,214 @@ Properties:
- Type: string
- Required: false
+FileLu
+
+FileLu is a reliable cloud storage provider offering features like
+secure file uploads, downloads, flexible storage options, and sharing
+capabilities. With support for high storage limits and seamless
+integration with rclone, FileLu makes managing files in the cloud easy.
+Its cross-platform file backup services let you upload and back up files
+from any internet-connected device.
+
+Configuration
+
+Here is an example of how to make a remote called filelu. First, run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> filelu
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ [snip]
+ xx / FileLu Cloud Storage
+ \ "filelu"
+ [snip]
+ Storage> filelu
+ Enter your FileLu Rclone Key:
+ key> YOUR_FILELU_RCLONE_KEY RC_xxxxxxxxxxxxxxxxxxxxxxxx
+ Configuration complete.
+
+ Keep this "filelu" remote?
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+Paths
+
+A path without an initial / will operate in the Rclone directory.
+
+A path with an initial / will operate at the root where you can see the
+Rclone directory.
+
+ $ rclone lsf TestFileLu:/
+ CCTV/
+ Camera/
+ Documents/
+ Music/
+ Photos/
+ Rclone/
+ Vault/
+ Videos/
+
+Example Commands
+
+Create a new folder named foldername in the Rclone directory:
+
+ rclone mkdir filelu:foldername
+
+Delete a folder on FileLu:
+
+ rclone rmdir filelu:/folder/path/
+
+Delete a file on FileLu:
+
+ rclone delete filelu:/hello.txt
+
+List files from your FileLu account:
+
+ rclone ls filelu:
+
+List all folders:
+
+ rclone lsd filelu:
+
+Copy a specific file to the FileLu root:
+
+ rclone copy D:\hello.txt filelu:
+
+Copy files from a local directory to a FileLu directory:
+
+ rclone copy D:/local-folder filelu:/remote-folder/path/
+
+Download a file from FileLu into a local directory:
+
+ rclone copy filelu:/file-path/hello.txt D:/local-folder
+
+Move files from a local directory to a FileLu directory:
+
+ rclone move D:\local-folder filelu:/remote-path/
+
+Sync files from a local directory to a FileLu directory:
+
+ rclone sync --interactive D:/local-folder filelu:/remote-path/
+
+Mount remote to local Linux:
+
+ rclone mount filelu: /root/mnt --vfs-cache-mode full
+
+Mount remote to local Windows:
+
+ rclone mount filelu: D:/local_mnt --vfs-cache-mode full
+
+Get storage info about the FileLu account:
+
+ rclone about filelu:
+
+All the other rclone commands are supported by this backend.
+
+FolderID instead of folder path
+
+We use the FolderID instead of the folder name to prevent errors when
+users have identical folder names or paths. For example, if a user has
+two or three folders named "test_folders," the system may become
+confused and won't know which folder to move. In large storage
+systems, where some clients have hundreds of thousands of folders and
+a few million files, duplicate folder names or paths are quite common.
+
+Modification Times and Hashes
+
+FileLu supports both modification times and MD5 hashes.
+
+FileLu only supports filenames and folder names up to 255 characters in
+length, where a character is a Unicode character.
+
+Duplicated Files
+
+When uploading and syncing via Rclone, FileLu does not allow uploading
+duplicate files within the same directory. However, you can upload
+duplicate files, provided they are in different directories (folders).
+
+Failure to Log In / Invalid Credentials or Key
+
+Ensure that you have the correct Rclone key, which can be found in My
+Account. Every time you toggle Rclone OFF and ON in My Account, a new
+RC_xxxxxxxxxxxxxxxxxxxx key is generated. Be sure to update your Rclone
+configuration with the new key.
+
+If you are connecting to your FileLu remote for the first time and
+encounter an error such as:
+
+ Failed to create file system for "my-filelu-remote:": couldn't login: Invalid credentials
+
+Ensure your Rclone Key is correct.
+
+Process killed
+
+Accounts with large files or extensive metadata may experience
+significant memory usage during list/sync operations. Ensure the system
+running rclone has sufficient memory and CPU to handle these operations.
+
+Standard options
+
+Here are the Standard options specific to filelu (FileLu Cloud Storage).
+
+--filelu-key
+
+Your FileLu Rclone key from My Account
+
+Properties:
+
+- Config: key
+- Env Var: RCLONE_FILELU_KEY
+- Type: string
+- Required: true
+
+Advanced options
+
+Here are the Advanced options specific to filelu (FileLu Cloud Storage).
+
+--filelu-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_FILELU_ENCODING
+- Type: Encoding
+- Default:
+ Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation
+
+--filelu-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_FILELU_DESCRIPTION
+- Type: string
+- Required: false
+
+Limitations
+
+This backend uses a custom library implementing the FileLu API. While it
+supports file transfers, some advanced features may not yet be
+available. Please report any issues to the rclone forum for
+troubleshooting and updates.
+
+For further information, visit FileLu's website.
+
Files.com
Files.com is a cloud storage service that provides a secure and easy way
@@ -38451,6 +40716,32 @@ attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be copied
before copying.
+moveid
+
+Move files by ID
+
+ rclone backend moveid remote: [options] [+]
+
+This command moves files by ID
+
+Usage:
+
+ rclone backend moveid drive: ID path
+ rclone backend moveid drive: ID1 path1 ID2 path2
+
+It moves the drive file with ID given to the path (an rclone path which
+will be passed internally to rclone moveto).
+
+The path should end with a / to indicate the file should be moved as
+named to this directory. If it doesn't end with a / then the last path
+component will be used as the file name.
+
+If the destination is a drive backend then server-side moving will be
+attempted if possible.
+
+Use the --interactive/-i or --dry-run flag to see what would be moved
+beforehand.
+
exportformats
Dump the export formats for debug purposes
@@ -38718,6 +41009,11 @@ NB The Google Photos API which rclone uses has quite a few limitations,
so please read the limitations section carefully to make sure it is
suitable for your use.
+NB From March 31, 2025 rclone can only download photos it uploaded. This
+limitation is due to policy changes at Google. You may need to run
+rclone config reconnect remote: to make rclone work again after
+upgrading to rclone v1.70.
+
Configuration
The initial setup for google cloud storage involves getting a token from
@@ -39081,7 +41377,7 @@ Use the gphotosdl proxy for downloading the full resolution images
The Google API will deliver images and video which aren't full
resolution, and/or have EXIF data missing.
-However if you ue the gphotosdl proxy tnen you can download original,
+However if you use the gphotosdl proxy then you can download original,
unchanged images.
This runs a headless browser in the background.
@@ -39192,7 +41488,7 @@ Properties:
--gphotos-batch-commit-timeout
-Max time to wait for a batch to finish committing
+Max time to wait for a batch to finish committing. (no longer used)
Properties:
@@ -39219,6 +41515,11 @@ videos or images or formats that Google Photos doesn't understand,
rclone will upload the file, then Google Photos will give an error when
it is turned into a media item.
+NB From March 31, 2025 rclone can only download photos it uploaded. This
+limitation is due to policy changes at Google. You may need to run
+rclone config reconnect remote: to make rclone work again after
+upgrading to rclone v1.70.
+
Note that all media items uploaded to Google Photos through the API are
stored in full resolution at "original quality" and will count towards
your storage quota in your Google Account. The API does not offer a way
@@ -40978,7 +43279,7 @@ This will guide you through an interactive setup process:
config_2fa> 2FACODE
Remote config
--------------------
- [koofr]
+ [iclouddrive]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -40994,6 +43295,27 @@ Advanced Data Protection
ADP is currently unsupported and need to be disabled
+On iPhone, Settings > Apple Account > iCloud > 'Access iCloud Data on
+the Web' must be ON, and 'Advanced Data Protection' OFF.
+
+Troubleshooting
+
+Missing PCS cookies from the request
+
+This means you have Advanced Data Protection (ADP) turned on. This is
+not supported at the moment. If you want to use rclone you will have to
+turn it off. See above for how to turn it off.
+
+You will need to clear the cookies and the trust_token fields in the
+config. Or you can delete the remote config and start again.
+
+You should then run rclone config reconnect remote:.
+
+Note that changing the ADP setting may not take effect immediately - you
+may need to wait a few hours or a day before you can get rclone to work
+- keep clearing the config entry and running rclone config reconnect
+remote: until rclone functions properly.
+
Standard options
Here are the Standard options specific to iclouddrive (iCloud Drive).
@@ -41277,6 +43599,22 @@ Properties:
- Type: string
- Required: false
+--internetarchive-item-derive
+
+Whether to trigger derive on the IA item or not. If set to false, the
+item will not be derived by IA upon upload. The derive process produces
+a number of secondary files from an upload to make an upload more usable
+on the web. Setting this to false is useful for uploading files that
+are already in a format that IA can display, or to reduce the burden
+on IA's infrastructure.
+
+Properties:
+
+- Config: item_derive
+- Env Var: RCLONE_INTERNETARCHIVE_ITEM_DERIVE
+- Type: bool
+- Default: true
+
Advanced options
Here are the Advanced options specific to internetarchive (Internet
@@ -41308,6 +43646,19 @@ Properties:
- Type: string
- Default: "https://archive.org"
+--internetarchive-item-metadata
+
+Metadata to be set on the IA item; this is different from file-level
+metadata that can be set using --metadata-set. Format is key=value and
+the 'x-archive-meta-' prefix is automatically added.
+
+Properties:
+
+- Config: item_metadata
+- Env Var: RCLONE_INTERNETARCHIVE_ITEM_METADATA
+- Type: stringArray
+- Default: []
+
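+For example, to set item metadata on upload and skip the derive step,
+something like this should work (the item name and metadata values are
+placeholders):
+
+ rclone copy file.txt remote:my-item --internetarchive-item-metadata title=TITLE --internetarchive-item-metadata mediatype=texts --internetarchive-item-derive=false
+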
--internetarchive-disable-checksum
Don't ask the server to test against MD5 checksum calculated by rclone.
@@ -42445,6 +44796,10 @@ error like oauth2: server response missing access_token.
- Go to Security / "Пароль и безопасность"
- Click password for apps / "Пароли для внешних приложений"
- Add the password - give it a name - eg "rclone"
+- Select the permissions level. For some reason just "Full access to
+ Cloud" (WebDav) doesn't work for Rclone currently. You have to
+ select "Full access to Mail, Cloud and Calendar" (all protocols).
+ (thread on forum.rclone.org)
- Copy the password and use this password below - your normal login
password won't work.
@@ -44274,6 +46629,63 @@ Properties:
- Type: int
- Default: 16
+--azureblob-copy-cutoff
+
+Cutoff for switching to multipart copy.
+
+Any files larger than this that need to be server-side copied will be
+copied in chunks of chunk_size using the put block list API.
+
+Files smaller than this limit will be copied with the Copy Blob API.
+
+Properties:
+
+- Config: copy_cutoff
+- Env Var: RCLONE_AZUREBLOB_COPY_CUTOFF
+- Type: SizeSuffix
+- Default: 8Mi
+
+--azureblob-copy-concurrency
+
+Concurrency for multipart copy.
+
+This is the number of chunks of the same file that are copied
+concurrently.
+
+These chunks are not buffered in memory and Microsoft recommends setting
+this value to greater than 1000 in the azcopy documentation.
+
+https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-optimize#increase-concurrency
+
+In tests, copy speed increases almost linearly with copy concurrency.
+
+Properties:
+
+- Config: copy_concurrency
+- Env Var: RCLONE_AZUREBLOB_COPY_CONCURRENCY
+- Type: int
+- Default: 512
+
+--azureblob-use-copy-blob
+
+Whether to use the Copy Blob API when copying to the same storage
+account.
+
+If true (the default) then rclone will use the Copy Blob API for copies
+to the same storage account even when the size is above the copy_cutoff.
+
+Rclone assumes that the same storage account means the same config and
+does not check for the same storage account in different configs.
+
+There should be no need to change this value.
+
+Properties:
+
+- Config: use_copy_blob
+- Env Var: RCLONE_AZUREBLOB_USE_COPY_BLOB
+- Type: bool
+- Default: true
+
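+As an illustration, a large server-side copy within one storage
+account might be tuned like this (the container names are placeholders
+and the values are only examples):
+
+ rclone copy remote:src-container remote:dst-container --azureblob-copy-cutoff 100Mi --azureblob-copy-concurrency 1000
+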
--azureblob-list-chunk
Size of blob list.
@@ -44492,8 +46904,10 @@ You can set custom upload headers with the --header-upload flag.
- Content-Encoding
- Content-Language
- Content-Type
+- X-MS-Tags
-Eg --header-upload "Content-Type: text/potato"
+Eg --header-upload "Content-Type: text/potato" or
+--header-upload "X-MS-Tags: foo=bar"
Limitations
@@ -44733,6 +47147,13 @@ identity, the user-assigned identity will be used by default.
If the resource has multiple user-assigned identities you will need to
unset env_auth and set use_msi instead. See the use_msi section.
+If you are operating in disconnected clouds, or private clouds such as
+Azure Stack you may want to set disable_instance_discovery = true. This
+determines whether rclone requests Microsoft Entra instance metadata
+from https://login.microsoft.com/ before authenticating. Setting this to
+true will skip this request, making you responsible for ensuring the
+configured authority is valid and trustworthy.
+
Env Auth: 3. Azure CLI credentials (as used by the az tool)
Credentials created with the az tool can be picked up using env_auth.
@@ -44830,6 +47251,13 @@ msi_client_id, or msi_mi_res_id parameters.
If none of msi_object_id, msi_client_id, or msi_mi_res_id is set, this
is equivalent to using env_auth.
+Azure CLI tool az
+
+Set to use the Azure CLI tool az as the sole means of authentication.
+Setting this can be useful if you wish to use the az CLI on a host with
+a System Managed Identity that you do not want to use. Don't set
+env_auth at the same time.
+
Standard options
Here are the Standard options specific to azurefiles (Microsoft Azure
@@ -45128,6 +47556,37 @@ Properties:
- Type: string
- Required: false
+--azurefiles-disable-instance-discovery
+
+Skip requesting Microsoft Entra instance metadata. This should be set
+true only by applications authenticating in disconnected clouds, or
+private clouds such as Azure Stack. It determines whether rclone
+requests Microsoft Entra instance metadata from
+https://login.microsoft.com/ before authenticating. Setting this to true
+will skip this request, making you responsible for ensuring the
+configured authority is valid and trustworthy.
+
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
+--azurefiles-use-az
+
+Use Azure CLI tool az for authentication. Set to use the Azure CLI tool
+az as the sole means of authentication. Setting this can be useful if
+you wish to use the az CLI on a host with a System Managed Identity that
+you do not want to use. Don't set env_auth at the same time.
+
+Properties:
+
+- Config: use_az
+- Env Var: RCLONE_AZUREFILES_USE_AZ
+- Type: bool
+- Default: false
+
--azurefiles-endpoint
Endpoint for the service.
@@ -45589,7 +48048,8 @@ Properties:
- "us"
- Microsoft Cloud for US Government
- "de"
- - Microsoft Cloud Germany
+ - Microsoft Cloud Germany (deprecated - try global region
+ first).
- "cn"
- Azure and Office 365 operated by Vnet Group in China
@@ -45661,6 +48121,26 @@ Properties:
- Type: bool
- Default: false
+--onedrive-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+Any files larger than this will be uploaded in chunks of chunk_size.
+
+This is disabled by default as uploading using single part uploads
+causes rclone to use twice the storage on OneDrive Business: when
+rclone sets the modification time after the upload, OneDrive creates a
+new version.
+
+See: https://github.com/rclone/rclone/issues/1716
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_ONEDRIVE_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: off
+
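+If you want to experiment with single part uploads despite the
+versioning caveat above, a sketch (the cutoff value is only an
+example):
+
+ rclone copy /path/to/files remote: --onedrive-upload-cutoff 100Mi
+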
--onedrive-chunk-size
Chunk size to upload files with - must be multiple of 320k (327,680
@@ -46248,6 +48728,38 @@ Here are the possible system metadata items for the onedrive backend.
See the metadata docs for more info.
+Impersonate other users as Admin
+
+Unlike Google Drive, where any domain user can be impersonated via
+service accounts, OneDrive requires you to authenticate as an admin
+account and manually set up a remote per user you wish to impersonate.
+
+1. In Microsoft 365 Admin Center, open each user you need to
+ "impersonate" and go to the OneDrive section. There is a heading
+ called "Get access to files", you need to click to create the link,
+ this creates the link of the format
+ https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/
+ but also changes the permissions so that your admin user has access.
+2. Then in PowerShell run the following commands:
+
+ Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
+ Import-Module Microsoft.Graph.Files
+ Connect-MgGraph -Scopes "Files.ReadWrite.All"
+ # Follow the steps to allow access to your admin user
+ # Then run this for each user you want to impersonate to get the Drive ID
+ Get-MgUserDefaultDrive -UserId '{emailaddress}'
+ # This will give you output of the format:
+ # Name Id DriveType CreatedDateTime
+ # ---- -- --------- ---------------
+ # OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm
+
+3. Then in rclone add a onedrive remote, and when asked how to find
+ the drive choose the "Type in driveID" option, entering the DriveID
+ you got in the previous step. Use one remote per user. Rclone will
+ then confirm the drive ID, and hopefully give you a message of Found
+ drive "root" of type "business" and then include the URL of the
+ format
+ https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents
+
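+The resulting per-user remote might end up looking something like this
+in the config file (the name and drive ID are placeholders; the oauth
+token is added by the normal authorization flow):
+
+ [onedrive-alice]
+ type = onedrive
+ drive_id = b!XYZ123
+ drive_type = business
+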
Limitations
If you don't use rclone for 90 days the refresh token will expire. This
@@ -46657,6 +49169,30 @@ Properties:
- Type: SizeSuffix
- Default: 10Mi
+--opendrive-access
+
+Files and folders will be uploaded with this access permission (default
+private)
+
+Properties:
+
+- Config: access
+- Env Var: RCLONE_OPENDRIVE_ACCESS
+- Type: string
+- Default: "private"
+- Examples:
+ - "private"
+ - The file or folder access can be granted in a way that will
+ allow select users to view, read or write what is absolutely
+ essential for them.
+ - "public"
+ - The file or folder can be downloaded by anyone from a web
+ browser. The link can be shared in any way.
+ - "hidden"
+ - The file or folder has the same restrictions as Public, but
+ can only be accessed by users who know the URL of the file or
+ folder link.
+
--opendrive-description
Description of the remote.
@@ -52536,6 +55072,20 @@ Properties:
- Type: string
- Required: false
+--sftp-http-proxy
+
+URL for HTTP CONNECT proxy
+
+Set this to a URL for an HTTP proxy which supports the HTTP CONNECT
+verb.
+
+Properties:
+
+- Config: http_proxy
+- Env Var: RCLONE_SFTP_HTTP_PROXY
+- Type: string
+- Required: false
+
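+For example, to reach an SFTP server through an HTTP CONNECT proxy,
+something like this should work (the proxy URL is a placeholder):
+
+ rclone lsd remote: --sftp-http-proxy http://proxy.example.com:3128
+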
--sftp-copy-is-hardlink
Set to enable server side copies using hardlinks.
@@ -52576,8 +55126,8 @@ Properties:
Limitations
On some SFTP servers (e.g. Synology) the paths are different for SSH and
-SFTP so the hashes can't be calculated properly. For them using
-disable_hashcheck is a good idea.
+SFTP so the hashes can't be calculated properly. You can either use
+--sftp-path-override or disable_hashcheck.
The only ssh agent supported under Windows is Putty's pageant.
@@ -52795,6 +55345,22 @@ Properties:
- Type: string
- Required: false
+--smb-use-kerberos
+
+Use Kerberos authentication.
+
+If set, rclone will use Kerberos authentication instead of NTLM. This
+requires a valid Kerberos configuration and credentials cache to be
+available, either in the default locations or as specified by the
+KRB5_CONFIG and KRB5CCNAME environment variables.
+
+Properties:
+
+- Config: use_kerberos
+- Env Var: RCLONE_SMB_USE_KERBEROS
+- Type: bool
+- Default: false
+
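+For example, after obtaining a ticket with kinit, something like this
+should work (the principal and cache path are placeholders):
+
+ kinit user@EXAMPLE.COM
+ KRB5CCNAME=/tmp/krb5cc_1000 rclone lsd remote: --smb-use-kerberos
+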
Advanced options
Here are the Advanced options specific to smb (SMB / CIFS).
@@ -54494,12 +57060,12 @@ To copy a local directory to an WebDAV directory called backup
Modification times and hashes
Plain WebDAV does not support modified times. However when used with
-Fastmail Files, Owncloud or Nextcloud rclone will support modified
+Fastmail Files, ownCloud or Nextcloud rclone will support modified
times.
Likewise plain WebDAV does not support hashes, however when used with
-Fastmail Files, Owncloud or Nextcloud rclone will support SHA1 and MD5
-hashes. Depending on the exact version of Owncloud or Nextcloud hashes
+Fastmail Files, ownCloud or Nextcloud rclone will support SHA1 and MD5
+hashes. Depending on the exact version of ownCloud or Nextcloud hashes
may appear on all objects, or only on objects which had a hash uploaded
with them.
@@ -54536,7 +57102,9 @@ Properties:
- "nextcloud"
- Nextcloud
- "owncloud"
- - Owncloud
+ - Owncloud 10 PHP based WebDAV server
+ - "infinitescale"
+ - ownCloud Infinite Scale
- "sharepoint"
- Sharepoint Online, authenticated by Microsoft account
- "sharepoint-ntlm"
@@ -54748,21 +57316,30 @@ and use this as the password.
Fastmail supports modified times using the X-OC-Mtime header.
-Owncloud
+ownCloud
Click on the settings cog in the bottom right of the page and this will
show the WebDAV URL that rclone needs in the config step. It will look
something like https://example.com/remote.php/webdav/.
-Owncloud supports modified times using the X-OC-Mtime header.
+ownCloud supports modified times using the X-OC-Mtime header.
Nextcloud
-This is configured in an identical way to Owncloud. Note that Nextcloud
-initially did not support streaming of files (rcat) whereas Owncloud
+This is configured in an identical way to ownCloud. Note that Nextcloud
+initially did not support streaming of files (rcat) whereas ownCloud
did, but this seems to be fixed as of 2020-11-27 (tested with rclone
v1.53.1 and Nextcloud Server v19).
+ownCloud Infinite Scale
+
+The WebDAV URL for Infinite Scale can be found in the details panel
+of any space in Infinite Scale, provided the user has enabled its
+display via the corresponding checkbox in their personal settings.
+
+Infinite Scale uses the tus chunked upload protocol. The chunk size
+is currently fixed at 10 MB.
+
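+A remote for Infinite Scale can be created non-interactively, for
+example (the remote name and URL are placeholders for your instance):
+
+    rclone config create ocis webdav url=https://ocis.example.com/dav/spaces/your-space vendor=infinitescale
+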
Sharepoint Online
Rclone can be used with Sharepoint provided by OneDrive for Business or
@@ -56157,6 +58734,269 @@ Options:
Changelog
+v1.70.0 - 2025-06-17
+
+See commits
+
+- New backends
+ - DOI (Flora Thiebaut)
+ - FileLu (kingston125)
+ - New S3 providers:
+ - MEGA S4 (Nick Craig-Wood)
+ - Pure Storage FlashBlade (Jeremy Daer)
+- New commands
+ - convmv: for moving and transforming files (nielash)
+- New Features
+ - Add --max-connections to control maximum backend concurrency
+ (Nick Craig-Wood)
+ - Add --max-buffer-memory to limit total buffer memory usage (Nick
+ Craig-Wood)
+ - Add transform library and --name-transform flag (nielash)
+ - sync: Implement --list-cutoff to allow on disk sorting for
+ reduced memory use (Nick Craig-Wood)
+ - accounting: Add listed stat for number of directory entries
+ listed (Nick Craig-Wood)
+ - backend: Skip hash calculation when the hashType is None
+ (Oleksiy Stashok)
+ - build
+ - Update to go1.24 and make go1.22 the minimum required
+ version (Nick Craig-Wood)
+ - Disable docker builds on PRs & add missing dockerfile
+ changes (Anagh Kumar Baranwal)
+ - Modernize Go usage (Nick Craig-Wood)
+ - Update all dependencies (Nick Craig-Wood)
+ - cmd/authorize: Show required arguments in help text (simwai)
+ - cmd/config: add --no-output option (Jess)
+ - cmd/gitannex
+ - Tweak parsing of "rcloneremotename" config (Dan McArdle)
+ - Permit remotes with options (Dan McArdle)
+ - Reject unknown layout modes in INITREMOTE (Dan McArdle)
+ - docker image: Add label org.opencontainers.image.source for
+ release notes in Renovate dependency updates (Robin Schneider)
+ - doc fixes (albertony, Andrew Kreimer, Ben Boeckel, Christoph
+ Berger, Danny Garside, Dimitri Papadopoulos, eccoisle, Ed
+ Craig-Wood, Fernando Fernández, jack, Jeff Geerling, Jugal
+ Kishore, kingston125, luzpaz, Markus Gerstel, Matt Ickstadt,
+ Michael Kebe, Nick Craig-Wood, PrathameshLakawade, Ser-Bul,
+ simonmcnair, Tim White, Zachary Vorhies)
+ - filter:
+ - Add --hash-filter to deterministically select a subset of
+ files (Nick Craig-Wood)
+ - Show --min-size and --max-size in --dump filters (Nick
+ Craig-Wood)
+ - hash: Add SHA512 support for file hashes (Enduriel)
+ - http servers: Add --user-from-header to use for authentication
+ (Moises Lima)
+ - lib/batcher: Deprecate unused option: batch_commit_timeout (Dan
+ McArdle)
+ - log:
+ - Remove github.com/sirupsen/logrus and replace with log/slog
+ (Nick Craig-Wood)
+ - Add --windows-event-log-level to support Windows Event Log
+ (Nick Craig-Wood)
+ - rc
+ - Add short parameter to core/stats to not return
+ transferring and checking (Nick Craig-Wood)
+ - In options/info make FieldName contain a "." if it should be
+ nested (Nick Craig-Wood)
+ - Add rc control for serve commands (Nick Craig-Wood)
+ - rcserver: Improve content-type check (Jonathan Giannuzzi)
+ - serve nfs
+ - Update docs to note Windows is not supported (Zachary
+ Vorhies)
+ - Change the format of --nfs-cache-type symlink file handles
+ (Nick Craig-Wood)
+ - Make metadata files have special file handles (Nick
+ Craig-Wood)
+ - touch: Make touch obey --transfers (Nick Craig-Wood)
+ - version: Add --deps flag to show dependencies and other build
+ info (Nick Craig-Wood)
+- Bug Fixes
+ - serve s3:
+ - Fix ListObjectsV2 response (fhuber)
+ - Remove redundant handler initialization (Tho Neyugn)
+ - stats: Fix goroutine leak and improve stats accounting process
+ (Nathanael Demacon)
+- VFS
+ - Add --vfs-metadata-extension to expose metadata sidecar files
+ (Nick Craig-Wood)
+- Azure Blob
+ - Add support for x-ms-tags header (Trevor Starick)
+ - Cleanup uncommitted blocks on upload errors (Nick Craig-Wood)
+ - Speed up server side copies for small files (Nick Craig-Wood)
+ - Implement multipart server side copy (Nick Craig-Wood)
+ - Remove uncommitted blocks on InvalidBlobOrBlock error (Nick
+ Craig-Wood)
+ - Fix handling of objects with // in (Nick Craig-Wood)
+ - Handle retry error codes more carefully (Nick Craig-Wood)
+ - Fix errors not being retried when doing single part copy (Nick
+ Craig-Wood)
+ - Fix multipart server side copies of 0 sized files (Nick
+ Craig-Wood)
+- Azurefiles
+ - Add --azurefiles-use-az and
+ --azurefiles-disable-instance-discovery (b-wimmer)
+- B2
+ - Add SkipDestructive handling to backend commands (Pat Patterson)
+ - Use file id from listing when not presented in headers (ahxxm)
+- Cloudinary
+ - Automatically add/remove known media files extensions
+ (yuval-cloudinary)
+ - Var naming convention (yuval-cloudinary)
+- Drive
+ - Added backend moveid command (Spencer McCullough)
+- Dropbox
+ - Support Dropbox Paper (Dave Vasilevsky)
+- FTP
+ - Add --ftp-http-proxy to connect via HTTP CONNECT proxy
+- Gofile
+ - Update to use new direct upload endpoint (wbulot)
+- Googlephotos
+ - Update read only and read write scopes to meet Google's
+ requirements. (Germán Casares)
+- Iclouddrive
+ - Fix panic and files potentially downloaded twice (Clément
+ Wehrung)
+- Internetarchive
+ - Add --internetarchive-metadata="key=value" for setting item
+ metadata (Corentin Barreau)
+- Onedrive
+ - Fix "The upload session was not found" errors (Nick Craig-Wood)
+ - Re-add --onedrive-upload-cutoff flag (Nick Craig-Wood)
+ - Fix crash if no metadata was updated (Nick Craig-Wood)
+- Opendrive
+ - Added --opendrive-access flag to handle permissions (Joel K
+ Biju)
+- Pcloud
+ - Fix "Access denied. You do not have permissions to perform this
+ operation" on large uploads (Nick Craig-Wood)
+- S3
+ - Fix handling of objects with // in (Nick Craig-Wood)
+ - Add IBM IAM signer (Alexander Minbaev)
+ - Split the GCS quirks into --s3-use-x-id and
+ --s3-sign-accept-encoding (Nick Craig-Wood)
+ - Implement paged listing interface ListP (Nick Craig-Wood)
+ - Add Pure Storage FlashBlade provider support (Jeremy Daer)
+ - Require custom endpoint for Lyve Cloud v2 support
+ (PrathameshLakawade)
+ - MEGA S4 support (Nick Craig-Wood)
+- SFTP
+ - Add --sftp-http-proxy to connect via HTTP CONNECT proxy (Nick
+ Craig-Wood)
+- Smb
+ - Add support for kerberos authentication (Jonathan Giannuzzi)
+ - Improve connection pooling efficiency (Jonathan Giannuzzi)
+- WebDAV
+ - Retry propfind on 425 status (Jörn Friedrich Dreyer)
+ - Add an ownCloud Infinite Scale vendor that enables tus chunked
+ upload support (Klaas Freitag)
+
+v1.69.3 - 2025-05-21
+
+See commits
+
+- Bug Fixes
+ - build: Reapply update github.com/golang-jwt/jwt/v5 from 5.2.1 to
+ 5.2.2 to fix CVE-2025-30204 (dependabot[bot])
+ - build: Update github.com/ebitengine/purego to work around bug in
+ go1.24.3 (Nick Craig-Wood)
+
+v1.69.2 - 2025-05-01
+
+See commits
+
+- Bug fixes
+ - accounting: Fix percentDiff calculation (Anagh Kumar
+ Baranwal)
+ - build
+ - Update github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2 to
+ fix CVE-2025-30204 (dependabot[bot])
+ - Update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to
+ fix CVE-2025-30204 (dependabot[bot])
+ - Update golang.org/x/crypto to v0.35.0 to fix CVE-2025-22869
+ (Nick Craig-Wood)
+ - Update golang.org/x/net from 0.36.0 to 0.38.0 to fix
+ CVE-2025-22870 (dependabot[bot])
+ - Update golang.org/x/net to 0.36.0 to fix CVE-2025-22869
+ (dependabot[bot])
+ - Stop building with go < go1.23 as security updates forbade
+ it (Nick Craig-Wood)
+ - Fix docker plugin build (Anagh Kumar Baranwal)
+ - cmd: Fix crash if rclone is invoked without any arguments (Janne
+ Hellsten)
+ - config: Read configuration passwords from stdin even when
+ terminated with EOF (Samantha Bowen)
+ - doc fixes (Andrew Kreimer, Danny Garside, eccoisle, Ed
+ Craig-Wood, emyarod, jack, Jugal Kishore, Markus Gerstel,
+ Michael Kebe, Nick Craig-Wood, simonmcnair, simwai, Zachary
+ Vorhies)
+ - fs: Fix corruption of SizeSuffix with "B" suffix in config (eg
+ --min-size) (Nick Craig-Wood)
+ - lib/http: Fix race between Serve() and Shutdown() (Nick
+ Craig-Wood)
+ - object: Fix memory object out of bounds Seek (Nick Craig-Wood)
+ - operations: Fix call fmt.Errorf with wrong err (alingse)
+ - rc
+ - Disable the metrics server when running rclone rc
+ (hiddenmarten)
+ - Fix debug/* commands not being available over unix sockets
+ (Nick Craig-Wood)
+ - serve nfs: Fix unlikely crash (Nick Craig-Wood)
+ - stats: Fix the speed not getting updated after a pause in the
+ processing (Anagh Kumar Baranwal)
+ - sync
+ - Fix cpu spinning when empty directory finding with leading
+ slashes (Nick Craig-Wood)
+ - Copy dir modtimes even when copyEmptySrcDirs is false
+ (ll3006)
+- VFS
+ - Fix directory cache serving stale data (Lorenz Brun)
+ - Fix inefficient directory caching when directory reads are slow
+ (huanghaojun)
+ - Fix integration test failures (Nick Craig-Wood)
+- Drive
+ - Metadata: fix error when setting copy-requires-writer-permission
+ on a folder (Nick Craig-Wood)
+- Dropbox
+ - Retry link without expiry (Dave Vasilevsky)
+- HTTP
+ - Correct root if definitely pointing to a file (nielash)
+- Iclouddrive
+ - Fix so created files are writable (Ben Alex)
+- Onedrive
+ - Fix metadata ordering in permissions (Nick Craig-Wood)
+
+v1.69.1 - 2025-02-14
+
+See commits
+
+- Bug Fixes
+ - lib/oauthutil: Fix redirect URL mismatch errors (Nick
+ Craig-Wood)
+ - bisync: Fix listings missing concurrent modifications (nielash)
+ - serve s3: Fix list objects encoding-type (Nick Craig-Wood)
+ - fs: Fix confusing "didn't find section in config file" error
+ (Nick Craig-Wood)
+ - doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt
+ Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
+ - build: Added parallel docker builds and caching for go build in
+ the container (Anagh Kumar Baranwal)
+- VFS
+ - Fix the cache failing to upload symlinks when --links was
+ specified (Nick Craig-Wood)
+ - Fix race detected by race detector (Nick Craig-Wood)
+ - Close the change notify channel on Shutdown (izouxv)
+- B2
+ - Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
+- Iclouddrive
+ - Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
+- Onedrive
+ - Mark German (de) region as deprecated (Nick Craig-Wood)
+- S3
+ - Added new storage class to magalu provider (Bruno Fernandes)
+ - Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
+ - Add latest Linode Object Storage endpoints (jbagwell-akamai)
+
v1.69.0 - 2025-01-12
See commits
@@ -56202,7 +59042,7 @@ See commits
sockets in http servers (Moises Lima)
- This was making it impossible to use unix sockets with an
proxy
- - This might now cause rclone to need authenticaton where it
+ - This might now cause rclone to need authentication where it
didn't before
- oauthutil: add support for OAuth client credential flow (Martin
Hassack, Nick Craig-Wood)
@@ -57147,7 +59987,7 @@ See commits
- doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri
Papadopoulos, Herby Gillot, Joda Stößer, Manoj Ghosh, Nick
Craig-Wood)
- - Implement --metadata-mapper to transform metatadata with a user
+ - Implement --metadata-mapper to transform metadata with a user
supplied program (Nick Craig-Wood)
- Add ChunkWriterDoesntSeek feature flag and set it for b2 (Nick
Craig-Wood)
@@ -57309,7 +60149,7 @@ See commits
- B2
- Fix multipart upload: corrupted on transfer: sizes differ XXX vs
0 (Nick Craig-Wood)
- - Fix locking window when getting mutipart upload URL (Nick
+ - Fix locking window when getting multipart upload URL (Nick
Craig-Wood)
- Fix server side copies greater than 4GB (Nick Craig-Wood)
- Fix chunked streaming uploads (Nick Craig-Wood)
@@ -63640,6 +66480,18 @@ How do I configure rclone on a remote / headless box with no browser?
This has now been documented in its own remote setup page.
+How can I get rid of the "Config file not found" notice?
+
+If you see a notice like 'NOTICE: Config file "rclone.conf" not found',
+this means you have not configured any remotes.
+
+If you need to configure a remote, see the config help docs.
+
+If you are using rclone entirely with on-the-fly remotes, you can
+create an empty config file to get rid of this notice, for example:
+
+ rclone config touch
+
Can rclone sync directly from drive to s3
Rclone can sync between two remote cloud storage systems just fine.
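+
+For example (the remote names are illustrative):
+
+    rclone sync drive:Folder s3:bucket/Folder
+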
@@ -63839,10 +66691,18 @@ garbage collector work harder, reducing memory size at the expense of
CPU usage.
The most common cause of rclone using lots of memory is a single
-directory with millions of files in. Rclone has to load this entirely
-into memory as rclone objects. Each rclone object takes 0.5k-1k of
-memory. There is a workaround for this which involves a bit of
-scripting.
+directory with millions of files in.
+
+Before rclone v1.70, rclone had to load this entirely into memory as
+rclone objects. Each rclone object takes 0.5k-1k of memory. There is
+a workaround for this which involves a bit of scripting.
+
+However, with rclone v1.70 and later, rclone will automatically save
+directory entries to disk when a directory with more than --list-cutoff
+(1,000,000 by default) entries is detected.
+
+From v1.70 rclone also has the --max-buffer-memory flag which helps
+particularly when multi-thread transfers are using too much memory.
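+
+For example, to cap buffer memory during a sync (the value is
+illustrative, not a recommendation):
+
+    rclone sync src: dst: --max-buffer-memory 1G
+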
Rclone changes fullwidth Unicode punctuation marks in file names
@@ -64699,7 +67559,6 @@ email addresses removed from here need to be added to bin/.ignore-emails to make
- ben-ba benjamin.brauner@gmx.de
- Eli Orzitzer e_orz@yahoo.com
- Anthony Metzidis anthony.metzidis@gmail.com
-- emyarod afw5059@gmail.com
- keongalvin keongalvin@gmail.com
- rarspace01 rarspace01@users.noreply.github.com
- Paul Stern paulstern45@gmail.com
@@ -64817,6 +67676,65 @@ email addresses removed from here need to be added to bin/.ignore-emails to make
- TAKEI Yuya 853320+takei-yuya@users.noreply.github.com
- Francesco Frassinelli fraph24@gmail.com
francesco.frassinelli@nina.no
+- Matt Ickstadt mattico8@gmail.com matt@beckenterprises.com
+- Spencer McCullough mccullough.spencer@gmail.com
+- Jonathan Giannuzzi jonathan@giannuzzi.me
+- Christoph Berger github@christophberger.com
+- Tim White tim.white@su.org.au
+- Robin Schneider robin.schneider@stackit.cloud
+- izouxv izouxv@users.noreply.github.com
+- Moises Lima mozlima@users.noreply.github.com
+- Bruno Fernandes bruno.fernandes1996@hotmail.com
+- Corentin Barreau corentin@archive.org
+- hiddenmarten hiddenmarten@gmail.com
+- Trevor Starick trevor.starick@gmail.com
+- b-wimmer 132347192+b-wimmer@users.noreply.github.com
+- Jess jess@jessie.cafe
+- Zachary Vorhies zachvorhies@protonmail.com
+- Alexander Minbaev minbaev@gmail.com
+- Joel K Biju joelkbiju18@gmail.com
+- ll3006 doublel3006@gmail.com
+- jbagwell-akamai 113531113+jbagwell-akamai@users.noreply.github.com
+- Michael Kebe michael.kebe@gmail.com
+- Lorenz Brun lorenz@brun.one
+- Dave Vasilevsky djvasi@gmail.com dave@vasilevsky.ca
+- luzpaz luzpaz@users.noreply.github.com
+- jack 9480542+jackusm@users.noreply.github.com
+- Jörn Friedrich Dreyer jfd@butonic.de
+- alingse alingse@foxmail.com
+- Fernando Fernández ferferga@hotmail.com
+- eccoisle 167755281+eccoisle@users.noreply.github.com
+- Klaas Freitag kraft@freisturz.de
+- Danny Garside dannygarside@outlook.com
+- Samantha Bowen sam@bbowen.net
+- simonmcnair 101189766+simonmcnair@users.noreply.github.com
+- huanghaojun jasen.huang@ugreen.com
+- Enduriel endur1el@protonmail.com
+- Markus Gerstel markus.gerstel@osirium.com
+- simwai 16225108+simwai@users.noreply.github.com
+- Ben Alex ben.alex@acegi.com.au
+- Klaas Freitag opensource@freisturz.de klaas.freitag@kiteworks.com
+- Andrew Kreimer algonell@gmail.com
+- Ed Craig-Wood 138211970+edc-w@users.noreply.github.com
+- Christian Richter crichter@owncloud.com
+ 1058116+dragonchaser@users.noreply.github.com
+- Ralf Haferkamp r.haferkamp@opencloud.eu
+- Jugal Kishore me@devjugal.com
+- Tho Neyugn nguyentruongtho@users.noreply.github.com
+- Ben Boeckel mathstuf@users.noreply.github.com
+- Clément Wehrung cwehrung@nurves.com
+- Jeff Geerling geerlingguy@mac.com
+- Germán Casares german.casares.march+github@gmail.com
+- fhuber florian.huber@noris.de
+- wbulot wbulot@hotmail.com
+- Jeremy Daer jeremydaer@gmail.com
+- Oleksiy Stashok ostashok@tesla.com
+- PrathameshLakawade prathameshlakawade@gmail.com
+- Nathanael Demacon 7271496+quantumsheep@users.noreply.github.com
+- ahxxm ahxxm@users.noreply.github.com
+- Flora Thiebaut johann.thiebaut@gmail.com
+- kingston125 support@filelu.com
+- Ser-Bul 30335009+Ser-Bul@users.noreply.github.com
Contact the rclone project
diff --git a/bin/make_manual.py b/bin/make_manual.py
index 962a4731b..f83e95b8a 100755
--- a/bin/make_manual.py
+++ b/bin/make_manual.py
@@ -41,6 +41,7 @@ docs = [
"crypt.md",
"compress.md",
"combine.md",
+ "doi.md",
"dropbox.md",
"filefabric.md",
"filelu.md",
diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md
index 8f74e072a..9e8970438 100644
--- a/docs/content/azureblob.md
+++ b/docs/content/azureblob.md
@@ -719,6 +719,65 @@ Properties:
- Type: int
- Default: 16
+#### --azureblob-copy-cutoff
+
+Cutoff for switching to multipart copy.
+
+Any files larger than this that need to be server-side copied will be
+copied in chunks of chunk_size using the put block list API.
+
+Files smaller than this limit will be copied with the Copy Blob API.
+
+Properties:
+
+- Config: copy_cutoff
+- Env Var: RCLONE_AZUREBLOB_COPY_CUTOFF
+- Type: SizeSuffix
+- Default: 8Mi
+
+#### --azureblob-copy-concurrency
+
+Concurrency for multipart copy.
+
+This is the number of chunks of the same file that are copied
+concurrently.
+
+These chunks are not buffered in memory and Microsoft recommends
+setting this value to greater than 1000 in the azcopy documentation.
+
+https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-optimize#increase-concurrency
+
+In tests, copy speed increases almost linearly with copy
+concurrency.
+
+Properties:
+
+- Config: copy_concurrency
+- Env Var: RCLONE_AZUREBLOB_COPY_CONCURRENCY
+- Type: int
+- Default: 512
+
+#### --azureblob-use-copy-blob
+
+Whether to use the Copy Blob API when copying to the same storage account.
+
+If true (the default) then rclone will use the Copy Blob API for
+copies to the same storage account even when the size is above the
+copy_cutoff.
+
+Rclone assumes that the same storage account means the same config
+and does not check for the same storage account in different configs.
+
+There should be no need to change this value.
+
+
+Properties:
+
+- Config: use_copy_blob
+- Env Var: RCLONE_AZUREBLOB_USE_COPY_BLOB
+- Type: bool
+- Default: true
+
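+For example, to raise the copy concurrency towards the level
+Microsoft suggests (the remote name and value are illustrative):
+
+```
+rclone copy azureblob:container/src azureblob:container/dst --azureblob-copy-concurrency 1024
+```
+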
#### --azureblob-list-chunk
Size of blob list.
diff --git a/docs/content/azurefiles.md b/docs/content/azurefiles.md
index 54bb1bd3a..a08e8d9ea 100644
--- a/docs/content/azurefiles.md
+++ b/docs/content/azurefiles.md
@@ -615,6 +615,42 @@ Properties:
- Type: string
- Required: false
+#### --azurefiles-disable-instance-discovery
+
+Skip requesting Microsoft Entra instance metadata
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+It determines whether rclone requests Microsoft Entra instance
+metadata from `https://login.microsoft.com/` before
+authenticating.
+Setting this to true will skip this request, making you responsible
+for ensuring the configured authority is valid and trustworthy.
+
+
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
+#### --azurefiles-use-az
+
+Use Azure CLI tool az for authentication
+Set to use the [Azure CLI tool az](https://learn.microsoft.com/en-us/cli/azure/)
+as the sole means of authentication.
+Setting this can be useful if you wish to use the az CLI on a host with
+a System Managed Identity that you do not want to use.
+Don't set env_auth at the same time.
+
+
+Properties:
+
+- Config: use_az
+- Env Var: RCLONE_AZUREFILES_USE_AZ
+- Type: bool
+- Default: false
+
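+For example, to authenticate solely via the Azure CLI (the remote
+name is illustrative):
+
+```
+az login
+rclone lsd azfiles: --azurefiles-use-az
+```
+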
#### --azurefiles-endpoint
Endpoint for the service.
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index 4eb16138b..adfb44089 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -5,6 +5,122 @@ description: "Rclone Changelog"
# Changelog
+## v1.70.0 - 2025-06-17
+
+[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.70.0)
+
+* New backends
+ * [DOI](/doi/) (Flora Thiebaut)
+ * [FileLu](/filelu/) (kingston125)
+ * New S3 providers:
+ * [MEGA S4](/s3/#mega) (Nick Craig-Wood)
+ * [Pure Storage FlashBlade](/s3/#pure-storage-flashblade) (Jeremy Daer)
+* New commands
+ * [convmv](/commands/rclone_convmv/): for moving and transforming files (nielash)
+* New Features
+ * Add [`--max-connections`](/docs/#max-connections-n) to control maximum backend concurrency (Nick Craig-Wood)
+ * Add [`--max-buffer-memory`](/docs/#max-buffer-memory) to limit total buffer memory usage (Nick Craig-Wood)
+ * Add transform library and [`--name-transform`](/docs/#name-transform-command-xxxx) flag (nielash)
+ * sync: Implement [`--list-cutoff`](/docs/#list-cutoff) to allow on disk sorting for reduced memory use (Nick Craig-Wood)
+ * accounting: Add listed stat for number of directory entries listed (Nick Craig-Wood)
+ * backend: Skip hash calculation when the hashType is None (Oleksiy Stashok)
+ * build
+ * Update to go1.24 and make go1.22 the minimum required version (Nick Craig-Wood)
+ * Disable docker builds on PRs & add missing dockerfile changes (Anagh Kumar Baranwal)
+ * Modernize Go usage (Nick Craig-Wood)
+ * Update all dependencies (Nick Craig-Wood)
+ * cmd/authorize: Show required arguments in help text (simwai)
+ * cmd/config: add `--no-output` option (Jess)
+ * cmd/gitannex
+ * Tweak parsing of "rcloneremotename" config (Dan McArdle)
+ * Permit remotes with options (Dan McArdle)
+ * Reject unknown layout modes in INITREMOTE (Dan McArdle)
+ * docker image: Add label org.opencontainers.image.source for release notes in Renovate dependency updates (Robin Schneider)
+ * doc fixes (albertony, Andrew Kreimer, Ben Boeckel, Christoph Berger, Danny Garside, Dimitri Papadopoulos, eccoisle, Ed Craig-Wood, Fernando Fernández, jack, Jeff Geerling, Jugal Kishore, kingston125, luzpaz, Markus Gerstel, Matt Ickstadt, Michael Kebe, Nick Craig-Wood, PrathameshLakawade, Ser-Bul, simonmcnair, Tim White, Zachary Vorhies)
+ * filter:
+ * Add `--hash-filter` to deterministically select a subset of files (Nick Craig-Wood)
+ * Show `--min-size` and `--max-size` in `--dump` filters (Nick Craig-Wood)
+ * hash: Add SHA512 support for file hashes (Enduriel)
+ * http servers: Add `--user-from-header` to use for authentication (Moises Lima)
+ * lib/batcher: Deprecate unused option: batch_commit_timeout (Dan McArdle)
+ * log:
+ * Remove github.com/sirupsen/logrus and replace with log/slog (Nick Craig-Wood)
+ * Add `--windows-event-log-level` to support Windows Event Log (Nick Craig-Wood)
+ * rc
+ * Add `short` parameter to `core/stats` to not return transferring and checking (Nick Craig-Wood)
+ * In `options/info` make FieldName contain a "." if it should be nested (Nick Craig-Wood)
+ * Add rc control for serve commands (Nick Craig-Wood)
+ * rcserver: Improve content-type check (Jonathan Giannuzzi)
+ * serve nfs
+ * Update docs to note Windows is not supported (Zachary Vorhies)
+ * Change the format of `--nfs-cache-type symlink` file handles (Nick Craig-Wood)
+ * Make metadata files have special file handles (Nick Craig-Wood)
+ * touch: Make touch obey `--transfers` (Nick Craig-Wood)
+ * version: Add `--deps` flag to show dependencies and other build info (Nick Craig-Wood)
+* Bug Fixes
+ * serve s3:
+ * Fix ListObjectsV2 response (fhuber)
+ * Remove redundant handler initialization (Tho Neyugn)
+ * stats: Fix goroutine leak and improve stats accounting process (Nathanael Demacon)
+* VFS
+ * Add `--vfs-metadata-extension` to expose metadata sidecar files (Nick Craig-Wood)
+* Azure Blob
+ * Add support for `x-ms-tags` header (Trevor Starick)
+ * Cleanup uncommitted blocks on upload errors (Nick Craig-Wood)
+ * Speed up server side copies for small files (Nick Craig-Wood)
+ * Implement multipart server side copy (Nick Craig-Wood)
+ * Remove uncommitted blocks on InvalidBlobOrBlock error (Nick Craig-Wood)
+ * Fix handling of objects with // in (Nick Craig-Wood)
+ * Handle retry error codes more carefully (Nick Craig-Wood)
+ * Fix errors not being retried when doing single part copy (Nick Craig-Wood)
+ * Fix multipart server side copies of 0 sized files (Nick Craig-Wood)
+* Azurefiles
+ * Add `--azurefiles-use-az` and `--azurefiles-disable-instance-discovery` (b-wimmer)
+* B2
+ * Add SkipDestructive handling to backend commands (Pat Patterson)
+ * Use file id from listing when not presented in headers (ahxxm)
+* Cloudinary
+ * Automatically add/remove known media files extensions (yuval-cloudinary)
+ * Var naming convention (yuval-cloudinary)
+* Drive
+ * Added `backend moveid` command (Spencer McCullough)
+* Dropbox
+ * Support Dropbox Paper (Dave Vasilevsky)
+* FTP
+ * Add `--ftp-http-proxy` to connect via HTTP CONNECT proxy
+* Gofile
+ * Update to use new direct upload endpoint (wbulot)
+* Googlephotos
+ * Update read only and read write scopes to meet Google's requirements. (Germán Casares)
+* Iclouddrive
+ * Fix panic and files potentially downloaded twice (Clément Wehrung)
+* Internetarchive
+ * Add `--internetarchive-metadata="key=value"` for setting item metadata (Corentin Barreau)
+* Onedrive
+ * Fix "The upload session was not found" errors (Nick Craig-Wood)
+ * Re-add `--onedrive-upload-cutoff` flag (Nick Craig-Wood)
+ * Fix crash if no metadata was updated (Nick Craig-Wood)
+* Opendrive
+ * Added `--opendrive-access` flag to handle permissions (Joel K Biju)
+* Pcloud
+ * Fix "Access denied. You do not have permissions to perform this operation" on large uploads (Nick Craig-Wood)
+* S3
+ * Fix handling of objects with // in (Nick Craig-Wood)
+ * Add IBM IAM signer (Alexander Minbaev)
+ * Split the GCS quirks into `--s3-use-x-id` and `--s3-sign-accept-encoding` (Nick Craig-Wood)
+ * Implement paged listing interface ListP (Nick Craig-Wood)
+ * Add Pure Storage FlashBlade provider support (Jeremy Daer)
+ * Require custom endpoint for Lyve Cloud v2 support (PrathameshLakawade)
+ * MEGA S4 support (Nick Craig-Wood)
+* SFTP
+ * Add `--sftp-http-proxy` to connect via HTTP CONNECT proxy (Nick Craig-Wood)
+* Smb
+ * Add support for kerberos authentication (Jonathan Giannuzzi)
+ * Improve connection pooling efficiency (Jonathan Giannuzzi)
+* WebDAV
+ * Retry propfind on 425 status (Jörn Friedrich Dreyer)
+ * Add an ownCloud Infinite Scale vendor that enables tus chunked upload support (Klaas Freitag)
+
## v1.69.3 - 2025-05-21
[See commits](https://github.com/rclone/rclone/compare/v1.69.2...v1.69.3)
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 0921995de..fabe44fa9 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -37,6 +37,8 @@ rclone [flags]
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
+ --azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
@@ -60,6 +62,7 @@ rclone [flags]
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-az Use Azure CLI tool az for authentication
+ --azureblob-use-copy-blob Whether to use the Copy Blob API when copying to the same storage account (default true)
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -72,6 +75,7 @@ rclone [flags]
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
--azurefiles-connection-string string Azure Files Connection String
--azurefiles-description string Description of the remote
+ --azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
--azurefiles-endpoint string Endpoint for the service
--azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -86,6 +90,7 @@ rclone [flags]
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --azurefiles-use-az Use Azure CLI tool az for authentication
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -160,12 +165,14 @@ rclone [flags]
--chunker-remote string Remote to chunk/unchunk
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
+ --cloudinary-adjust-media-files-extensions Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems (default true)
--cloudinary-api-key string Cloudinary API Key
--cloudinary-api-secret string Cloudinary API Secret
--cloudinary-cloud-name string Cloudinary Environment Name
--cloudinary-description string Description of the remote
--cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-media-extensions stringArray Cloudinary supported media extensions (default 3ds,3g2,3gp,ai,arw,avi,avif,bmp,bw,cr2,cr3,djvu,dng,eps3,fbx,flif,flv,gif,glb,gltf,hdp,heic,heif,ico,indd,jp2,jpe,jpeg,jpg,jxl,jxr,m2ts,mov,mp4,mpeg,mts,mxf,obj,ogv,pdf,ply,png,psd,svg,tga,tif,tiff,ts,u3ma,usdz,wdp,webm,webp,wmv)
--cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
--cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--color AUTO|NEVER|ALWAYS When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO)
@@ -204,6 +211,10 @@ rclone [flags]
--disable string Disable a comma separated list of features (use --disable help to see a list)
--disable-http-keep-alives Disable HTTP keep-alives and use each connection once
--disable-http2 Disable HTTP/2 in the global transport
+ --doi-description string Description of the remote
+ --doi-doi string The DOI or the doi.org URL
+ --doi-doi-resolver-api-url string The URL of the DOI resolver API to use
+ --doi-provider string DOI provider
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -255,7 +266,6 @@ rclone [flags]
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
- --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -265,11 +275,14 @@ rclone [flags]
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-export-formats CommaSepList Comma separated list of preferred formats for exporting files (default html,md)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
+ --dropbox-show-all-exports Show all exportable files in listings
+ --dropbox-skip-exports Skip exportable files in all listings
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
-n, --dry-run Do a trial run with no permanent changes
@@ -298,6 +311,9 @@ rclone [flags]
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
+ --filelu-description string Description of the remote
+ --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
+ --filelu-key string Your FileLu Rclone key from My Account
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
--filescom-api-key string The API key used to authenticate with Files.com
@@ -364,7 +380,6 @@ rclone [flags]
--gofile-list-chunk int Number of items to list in each call (default 1000)
--gofile-root-folder-id string ID of the root folder
--gphotos-auth-url string Auth server URL
- --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -380,6 +395,7 @@ rclone [flags]
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
--gphotos-token string OAuth Access Token as a JSON blob
--gphotos-token-url string Token server url
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default)
--hasher-description string Description of the remote
--hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1)
@@ -449,6 +465,8 @@ rclone [flags]
--internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
+ --internetarchive-item-derive Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload (default true)
+ --internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
@@ -476,6 +494,7 @@ rclone [flags]
--linkbox-description string Description of the remote
--linkbox-token string Token from https://www.linkbox.to/admin/account
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-description string Description of the remote
@@ -491,7 +510,7 @@ rclone [flags]
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
+ --log-format Bits Comma separated list of log format options (default date,time)
--log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--low-level-retries int Number of low level retries to do (default 10)
@@ -512,6 +531,8 @@ rclone [flags]
--mailru-user string User name (usually email)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
+ --max-buffer-memory SizeSuffix If set, don't allocate more than this amount of memory as buffers (default off)
+ --max-connections int Maximum number of simultaneous backend API connections, 0 for unlimited
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--max-depth int If set limits the recursion depth to this (default -1)
@@ -553,6 +574,7 @@ rclone [flags]
--metrics-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--metrics-template string User-specified template
--metrics-user string User name for authentication
+ --metrics-user-from-header string User name from a defined HTTP header
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--modify-window Duration Max time diff to be considered the same (default 1ns)
@@ -560,6 +582,7 @@ rclone [flags]
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--netstorage-account string Set the NetStorage account name
--netstorage-description string Description of the remote
--netstorage-host string Domain+path of NetStorage host to connect to
@@ -601,6 +624,7 @@ rclone [flags]
--onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
+ --onedrive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default off)
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--oos-compartment string Specify compartment OCID, if you need to list buckets
@@ -626,6 +650,7 @@ rclone [flags]
--oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard")
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --opendrive-access string Files and folders will be uploaded with this access permission (default "private")
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-description string Description of the remote
--opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -736,6 +761,7 @@ rclone [flags]
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
+ --rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
@@ -760,6 +786,8 @@ rclone [flags]
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
+ --s3-ibm-api-key string IBM API Key to be used to obtain IAM token
+ --s3-ibm-resource-instance-id string IBM service instance id
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
@@ -780,6 +808,7 @@ rclone [flags]
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
+ --s3-sign-accept-encoding Tristate Set if rclone should include Accept-Encoding as part of the signature (default unset)
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
@@ -796,6 +825,7 @@ rclone [flags]
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-use-unsigned-payload Tristate Whether to use an unsigned payload in PutObject (default unset)
+ --s3-use-x-id Tristate Set if rclone should add x-id URL parameters (default unset)
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-version-deleted Show deleted file markers when using versions
@@ -822,6 +852,7 @@ rclone [flags]
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-host-key-algorithms SpaceSepList Space separated list of host key algorithms, ordered by preference
+ --sftp-http-proxy string URL for HTTP CONNECT proxy
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-exchange SpaceSepList Space separated list of key exchange algorithms, ordered by preference
--sftp-key-file string Path to PEM-encoded private key file
@@ -877,6 +908,7 @@ rclone [flags]
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
--smb-spn string Service principal name
+ --smb-use-kerberos Use Kerberos authentication
--smb-user string SMB username (default "$USER")
--stats Duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
@@ -965,7 +997,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.70.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect
@@ -1017,6 +1049,7 @@ rclone [flags]
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
+* [rclone convmv](/commands/rclone_convmv/) - Convert file and directory names in place.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping identical files.
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping identical files.
* [rclone copyurl](/commands/rclone_copyurl/) - Copy the contents of the URL supplied content to dest:path.
diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md
index abdaff9d2..9e9fbcfa1 100644
--- a/docs/content/commands/rclone_authorize.md
+++ b/docs/content/commands/rclone_authorize.md
@@ -14,13 +14,18 @@ Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
+The command requires 1-3 arguments:
+ - fs name (e.g., "drive", "s3", etc.)
+ - Either a base64 encoded JSON blob obtained from a previous rclone config session
+ - Or a client_id and client_secret pair obtained from the remote service
+
Use --auth-no-open-browser to prevent rclone to open auth
link in default browser automatically.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
```
-rclone authorize [flags]
+rclone authorize [base64_json_blob | client_id client_secret] [flags]
```
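+
+For example, to authorize a drive remote with a custom OAuth client
+(the ID and secret below are placeholders):
+
+```
+rclone authorize drive my_client_id my_client_secret
+```
+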
## Options
diff --git a/docs/content/commands/rclone_bisync.md b/docs/content/commands/rclone_bisync.md
index 4f7c80d31..d7fb8ea2a 100644
--- a/docs/content/commands/rclone_bisync.md
+++ b/docs/content/commands/rclone_bisync.md
@@ -93,6 +93,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -129,6 +130,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md
index 6ec5666a0..71d5a9814 100644
--- a/docs/content/commands/rclone_cat.md
+++ b/docs/content/commands/rclone_cat.md
@@ -74,6 +74,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md
index 9647280b5..2e3e80852 100644
--- a/docs/content/commands/rclone_check.md
+++ b/docs/content/commands/rclone_check.md
@@ -96,6 +96,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_checksum.md b/docs/content/commands/rclone_checksum.md
index 9f6eaab39..44cd51bb8 100644
--- a/docs/content/commands/rclone_checksum.md
+++ b/docs/content/commands/rclone_checksum.md
@@ -82,6 +82,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_config_create.md b/docs/content/commands/rclone_config_create.md
index 2832b70a8..22eeb7d85 100644
--- a/docs/content/commands/rclone_config_create.md
+++ b/docs/content/commands/rclone_config_create.md
@@ -123,6 +123,7 @@ rclone config create name type [key value]* [flags]
--continue Continue the configuration process with an answer
-h, --help help for create
--no-obscure Force any passwords not to be obscured
+ --no-output Don't provide any output
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
diff --git a/docs/content/commands/rclone_config_encryption_set.md b/docs/content/commands/rclone_config_encryption_set.md
index b02dff900..780c086dc 100644
--- a/docs/content/commands/rclone_config_encryption_set.md
+++ b/docs/content/commands/rclone_config_encryption_set.md
@@ -21,12 +21,12 @@ password to re-encrypt the config.
When `--password-command` is called to change the password then the
environment variable `RCLONE_PASSWORD_CHANGE=1` will be set. So if
-changing passwords programatically you can use the environment
+changing passwords programmatically you can use the environment
variable to distinguish which password you must supply.
Alternatively you can remove the password first (with `rclone config
encryption remove`), then set it again with this command which may be
-easier if you don't mind the unecrypted config file being on the disk
+easier if you don't mind the unencrypted config file being on the disk
briefly.
diff --git a/docs/content/commands/rclone_config_update.md b/docs/content/commands/rclone_config_update.md
index e4a160b6a..af9660db2 100644
--- a/docs/content/commands/rclone_config_update.md
+++ b/docs/content/commands/rclone_config_update.md
@@ -123,6 +123,7 @@ rclone config update name [key value]+ [flags]
--continue Continue the configuration process with an answer
-h, --help help for update
--no-obscure Force any passwords not to be obscured
+ --no-output Don't provide any output
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
diff --git a/docs/content/commands/rclone_convmv.md b/docs/content/commands/rclone_convmv.md
new file mode 100644
index 000000000..d23125618
--- /dev/null
+++ b/docs/content/commands/rclone_convmv.md
@@ -0,0 +1,400 @@
+---
+title: "rclone convmv"
+description: "Convert file and directory names in place."
+versionIntroduced: v1.70
+# autogenerated - DO NOT EDIT, instead edit the source code in cmd/convmv/ and as part of making a release run "make commanddocs"
+---
+# rclone convmv
+
+Convert file and directory names in place.
+
+## Synopsis
+
+
+convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations.
+
+| Command | Description |
+|------|------|
+| `--name-transform prefix=XXXX` | Prepends XXXX to the file name. |
+| `--name-transform suffix=XXXX` | Appends XXXX to the file name after the extension. |
+| `--name-transform suffix_keep_extension=XXXX` | Appends XXXX to the file name while preserving the original file extension. |
+| `--name-transform trimprefix=XXXX` | Removes XXXX if it appears at the start of the file name. |
+| `--name-transform trimsuffix=XXXX` | Removes XXXX if it appears at the end of the file name. |
+| `--name-transform regex=/pattern/replacement/` | Applies a regex-based transformation. |
+| `--name-transform replace=old:new` | Replaces occurrences of old with new in the file name. |
+| `--name-transform date={YYYYMMDD}` | Appends or prefixes the specified date format. |
+| `--name-transform truncate=N` | Truncates the file name to a maximum of N characters. |
+| `--name-transform base64encode` | Encodes the file name in Base64. |
+| `--name-transform base64decode` | Decodes a Base64-encoded file name. |
+| `--name-transform encoder=ENCODING` | Converts the file name to the specified encoding (e.g., ISO-8859-1, Windows-1252, Macintosh). |
+| `--name-transform decoder=ENCODING` | Decodes the file name from the specified encoding. |
+| `--name-transform charmap=MAP` | Applies a character mapping transformation. |
+| `--name-transform lowercase` | Converts the file name to lowercase. |
+| `--name-transform uppercase` | Converts the file name to UPPERCASE. |
+| `--name-transform titlecase` | Converts the file name to Title Case. |
+| `--name-transform ascii` | Strips non-ASCII characters. |
+| `--name-transform url` | URL-encodes the file name. |
+| `--name-transform nfc` | Converts the file name to NFC Unicode normalization form. |
+| `--name-transform nfd` | Converts the file name to NFD Unicode normalization form. |
+| `--name-transform nfkc` | Converts the file name to NFKC Unicode normalization form. |
+| `--name-transform nfkd` | Converts the file name to NFKD Unicode normalization form. |
+| `--name-transform command=/path/to/my/program` | Executes an external program to transform file names. |
+
+
+Conversion modes:
+```
+none
+nfc
+nfd
+nfkc
+nfkd
+replace
+prefix
+suffix
+suffix_keep_extension
+trimprefix
+trimsuffix
+index
+date
+truncate
+base64encode
+base64decode
+encoder
+decoder
+ISO-8859-1
+Windows-1252
+Macintosh
+charmap
+lowercase
+uppercase
+titlecase
+ascii
+url
+regex
+command
+```
+Char maps:
+```
+
+IBM-Code-Page-037
+IBM-Code-Page-437
+IBM-Code-Page-850
+IBM-Code-Page-852
+IBM-Code-Page-855
+Windows-Code-Page-858
+IBM-Code-Page-860
+IBM-Code-Page-862
+IBM-Code-Page-863
+IBM-Code-Page-865
+IBM-Code-Page-866
+IBM-Code-Page-1047
+IBM-Code-Page-1140
+ISO-8859-1
+ISO-8859-2
+ISO-8859-3
+ISO-8859-4
+ISO-8859-5
+ISO-8859-6
+ISO-8859-7
+ISO-8859-8
+ISO-8859-9
+ISO-8859-10
+ISO-8859-13
+ISO-8859-14
+ISO-8859-15
+ISO-8859-16
+KOI8-R
+KOI8-U
+Macintosh
+Macintosh-Cyrillic
+Windows-874
+Windows-1250
+Windows-1251
+Windows-1252
+Windows-1253
+Windows-1254
+Windows-1255
+Windows-1256
+Windows-1257
+Windows-1258
+X-User-Defined
+```
+Encoding masks:
+```
+ Asterisk
+ BackQuote
+ BackSlash
+ Colon
+ CrLf
+ Ctl
+ Del
+ Dollar
+ Dot
+ DoubleQuote
+ Exclamation
+ Hash
+ InvalidUtf8
+ LeftCrLfHtVt
+ LeftPeriod
+ LeftSpace
+ LeftTilde
+ LtGt
+ None
+ Percent
+ Pipe
+ Question
+ Raw
+ RightCrLfHtVt
+ RightPeriod
+ RightSpace
+ Semicolon
+ SingleQuote
+ Slash
+ SquareBracket
+```
+Examples:
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
+// Output: STORIES/THE QUICK BROWN FOX!.TXT
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow"
+// Output: stories/The Slow Brown Turtle!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode"
+// Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0
+```
+
+```
+rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode"
+// Output: stories/The Quick Brown Fox!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc"
+// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd"
+// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii"
+// Output: stories/The Quick Brown Fox!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt"
+// Output: stories/The Quick Brown Fox!
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_"
+// Output: OLD_stories/OLD_The Quick Brown Fox!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7"
+// Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket"
+// Output: stories/The Quick Brown Fox: A Memoir [draft].txt
+```
+
+```
+rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21"
+// Output: stories/The Quick Brown 🦊 Fox
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
+// Output: stories/The Quick Brown Fox!.txt
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
+// Output: stories/The Quick Brown Fox!-20250617
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
+// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM
+```
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
+// Output: ababababababab/ababab ababababab ababababab ababab!abababab
+```
+
+
+
+Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
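+
+For example, trimming one prefix before adding another (an illustrative file
+name, assuming the transform semantics described above):
+
+```
+rclone convmv "stories/OLD_The Quick Brown Fox!.txt" --name-transform "file,trimprefix=OLD_" --name-transform "file,prefix=NEW_"
+// Output: stories/NEW_The Quick Brown Fox!.txt
+```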
+
+The `--name-transform` flag is also available in `sync`, `copy`, and `move`.
+
+## Files vs Directories
+
+By default `--name-transform` will only apply to file names. This means only the leaf file name will be transformed.
+However, some of the transforms are better applied to the whole path or just to directories.
+To choose which part of the file path is affected, tags can be added to the `--name-transform` value.
+
+| Tag | Effect |
+|------|------|
+| `file` | Only transform the leaf name of files (DEFAULT) |
+| `dir` | Only transform name of directories - these may appear anywhere in the path |
+| `all` | Transform the entire path for files and directories |
+
+The tag is added in front of the transform like this: `--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`.
+
+For some conversions using `all` is more likely to be useful, for example `--name-transform all,nfc`.
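+
+As an illustration of the tags (a sketch following the semantics above, not
+generated output):
+
+```
+rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "dir,uppercase"
+// Output: STORIES/The Quick Brown Fox!.txt
+```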
+
+Note that `--name-transform` must not add path separators (`/`) to the name; doing so will cause an error.
+
+## Ordering and Conflicts
+
+* Transformations will be applied in the order specified by the user.
+ * If the `file` tag is in use (the default) then only the leaf name of files will be transformed.
+ * If the `dir` tag is in use then directories anywhere in the path will be transformed.
+ * If the `all` tag is in use then directories and files anywhere in the path will be transformed.
+ * Each transformation will be run one path segment at a time.
+ * If a transformation adds a `/` or ends up with an empty path segment then that will be an error.
+* It is up to the user to put the transformations in a sensible order.
+ * Conflicting transformations, such as `prefix` followed by `trimprefix` or `nfc` followed by `nfd`, are possible.
+ * Instead of enforcing mutual exclusivity, transformations are applied in sequence as specified by the
+user, allowing for intentional use cases (e.g., trimming one prefix before adding another).
+ * Users should be aware that certain combinations may lead to unexpected results and should verify
+transformations using `--dry-run` before execution, as in the sketch below.
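+
+A minimal sketch of previewing a potentially conflicting chain before running
+it for real (hypothetical remote and prefixes):
+
+```
+rclone convmv remote:path --name-transform "file,trimprefix=DRAFT_" --name-transform "file,prefix=FINAL_" --dry-run
+```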
+
+## Race Conditions and Non-Deterministic Behavior
+
+Some transformations, such as `replace=old:new`, may introduce conflicts where multiple source files map to the same destination name.
+This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these.
+* If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic.
+* Running `rclone check` after a sync using such transformations may erroneously report missing or differing files due to overwritten results.
+
+To minimize risks, users should:
+ * Carefully review transformations that may introduce conflicts.
+ * Use `--dry-run` to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
+ * Avoid transformations that cause multiple distinct source files to map to the same destination name.
+ * Consider disabling concurrency with `--transfers=1` if necessary, as in the sketch below.
+ * Certain transformations (e.g. `prefix`) will have a multiplying effect every time they are used. Avoid these when using `bisync`.
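+
+For example, a `replace` that could map several source names onto one
+destination name can be previewed first and, if still wanted, run with
+concurrency disabled (illustrative names):
+
+```
+rclone convmv remote:path --name-transform "file,replace=draft:final" --dry-run
+rclone convmv remote:path --name-transform "file,replace=draft:final" --transfers=1
+```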
+
+
+
+```
+rclone convmv dest:path --name-transform XXX [flags]
+```
+
+## Options
+
+```
+ --create-empty-src-dirs Create empty source dirs on destination after move
+ --delete-empty-src-dirs Delete empty source dirs after move
+ -h, --help help for convmv
+```
+
+Options shared with other commands are described next.
+See the [global flags page](/flags/) for global options not listed here.
+
+### Copy Options
+
+Flags for anything which can copy a file
+
+```
+ --check-first Do all the checks before starting transfers
+ -c, --checksum Check for changes with size & checksum (if available, or fallback to size only)
+ --compare-dest stringArray Include additional server-side paths during comparison
+ --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
+ --ignore-case-sync Ignore case when synchronizing
+ --ignore-checksum Skip post copy check of checksums
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use modtime or checksum
+ -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
+ --immutable Do not modify files, fail if existing files have been modified
+ --inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --max-backlog int Maximum number of objects in sync or check backlog (default 10000)
+ --max-duration Duration Maximum duration rclone will transfer data for (default 0s)
+ --max-transfer SizeSuffix Maximum size of data to transfer (default off)
+ -M, --metadata If set, preserve metadata when copying objects
+ --modify-window Duration Max time diff to be considered the same (default 1ns)
+ --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
+ --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
+ --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
+ --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
+ --no-check-dest Don't check the destination, copy regardless
+ --no-traverse Don't traverse destination file system on copy
+ --no-update-dir-modtime Don't update directory modification times
+ --no-update-modtime Don't update destination modtime if files identical
+ --order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
+ --refresh-times Refresh the modtime of remote files
+ --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
+ --size-only Skip based on size only, not modtime or checksum
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
+ -u, --update Skip files that are newer on the destination
+```
+
+### Important Options
+
+Important flags useful for most commands
+
+```
+ -n, --dry-run Do a trial run with no permanent changes
+ -i, --interactive Enable interactive mode
+ -v, --verbose count Print lots more stuff (repeat for more)
+```
+
+### Filter Options
+
+Flags for filtering directory listings
+
+```
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+```
+
+### Listing Options
+
+Flags for listing directories
+
+```
+ --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+ --fast-list Use recursive list if available; uses more memory but fewer transactions
+```
+
+## See Also
+
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
+
diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md
index e4da96773..db7ebf6d7 100644
--- a/docs/content/commands/rclone_copy.md
+++ b/docs/content/commands/rclone_copy.md
@@ -116,6 +116,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -152,6 +153,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md
index 4a2152c86..b5026bbc5 100644
--- a/docs/content/commands/rclone_copyto.md
+++ b/docs/content/commands/rclone_copyto.md
@@ -36,6 +36,8 @@ This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. It doesn't delete files from
the destination.
+*If you are looking to copy just a byte range of a file, please see 'rclone cat --offset X --count Y'*
+
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
@@ -79,6 +81,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -115,6 +118,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_copyurl.md b/docs/content/commands/rclone_copyurl.md
index 061644fa3..779b6b5e1 100644
--- a/docs/content/commands/rclone_copyurl.md
+++ b/docs/content/commands/rclone_copyurl.md
@@ -17,7 +17,7 @@ Setting `--auto-filename` will attempt to automatically determine the
filename from the URL (after any redirections) and used in the
destination path.
-With `--auto-filename-header` in addition, if a specific filename is
+With `--header-filename` in addition, if a specific filename is
set in HTTP headers, it will be used instead of the name from the URL.
With `--print-filename` in addition, the resulting file name will be
printed.
@@ -28,7 +28,7 @@ destination if there is one with the same name.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
-## Troublshooting
+## Troubleshooting
If you can't get `rclone copyurl` to work then here are some things you can try:
diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md
index 1f3a916bd..0f59f6893 100644
--- a/docs/content/commands/rclone_cryptcheck.md
+++ b/docs/content/commands/rclone_cryptcheck.md
@@ -99,6 +99,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md
index 542cfcd16..d4153f87d 100644
--- a/docs/content/commands/rclone_delete.md
+++ b/docs/content/commands/rclone_delete.md
@@ -75,6 +75,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_hashsum.md b/docs/content/commands/rclone_hashsum.md
index 4ca4f30a2..ab0609a55 100644
--- a/docs/content/commands/rclone_hashsum.md
+++ b/docs/content/commands/rclone_hashsum.md
@@ -36,6 +36,7 @@ Run without a hash to see the list of all supported hashes, e.g.
* whirlpool
* crc32
* sha256
+ * sha512
Then
@@ -74,6 +75,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md
index 54ad257f1..30b8cfd88 100644
--- a/docs/content/commands/rclone_ls.md
+++ b/docs/content/commands/rclone_ls.md
@@ -70,6 +70,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
index 90fafb28d..0fa31360c 100644
--- a/docs/content/commands/rclone_lsd.md
+++ b/docs/content/commands/rclone_lsd.md
@@ -81,6 +81,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_lsf.md b/docs/content/commands/rclone_lsf.md
index a02014a66..73ca2077a 100644
--- a/docs/content/commands/rclone_lsf.md
+++ b/docs/content/commands/rclone_lsf.md
@@ -178,6 +178,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md
index 48ea8b155..952828ff4 100644
--- a/docs/content/commands/rclone_lsjson.md
+++ b/docs/content/commands/rclone_lsjson.md
@@ -150,6 +150,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
index ba2d587ca..d02090071 100644
--- a/docs/content/commands/rclone_lsl.md
+++ b/docs/content/commands/rclone_lsl.md
@@ -71,6 +71,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md
index fe40b7b1e..435ca45b1 100644
--- a/docs/content/commands/rclone_md5sum.md
+++ b/docs/content/commands/rclone_md5sum.md
@@ -58,6 +58,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index 7ce79eb5c..3d324847d 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -571,11 +571,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -900,6 +900,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened and once they have been they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
```
@@ -951,6 +990,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -980,6 +1020,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
index 1d9930121..a705635ab 100644
--- a/docs/content/commands/rclone_move.md
+++ b/docs/content/commands/rclone_move.md
@@ -91,6 +91,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -127,6 +128,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md
index 0de165dc2..14d1646de 100644
--- a/docs/content/commands/rclone_moveto.md
+++ b/docs/content/commands/rclone_moveto.md
@@ -82,6 +82,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -118,6 +119,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md
index 62623c59d..08387a1c2 100644
--- a/docs/content/commands/rclone_ncdu.md
+++ b/docs/content/commands/rclone_ncdu.md
@@ -98,6 +98,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_nfsmount.md b/docs/content/commands/rclone_nfsmount.md
index 079eabbef..9bb84225b 100644
--- a/docs/content/commands/rclone_nfsmount.md
+++ b/docs/content/commands/rclone_nfsmount.md
@@ -572,11 +572,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -901,6 +901,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened and once they have been they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
```
@@ -957,6 +996,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -986,6 +1026,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md
index ab191f57e..c1fded41c 100644
--- a/docs/content/commands/rclone_purge.md
+++ b/docs/content/commands/rclone_purge.md
@@ -15,6 +15,9 @@ include/exclude filters - everything will be removed. Use the
delete files. To delete empty directories only, use command
[rmdir](/commands/rclone_rmdir/) or [rmdirs](/commands/rclone_rmdirs/).
+The concurrency of this operation is controlled by the `--checkers` global flag. However, some backends will
+implement this command directly, in which case `--checkers` will be ignored.
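+
+As a sketch (hypothetical remote path; `--dry-run` previews the deletions, and
+`--checkers` only takes effect where the backend does not purge server-side):
+
+```
+rclone purge remote:dir --checkers 4 --dry-run
+```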
+
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
diff --git a/docs/content/commands/rclone_rcd.md b/docs/content/commands/rclone_rcd.md
index 912a0f677..126727b34 100644
--- a/docs/content/commands/rclone_rcd.md
+++ b/docs/content/commands/rclone_rcd.md
@@ -126,7 +126,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--rc-user` and `--rc-pass` flags.
-If no static users are configured by either of the above methods, and client
+Alternatively, you can have the reverse proxy manage authentication and use the
+username provided in the configured header with `--rc-user-from-header` (e.g., `--rc-user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
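+
+For example, behind a trusted reverse proxy that sets the header (a sketch;
+the proxy is assumed to strip any client-supplied `x-remote-user`):
+
+```
+rclone rcd --rc-addr 127.0.0.1:5572 --rc-user-from-header=x-remote-user
+```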
+
+If neither of the above authentication methods is configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -190,6 +194,7 @@ Flags to control the Remote Control API
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
+ --rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
diff --git a/docs/content/commands/rclone_serve_dlna.md b/docs/content/commands/rclone_serve_dlna.md
index 9ac076f2d..a0c8b8d4d 100644
--- a/docs/content/commands/rclone_serve_dlna.md
+++ b/docs/content/commands/rclone_serve_dlna.md
@@ -134,11 +134,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -463,6 +463,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened and once they have been they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
```
@@ -500,6 +539,7 @@ rclone serve dlna remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -527,6 +567,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_serve_docker.md b/docs/content/commands/rclone_serve_docker.md
index 6c2f84f29..5f4536ba4 100644
--- a/docs/content/commands/rclone_serve_docker.md
+++ b/docs/content/commands/rclone_serve_docker.md
@@ -146,11 +146,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -475,6 +475,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened and once they have been they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
```
@@ -531,6 +570,7 @@ rclone serve docker [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -560,6 +600,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_serve_ftp.md b/docs/content/commands/rclone_serve_ftp.md
index e96f68116..219b1dd79 100644
--- a/docs/content/commands/rclone_serve_ftp.md
+++ b/docs/content/commands/rclone_serve_ftp.md
@@ -127,11 +127,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -456,6 +456,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened and once they have been they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -577,6 +616,7 @@ rclone serve ftp remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -604,6 +644,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md
index f87db8730..36c3de07e 100644
--- a/docs/content/commands/rclone_serve_http.md
+++ b/docs/content/commands/rclone_serve_http.md
@@ -128,7 +128,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
-If no static users are configured by either of the above methods, and client
+Alternatively, you can have the reverse proxy manage authentication and use the
+username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+
+If neither of the above authentication methods is configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -245,11 +249,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -574,6 +578,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened and once they have been they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -664,19 +707,19 @@ rclone serve http remote:path [flags]
## Options
```
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -694,6 +737,7 @@ rclone serve http remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -704,6 +748,7 @@ rclone serve http remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -731,6 +776,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_serve_nfs.md b/docs/content/commands/rclone_serve_nfs.md
index ed8358ec2..7f5b88b37 100644
--- a/docs/content/commands/rclone_serve_nfs.md
+++ b/docs/content/commands/rclone_serve_nfs.md
@@ -7,8 +7,6 @@ versionIntroduced: v1.65
---
# rclone serve nfs
-*Not available in Windows.*
-
Serve the remote as an NFS mount
## Synopsis
@@ -55,7 +53,7 @@ that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance. This sort of cache can't be backed
up and restored as the underlying handles will change. This is Linux
-only. It requres running rclone as root or with `CAP_DAC_READ_SEARCH`.
+only. It requires running rclone as root or with `CAP_DAC_READ_SEARCH`.
You can run rclone with this extra permission by doing this to the
rclone binary `sudo setcap cap_dac_read_search+ep /path/to/rclone`.
@@ -79,6 +77,12 @@ Where `$PORT` is the same port number used in the `serve nfs` command
and `$HOSTNAME` is the network address of the machine that `serve nfs`
was run on.
+If `--vfs-metadata-extension` is in use then, with `--nfs-cache-type disk`
+and `--nfs-cache-type cache`, the metadata files will have the file
+handle of their parent file suffixed with `0x00, 0x00, 0x00, 0x01`.
+This means they can be looked up directly from the parent file handle
+if desired.
+
This command is only available on Unix platforms.
## VFS - Virtual File System
@@ -178,11 +182,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -507,6 +511,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
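+As a quick check, the metadata JSON can be queried with standard
+tools (an illustrative session; `jq` is assumed to be installed):
+
+```
+$ jq -r .mtime /mnt/1G.metadata
+2025-03-03T16:03:39.640238323Z
+```
+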
```
@@ -543,6 +586,7 @@ rclone serve nfs remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -570,6 +614,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_serve_restic.md b/docs/content/commands/rclone_serve_restic.md
index dca327576..6bc94f3d8 100644
--- a/docs/content/commands/rclone_serve_restic.md
+++ b/docs/content/commands/rclone_serve_restic.md
@@ -162,7 +162,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
-If no static users are configured by either of the above methods, and client
+Alternatively, you can have the reverse proxy manage authentication and use the
+username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+
+If none of the above authentication methods is configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -191,16 +195,16 @@ rclone serve restic remote:path [flags]
## Options
```
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
@@ -211,6 +215,7 @@ rclone serve restic remote:path [flags]
--server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--stdio Run an HTTP2 server on stdin/stdout
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
```
See the [global flags page](/flags/) for global options not listed here.
diff --git a/docs/content/commands/rclone_serve_s3.md b/docs/content/commands/rclone_serve_s3.md
index 40813321b..21d72f4e6 100644
--- a/docs/content/commands/rclone_serve_s3.md
+++ b/docs/content/commands/rclone_serve_s3.md
@@ -27,7 +27,7 @@ docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
access.
Please note that some clients may require HTTPS endpoints. See [the
-SSL docs](#ssl-tls) for more information.
+SSL docs](#tls-ssl) for more information.
This command uses the [VFS directory cache](#vfs-virtual-file-system).
All the functionality will work with `--vfs-cache-mode off`. Using
@@ -82,7 +82,7 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
-Note that setting `disable_multipart_uploads = true` is to work around
+Note that setting `use_multipart_uploads = false` is to work around
[a bug](#bugs) which will be fixed in due course.
## Bugs
@@ -154,7 +154,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
-If no static users are configured by either of the above methods, and client
+Alternatively, you can have the reverse proxy manage authentication and use the
+username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+
+If none of the above authentication methods is configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -334,11 +338,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -663,6 +667,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
```
@@ -672,22 +715,22 @@ rclone serve s3 remote:path [flags]
## Options
```
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
--file-perms FileMode File permissions (default 666)
- --force-path-style If true use path style access if false use virtual hosted style (default true) (default true)
+ --force-path-style If true use path style access if false use virtual hosted style (default true)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for s3
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -705,6 +748,7 @@ rclone serve s3 remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -715,6 +759,7 @@ rclone serve s3 remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -742,6 +787,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_serve_sftp.md b/docs/content/commands/rclone_serve_sftp.md
index 477e35dfe..2d2c6974d 100644
--- a/docs/content/commands/rclone_serve_sftp.md
+++ b/docs/content/commands/rclone_serve_sftp.md
@@ -170,11 +170,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -499,6 +499,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened, and once they have been, they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -620,6 +659,7 @@ rclone serve sftp remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -647,6 +687,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
index 397b9a5f3..5da838fa3 100644
--- a/docs/content/commands/rclone_serve_webdav.md
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -171,7 +171,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
-If no static users are configured by either of the above methods, and client
+Alternatively, you can have the reverse proxy manage authentication and use the
+username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+
+If none of the above authentication methods is configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
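+
+For example, to accept the username set by a trusted reverse proxy in
+the `x-remote-user` header (an illustrative session; the header name
+is whatever your proxy is configured to send):
+
+```
+$ rclone serve webdav remote:path --user-from-header x-remote-user
+$ curl -H "x-remote-user: alice" http://127.0.0.1:8080/
+```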
@@ -288,11 +292,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
-`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -617,6 +621,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## VFS Metadata
+
+If you use the `--vfs-metadata-extension` flag you can get the VFS to
+expose files which contain the [metadata](/docs/#metadata) as a JSON
+blob. These files will not appear in the directory listing, but can be
+`stat`-ed and opened and once they have been they **will** appear in
+directory listings until the directory cache expires.
+
+Note that some backends won't create metadata unless you pass in the
+`--metadata` flag.
+
+For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
+we get
+
+```
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ "atime": "2025-03-04T17:34:22.317069787Z",
+ "btime": "2025-03-03T16:03:37.708253808Z",
+ "gid": "1000",
+ "mode": "100664",
+ "mtime": "2025-03-03T16:03:39.640238323Z",
+ "uid": "1000"
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+```
+
+If the file has no metadata it will be returned as `{}` and if there
+is an error reading the metadata the error will be returned as
+`{"error":"error string"}`.
+
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -707,12 +750,12 @@ rclone serve webdav remote:path [flags]
## Options
```
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
@@ -721,7 +764,7 @@ rclone serve webdav remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -739,6 +782,7 @@ rclone serve webdav remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -749,6 +793,7 @@ rclone serve webdav remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -776,6 +821,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md
index ad6450d9b..cae7f22b9 100644
--- a/docs/content/commands/rclone_sha1sum.md
+++ b/docs/content/commands/rclone_sha1sum.md
@@ -61,6 +61,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md
index 7e241f557..c4bcc0367 100644
--- a/docs/content/commands/rclone_size.md
+++ b/docs/content/commands/rclone_size.md
@@ -56,6 +56,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md
index e06b1ccd0..d4ce2f3cd 100644
--- a/docs/content/commands/rclone_sync.md
+++ b/docs/content/commands/rclone_sync.md
@@ -147,6 +147,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -171,6 +172,7 @@ Flags used for sync commands
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
+ --list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
@@ -202,6 +204,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_touch.md b/docs/content/commands/rclone_touch.md
index f1725579f..82b5bf4df 100644
--- a/docs/content/commands/rclone_touch.md
+++ b/docs/content/commands/rclone_touch.md
@@ -19,6 +19,7 @@ unless `--no-create` or `--recursive` is provided.
If `--recursive` is used then recursively sets the modification
time on all existing files that is found under the path. Filters are supported,
and you can test with the `--dry-run` or the `--interactive`/`-i` flag.
+This will touch `--transfers` files concurrently.
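+
+For example, to set the current time on all files under a path with 8
+concurrent operations (an illustrative invocation):
+
+    rclone touch remote:path --recursive --transfers 8
+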
If `--timestamp` is used then sets the modification time to that
time instead of the current time. Times may be specified as one of:
@@ -71,6 +72,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_tree.md b/docs/content/commands/rclone_tree.md
index 0460b61b3..74bfa15fe 100644
--- a/docs/content/commands/rclone_tree.md
+++ b/docs/content/commands/rclone_tree.md
@@ -81,6 +81,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md
index 127ded349..9aca17dd8 100644
--- a/docs/content/commands/rclone_version.md
+++ b/docs/content/commands/rclone_version.md
@@ -46,6 +46,9 @@ Or
beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
+If you supply the --deps flag then rclone will print a list of all the
+packages it depends on and their versions along with some other
+information about the build.
```
@@ -56,6 +59,7 @@ rclone version [flags]
```
--check Check for new version
+ --deps Show the Go dependencies
-h, --help help for version
```
diff --git a/docs/content/drive.md b/docs/content/drive.md
index 775163a8f..55607450a 100644
--- a/docs/content/drive.md
+++ b/docs/content/drive.md
@@ -1641,6 +1641,32 @@ attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
+### moveid
+
+Move files by ID
+
+ rclone backend moveid remote: [options] [+]
+
+This command moves files by ID
+
+Usage:
+
+ rclone backend moveid drive: ID path
+ rclone backend moveid drive: ID1 path1 ID2 path2
+
+It moves the drive file with ID given to the path (an rclone path which
+will be passed internally to rclone moveto).
+
+The path should end with a / to indicate that the file should be
+moved as named into this directory. If it doesn't end with a / then
+the last path component will be used as the file name.
+
+If the destination is a drive backend then server-side moving will be
+attempted if possible.
+
+Use the --interactive/-i or --dry-run flag to see what would be moved beforehand.
+
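+For example, to move a file by ID into a directory keeping its name
+(the ID shown is illustrative):
+
+    rclone backend moveid drive: 1exampleFileID0123456789 backup/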
+
### exportformats
Dump the export formats for debug purposes
diff --git a/docs/content/dropbox.md b/docs/content/dropbox.md
index 6a9211299..af461e859 100644
--- a/docs/content/dropbox.md
+++ b/docs/content/dropbox.md
@@ -431,6 +431,56 @@ Properties:
- Type: string
- Required: false
+#### --dropbox-export-formats
+
+Comma separated list of preferred formats for exporting files
+
+Certain Dropbox files can only be accessed by exporting them to another format.
+These include Dropbox Paper documents.
+
+For each such file, rclone will choose the first format on this list that Dropbox
+considers valid. If none is valid, it will choose Dropbox's default format.
+
+Known formats include: "html", "md" (markdown)
+
+Properties:
+
+- Config: export_formats
+- Env Var: RCLONE_DROPBOX_EXPORT_FORMATS
+- Type: CommaSepList
+- Default: html,md
+
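+For example, to prefer markdown over HTML exports you could set this
+in your config (an illustrative snippet; the remote name is
+arbitrary):
+
+```
+[dropbox]
+type = dropbox
+export_formats = md,html
+```
+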
+#### --dropbox-skip-exports
+
+Skip exportable files in all listings.
+
+If given, exportable files practically become invisible to rclone.
+
+Properties:
+
+- Config: skip_exports
+- Env Var: RCLONE_DROPBOX_SKIP_EXPORTS
+- Type: bool
+- Default: false
+
+#### --dropbox-show-all-exports
+
+Show all exportable files in listings.
+
+Adding this flag will allow all exportable files to be copied server-side.
+Note that rclone doesn't add extensions to the exportable file names in this mode.
+
+Do **not** use this flag when trying to download exportable files - rclone
+will fail to download them.
+
+
+Properties:
+
+- Config: show_all_exports
+- Env Var: RCLONE_DROPBOX_SHOW_ALL_EXPORTS
+- Type: bool
+- Default: false
+
#### --dropbox-batch-mode
Upload file batching sync|async|off.
@@ -508,7 +558,7 @@ Properties:
#### --dropbox-batch-commit-timeout
-Max time to wait for a batch to finish committing
+Max time to wait for a batch to finish committing. (no longer used)
Properties:
diff --git a/docs/content/filelu.md b/docs/content/filelu.md
index fb411c469..766330e8f 100644
--- a/docs/content/filelu.md
+++ b/docs/content/filelu.md
@@ -188,6 +188,19 @@ Properties:
Here are the Advanced options specific to filelu (FileLu Cloud Storage).
+#### --filelu-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_FILELU_ENCODING
+- Type: Encoding
+- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation
+
#### --filelu-description
Description of the remote.
@@ -199,70 +212,6 @@ Properties:
- Type: string
- Required: false
-## Backend commands
-
-Here are the commands specific to the filelu backend.
-
-Run them with
-
- rclone backend COMMAND remote:
-
-The help below will explain what arguments each command takes.
-
-See the [backend](/commands/rclone_backend/) command for more
-info on how to pass options and arguments.
-
-These can be run on a running backend using the rc command
-[backend/command](/rc/#backend-command).
-
-### rename
-
-Rename a file in a FileLu directory
-
- rclone backend rename remote: [options] [+]
-
-
-For example:
-
- rclone backend rename filelu:/file-path/hello.txt "hello_new_name.txt"
-
-
-### movefile
-
-Move file within the remote FileLu directory
-
- rclone backend movefile remote: [options] [+]
-
-
-For example:
-
- rclone backend movefile filelu:/source-path/hello.txt /destination-path/
-
-
-### movefolder
-
-Move a folder on remote FileLu
-
- rclone backend movefolder remote: [options] [+]
-
-
-For example:
-
- rclone backend movefolder filelu:/sorce-fld-path/hello-folder/ /destication-fld-path/hello-folder/
-
-
-### renamefolder
-
-Rename a folder on FileLu
-
- rclone backend renamefolder remote: [options] [+]
-
-
-For example:
-
- rclone backend renamefolder filelu:/folder-path/folder-name "new-folder-name"
-
-
{{< rem autogenerated options stop >}}
## Limitations
diff --git a/docs/content/flags.md b/docs/content/flags.md
index f9d32c490..b830e66b4 100644
--- a/docs/content/flags.md
+++ b/docs/content/flags.md
@@ -9,8 +9,6 @@ description: "Rclone Global Flags"
This describes the global flags available to every rclone command
split into groups.
-See the [Options section](/docs/#options) for syntax and usage advice.
-
## Copy
@@ -39,6 +37,7 @@ Flags for anything which can copy a file.
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -64,6 +63,7 @@ Flags used for sync commands.
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
+ --list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
@@ -112,13 +112,14 @@ Flags for general networking and HTTP stuff.
--header stringArray Set HTTP header for all transactions
--header-download stringArray Set HTTP header for download transactions
--header-upload stringArray Set HTTP header for upload transactions
+ --max-connections int Maximum number of simultaneous backend API connections, 0 for unlimited
--no-check-certificate Do not verify the server SSL certificate (insecure)
--no-gzip-encoding Don't set Accept-Encoding: gzip
--timeout Duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.70.0")
```
@@ -153,6 +154,7 @@ Flags for general configuration of rclone.
-i, --interactive Enable interactive mode
--kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s)
--low-level-retries int Number of low level retries to do (default 10)
+ --max-buffer-memory SizeSuffix If set, don't allocate more than this amount of memory as buffers (default off)
--no-console Hide console window (supported on Windows only)
--no-unicode-normalization Don't normalize unicode characters in filenames
--password-command SpaceSepList Command for supplying password for encrypted configuration
@@ -190,6 +192,7 @@ Flags for filtering directory listings.
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -223,7 +226,7 @@ Flags for logging and statistics.
```
--log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
+ --log-format Bits Comma separated list of log format options (default date,time)
--log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
@@ -290,6 +293,7 @@ Flags to control the Remote Control API.
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
+ --rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
@@ -319,6 +323,7 @@ Flags to control the Metrics HTTP endpoint..
--metrics-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--metrics-template string User-specified template
--metrics-user string User name for authentication
+ --metrics-user-from-header string User name from a defined HTTP header
--rc-enable-metrics Enable the Prometheus metrics path at the remote control server
```
@@ -339,6 +344,8 @@ Backend-only flags (these can be set in the config file also).
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
+ --azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
@@ -362,6 +369,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-az Use Azure CLI tool az for authentication
+ --azureblob-use-copy-blob Whether to use the Copy Blob API when copying to the same storage account (default true)
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -374,6 +382,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
--azurefiles-connection-string string Azure Files Connection String
--azurefiles-description string Description of the remote
+ --azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
--azurefiles-endpoint string Endpoint for the service
--azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -388,6 +397,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --azurefiles-use-az Use Azure CLI tool az for authentication
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -450,12 +460,14 @@ Backend-only flags (these can be set in the config file also).
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
+ --cloudinary-adjust-media-files-extensions Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems (default true)
--cloudinary-api-key string Cloudinary API Key
--cloudinary-api-secret string Cloudinary API Secret
--cloudinary-cloud-name string Cloudinary Environment Name
--cloudinary-description string Description of the remote
--cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-media-extensions stringArray Cloudinary supported media extensions (default 3ds,3g2,3gp,ai,arw,avi,avif,bmp,bw,cr2,cr3,djvu,dng,eps3,fbx,flif,flv,gif,glb,gltf,hdp,heic,heif,ico,indd,jp2,jpe,jpeg,jpg,jxl,jxr,m2ts,mov,mp4,mpeg,mts,mxf,obj,ogv,pdf,ply,png,psd,svg,tga,tif,tiff,ts,u3ma,usdz,wdp,webm,webp,wmv)
--cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
--cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--combine-description string Description of the remote
@@ -479,6 +491,10 @@ Backend-only flags (these can be set in the config file also).
--crypt-show-mapping For all files listed show how the names encrypt
--crypt-strict-names If set, this will raise an error when crypt comes across a filename that can't be decrypted
--crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin")
+ --doi-description string Description of the remote
+ --doi-doi string The DOI or the doi.org URL
+ --doi-doi-resolver-api-url string The URL of the DOI resolver API to use
+ --doi-provider string DOI provider
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -530,7 +546,6 @@ Backend-only flags (these can be set in the config file also).
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
- --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -540,11 +555,14 @@ Backend-only flags (these can be set in the config file also).
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-export-formats CommaSepList Comma separated list of preferred formats for exporting files (default html,md)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
+ --dropbox-show-all-exports Show all exportable files in listings
+ --dropbox-skip-exports Skip exportable files in all listings
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
@@ -562,6 +580,9 @@ Backend-only flags (these can be set in the config file also).
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
+ --filelu-description string Description of the remote
+ --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
+ --filelu-key string Your FileLu Rclone key from My Account
--filescom-api-key string The API key used to authenticate with Files.com
--filescom-description string Description of the remote
--filescom-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -621,7 +642,6 @@ Backend-only flags (these can be set in the config file also).
--gofile-list-chunk int Number of items to list in each call (default 1000)
--gofile-root-folder-id string ID of the root folder
--gphotos-auth-url string Auth server URL
- --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -689,6 +709,8 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
+ --internetarchive-item-derive Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload (default true)
+ --internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
@@ -784,6 +806,7 @@ Backend-only flags (these can be set in the config file also).
--onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
+ --onedrive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default off)
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--oos-compartment string Specify compartment OCID, if you need to list buckets
@@ -809,6 +832,7 @@ Backend-only flags (these can be set in the config file also).
--oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard")
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --opendrive-access string Files and folders will be uploaded with this access permission (default private) (default "private")
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-description string Description of the remote
--opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -905,6 +929,8 @@ Backend-only flags (these can be set in the config file also).
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
+ --s3-ibm-api-key string IBM API Key to be used to obtain IAM token
+ --s3-ibm-resource-instance-id string IBM service instance id
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
@@ -925,6 +951,7 @@ Backend-only flags (these can be set in the config file also).
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
+ --s3-sign-accept-encoding Tristate Set if rclone should include Accept-Encoding as part of the signature (default unset)
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
@@ -941,6 +968,7 @@ Backend-only flags (these can be set in the config file also).
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-use-unsigned-payload Tristate Whether to use an unsigned payload in PutObject (default unset)
+ --s3-use-x-id Tristate Set if rclone should add x-id URL parameters (default unset)
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-version-deleted Show deleted file markers when using versions
@@ -966,6 +994,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-host-key-algorithms SpaceSepList Space separated list of host key algorithms, ordered by preference
+ --sftp-http-proxy string URL for HTTP CONNECT proxy
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-exchange SpaceSepList Space separated list of key exchange algorithms, ordered by preference
--sftp-key-file string Path to PEM-encoded private key file
@@ -1020,6 +1049,7 @@ Backend-only flags (these can be set in the config file also).
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
--smb-spn string Service principal name
+ --smb-use-kerberos Use Kerberos authentication
--smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
diff --git a/docs/content/googlephotos.md b/docs/content/googlephotos.md
index 65db63340..8b38c5594 100644
--- a/docs/content/googlephotos.md
+++ b/docs/content/googlephotos.md
@@ -504,7 +504,7 @@ Properties:
#### --gphotos-batch-commit-timeout
-Max time to wait for a batch to finish committing
+Max time to wait for a batch to finish committing. (no longer used)
Properties:
diff --git a/docs/content/internetarchive.md b/docs/content/internetarchive.md
index aacafb65b..a165c90a8 100644
--- a/docs/content/internetarchive.md
+++ b/docs/content/internetarchive.md
@@ -192,6 +192,19 @@ Properties:
- Type: string
- Required: false
+#### --internetarchive-item-derive
+
+Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload.
+The derive process produces a number of secondary files from an upload to make the upload more usable on the web.
+Setting this to false is useful for uploading files that are already in a format that IA can display, or to reduce the burden on IA's infrastructure.
+
+Properties:
+
+- Config: item_derive
+- Env Var: RCLONE_INTERNETARCHIVE_ITEM_DERIVE
+- Type: bool
+- Default: true
+
### Advanced options
Here are the Advanced options specific to internetarchive (Internet Archive).
@@ -222,6 +235,18 @@ Properties:
- Type: string
- Default: "https://archive.org"
+#### --internetarchive-item-metadata
+
+Metadata to be set on the IA item. This is different from the file-level metadata that can be set using --metadata-set.
+Format is key=value and the 'x-archive-meta-' prefix is added automatically.
+
+Properties:
+
+- Config: item_metadata
+- Env Var: RCLONE_INTERNETARCHIVE_ITEM_METADATA
+- Type: stringArray
+- Default: []
+
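+For example, to set two item metadata values on upload (the remote,
+item name and metadata values here are purely illustrative):
+
+    rclone copy file.mp4 remote:item --internetarchive-item-metadata title=my-title --internetarchive-item-metadata language=en
+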
#### --internetarchive-disable-checksum
Don't ask the server to test against MD5 checksum calculated by rclone.
diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md
index 91c852083..1206064be 100644
--- a/docs/content/onedrive.md
+++ b/docs/content/onedrive.md
@@ -319,7 +319,7 @@ Properties:
- "us"
- Microsoft Cloud for US Government
- "de"
- - Microsoft Cloud Germany
+ - Microsoft Cloud Germany (deprecated - try global region first).
- "cn"
- Azure and Office 365 operated by Vnet Group in China
@@ -392,6 +392,27 @@ Properties:
- Type: bool
- Default: false
+#### --onedrive-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+Any files larger than this will be uploaded in chunks of chunk_size.
+
+This is disabled by default as single part uploads cause rclone to
+use twice the storage on OneDrive Business: when rclone sets the
+modification time after the upload, OneDrive creates a new version.
+
+See: https://github.com/rclone/rclone/issues/1716
+
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_ONEDRIVE_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: off
+
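+For example, to upload files of up to 100 MiB in a single request
+rather than in chunks (the remote name is illustrative):
+
+    rclone copy /path/to/files remote: --onedrive-upload-cutoff 100M
+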
#### --onedrive-chunk-size
Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
diff --git a/docs/content/opendrive.md b/docs/content/opendrive.md
index 6908cfdf1..b38b8d01a 100644
--- a/docs/content/opendrive.md
+++ b/docs/content/opendrive.md
@@ -163,6 +163,24 @@ Properties:
- Type: SizeSuffix
- Default: 10Mi
+#### --opendrive-access
+
+Files and folders will be uploaded with this access permission (default private)
+
+Properties:
+
+- Config: access
+- Env Var: RCLONE_OPENDRIVE_ACCESS
+- Type: string
+- Default: "private"
+- Examples:
+ - "private"
+ - Access can be granted to selected users only, allowing them to view, read or write what is essential for them.
+ - "public"
+ - The file or folder can be downloaded by anyone from a web browser. The link can be shared in any way.
+ - "hidden"
+ - The file or folder has the same access restrictions as Public, but can only be reached by users who know the URL of the file or folder link.
+
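+For example, to upload files that anyone can download via a link
+(the remote name is illustrative):
+
+    rclone copy /path/to/files remote:folder --opendrive-access public
+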
#### --opendrive-description
Description of the remote.
diff --git a/docs/content/rc.md b/docs/content/rc.md
index 142e8ba59..03cdcf777 100644
--- a/docs/content/rc.md
+++ b/docs/content/rc.md
@@ -652,6 +652,7 @@ This takes the following parameters:
- opt - a dictionary of options to control the configuration
- obscure - declare passwords are plain and need obscuring
- noObscure - declare passwords are already obscured and don't need obscuring
+ - noOutput - don't print anything to stdout
- nonInteractive - don't interact with a user, return questions
- continue - continue the config process with an answer
- all - ask all the config questions not just the post config ones
@@ -766,6 +767,7 @@ This takes the following parameters:
- opt - a dictionary of options to control the configuration
- obscure - declare passwords are plain and need obscuring
- noObscure - declare passwords are already obscured and don't need obscuring
+ - noOutput - don't print anything to stdout
- nonInteractive - don't interact with a user, return questions
- continue - continue the config process with an answer
- all - ask all the config questions not just the post config ones
@@ -950,7 +952,8 @@ returned.
Parameters
-- group - name of the stats group (string)
+- group - name of the stats group (string, optional)
+- short - if true will not return the transferring and checking arrays (boolean, optional)
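+
+For example, to fetch cut-down stats without the transferring and
+checking arrays (a sketch of typical usage):
+
+    rclone rc core/stats short=true
+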
Returns the following values:
@@ -965,6 +968,7 @@ Returns the following values:
"fatalError": boolean whether there has been at least one fatal error,
"lastError": last error string,
"renames" : number of files renamed,
+ "listed" : number of directory entries listed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
"serverSideCopies": number of server side copies done,
"serverSideCopyBytes": number bytes server side copied,
@@ -1931,6 +1935,141 @@ check that parameter passing is working properly.
**Authentication is required for this call.**
+### serve/list: Show running servers {#serve-list}
+
+Show running servers with IDs.
+
+This takes no parameters and returns
+
+- list: list of running serve commands
+
+Each list element will have
+
+- id: ID of the server
+- addr: address the server is running on
+- params: parameters used to start the server
+
+Eg
+
+ rclone rc serve/list
+
+Returns
+
+```json
+{
+ "list": [
+ {
+ "addr": "[::]:4321",
+ "id": "nfs-ffc2a4e5",
+ "params": {
+ "fs": "remote:",
+ "opt": {
+ "ListenAddr": ":4321"
+ },
+ "type": "nfs",
+ "vfsOpt": {
+ "CacheMode": "full"
+ }
+ }
+ }
+ ]
+}
+```
+
+**Authentication is required for this call.**
+
+### serve/start: Create a new server {#serve-start}
+
+Create a new server with the specified parameters.
+
+This takes the following parameters:
+
+- `type` - type of server: `http`, `webdav`, `ftp`, `sftp`, `nfs`, etc.
+- `fs` - remote storage path to serve
+- `addr` - the ip:port to run the server on, eg ":1234" or "localhost:1234"
+
+Other parameters are as described in the documentation for the
+relevant [rclone serve](/commands/rclone_serve/) command line options.
+To translate a command line option to an rc parameter, remove the
+leading `--` and replace `-` with `_`, so `--vfs-cache-mode` becomes
+`vfs_cache_mode`. Note that global parameters must be set with
+`_config` and `_filter` as described above.
+
+Examples:
+
+ rclone rc serve/start type=nfs fs=remote: addr=:4321 vfs_cache_mode=full
+ rclone rc serve/start --json '{"type":"nfs","fs":"remote:","addr":":1234","vfs_cache_mode":"full"}'
+
+This will give the reply
+
+```json
+{
+ "addr": "[::]:4321", // Address the server was started on
+ "id": "nfs-ecfc6852" // Unique identifier for the server instance
+}
+```
+
+Or an error if it failed to start.
+
+Stop the server with `serve/stop` and list the running servers with `serve/list`.
+
+**Authentication is required for this call.**
+
+### serve/stop: Stop a running server {#serve-stop}
+
+Stops a running `serve` instance by ID.
+
+This takes the following parameters:
+
+- id: as returned by serve/start
+
+This will give an empty response if successful or an error if not.
+
+Example:
+
+ rclone rc serve/stop id=12345
+
+**Authentication is required for this call.**
+
+### serve/stopall: Stop all active servers {#serve-stopall}
+
+Stop all active servers.
+
+This will stop all active servers started by serve/start.
+
+ rclone rc serve/stopall
+
+**Authentication is required for this call.**
+
+### serve/types: Show all possible serve types {#serve-types}
+
+This shows all possible serve types and returns them as a list.
+
+This takes no parameters and returns
+
+- types: list of serve types, eg "nfs", "sftp", etc
+
+The serve types are strings like "http", "sftp", "nfs" and can
+be passed to serve/start as the type parameter.
+
+Eg
+
+ rclone rc serve/types
+
+Returns
+
+```json
+{
+ "types": [
+ "http",
+ "sftp",
+ "nfs"
+ ]
+}
+```
+
+**Authentication is required for this call.**
+
### sync/bisync: Perform bidirectional synchronization between two paths. {#sync-bisync}
This takes the following parameters
diff --git a/docs/content/s3.md b/docs/content/s3.md
index 48bc4c4fb..0225e9da7 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -792,7 +792,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-provider
@@ -821,6 +821,10 @@ Properties:
- DigitalOcean Spaces
- "Dreamhost"
- Dreamhost DreamObjects
+ - "Exaba"
+ - Exaba Object Storage
+ - "FlashBlade"
+ - Pure Storage FlashBlade Object Storage
- "GCS"
- Google Cloud Storage
- "HuaweiOBS"
@@ -841,6 +845,8 @@ Properties:
- Linode Object Storage
- "Magalu"
- Magalu Object Storage
+ - "Mega"
+ - MEGA S4 Object Storage
- "Minio"
- Minio Object Storage
- "Netease"
@@ -1110,7 +1116,7 @@ Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
-- Provider: !Storj,Selectel,Synology,Cloudflare
+- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade,Mega
- Type: string
- Required: false
- Examples:
@@ -1222,9 +1228,33 @@ Properties:
- "GLACIER_IR"
- Glacier Instant Retrieval storage class
+#### --s3-ibm-api-key
+
+IBM API Key to be used to obtain IAM token
+
+Properties:
+
+- Config: ibm_api_key
+- Env Var: RCLONE_S3_IBM_API_KEY
+- Provider: IBMCOS
+- Type: string
+- Required: false
+
+#### --s3-ibm-resource-instance-id
+
+IBM service instance id
+
+Properties:
+
+- Config: ibm_resource_instance_id
+- Env Var: RCLONE_S3_IBM_RESOURCE_INSTANCE_ID
+- Provider: IBMCOS
+- Type: string
+- Required: false
+
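+A minimal IBM COS remote using IAM authentication might look like this
+in the config file (all values are placeholders):
+
+    [ibmcos]
+    type = s3
+    provider = IBMCOS
+    endpoint = s3.us-south.cloud-object-storage.appdomain.cloud
+    ibm_api_key = XXXX
+    ibm_resource_instance_id = crn:v1:bluemix:public:cloud-object-storage:global:a/XXXX::
+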
### Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-bucket-acl
@@ -1243,6 +1273,7 @@ Properties:
- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
+- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade
- Type: string
- Required: false
- Examples:
@@ -2058,6 +2089,46 @@ Properties:
- Type: Tristate
- Default: unset
+#### --s3-use-x-id
+
+Set if rclone should add x-id URL parameters.
+
+You can change this if you want to disable the AWS SDK from
+adding x-id URL parameters.
+
+This shouldn't be necessary in normal operation.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+
+Properties:
+
+- Config: use_x_id
+- Env Var: RCLONE_S3_USE_X_ID
+- Type: Tristate
+- Default: unset
+
+#### --s3-sign-accept-encoding
+
+Set if rclone should include Accept-Encoding as part of the signature.
+
+You can change this if you want to stop rclone including
+Accept-Encoding as part of the signature.
+
+This shouldn't be necessary in normal operation.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+
+Properties:
+
+- Config: sign_accept_encoding
+- Env Var: RCLONE_S3_SIGN_ACCEPT_ENCODING
+- Type: Tristate
+- Default: unset
+
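+For example, to override both of these settings for a provider that
+rejects the SDK defaults (a sketch; the remote name is illustrative):
+
+    rclone lsd remote: --s3-use-x-id=false --s3-sign-accept-encoding=false
+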
#### --s3-directory-bucket
Set to use AWS Directory Buckets
@@ -2177,7 +2248,7 @@ or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Fre
Usage Examples:
- rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
+ rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
diff --git a/docs/content/sftp.md b/docs/content/sftp.md
index 5da567247..e5389269f 100644
--- a/docs/content/sftp.md
+++ b/docs/content/sftp.md
@@ -1065,6 +1065,20 @@ Properties:
- Type: string
- Required: false
+#### --sftp-http-proxy
+
+URL for HTTP CONNECT proxy
+
+Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
+
+
+Properties:
+
+- Config: http_proxy
+- Env Var: RCLONE_SFTP_HTTP_PROXY
+- Type: string
+- Required: false
+
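+For example (the proxy URL and credentials are illustrative):
+
+    rclone lsd remote: --sftp-http-proxy http://user:pass@proxy.example.com:3128
+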
#### --sftp-copy-is-hardlink
Set to enable server side copies using hardlinks.
diff --git a/docs/content/smb.md b/docs/content/smb.md
index 0529f1dd6..a78e6da6d 100644
--- a/docs/content/smb.md
+++ b/docs/content/smb.md
@@ -190,6 +190,23 @@ Properties:
- Type: string
- Required: false
+#### --smb-use-kerberos
+
+Use Kerberos authentication.
+
+If set, rclone will use Kerberos authentication instead of NTLM. This
+requires a valid Kerberos configuration and credentials cache to be
+available, either in the default locations or as specified by the
+KRB5_CONFIG and KRB5CCNAME environment variables.
+
+
+Properties:
+
+- Config: use_kerberos
+- Env Var: RCLONE_SMB_USE_KERBEROS
+- Type: bool
+- Default: false
+
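+For example, with a valid ticket in the default credentials cache
+(the realm and remote name are illustrative):
+
+    kinit user@EXAMPLE.COM
+    rclone lsd remote: --smb-use-kerberos
+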
### Advanced options
Here are the Advanced options specific to smb (SMB / CIFS).
diff --git a/docs/content/webdav.md b/docs/content/webdav.md
index 7c9835365..72a9c71af 100644
--- a/docs/content/webdav.md
+++ b/docs/content/webdav.md
@@ -146,7 +146,9 @@ Properties:
- "nextcloud"
- Nextcloud
- "owncloud"
- - Owncloud
+ - Owncloud 10 PHP based WebDAV server
+ - "infinitescale"
+ - ownCloud Infinite Scale
- "sharepoint"
- Sharepoint Online, authenticated by Microsoft account
- "sharepoint-ntlm"
diff --git a/rclone.1 b/rclone.1
index 28e0c3304..e2c6b3c5f 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,8 +1,80 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
-.TH "rclone" "1" "Jan 12, 2025" "User Manual" ""
+.TH "rclone" "1" "Jun 17, 2025" "User Manual" ""
.hy
+.SH NAME
+.PP
+rclone - manage files on cloud storage
+.SH SYNOPSIS
+.IP
+.nf
+\f[C]
+Usage:
+ rclone [flags]
+ rclone [command]
+
+Available commands:
+ about Get quota information from the remote.
+ authorize Remote authorization.
+ backend Run a backend-specific command.
+ bisync Perform bidirectional synchronization between two paths.
+ cat Concatenates any files and sends them to stdout.
+ check Checks the files in the source and destination match.
+ checksum Checks the files in the destination against a SUM file.
+ cleanup Clean up the remote if possible.
+ completion Output completion script for a given shell.
+ config Enter an interactive configuration session.
+ convmv Convert file and directory names in place.
+ copy Copy files from source to dest, skipping identical files.
+ copyto Copy files from source to dest, skipping identical files.
+ copyurl Copy the contents of the URL supplied content to dest:path.
+ cryptcheck Cryptcheck checks the integrity of an encrypted remote.
+ cryptdecode Cryptdecode returns unencrypted file names.
+ dedupe Interactively find duplicate filenames and delete/rename them.
+ delete Remove the files in path.
+ deletefile Remove a single file from remote.
+ gendocs Output markdown docs for rclone to the directory supplied.
+ gitannex Speaks with git-annex over stdin/stdout.
+ hashsum Produces a hashsum file for all the objects in the path.
+ help Show help for rclone commands, flags and backends.
+ link Generate public link to file/folder.
+ listremotes List all the remotes in the config file and defined in environment variables.
+ ls List the objects in the path with size and path.
+ lsd List all directories/containers/buckets in the path.
+ lsf List directories and objects in remote:path formatted for parsing.
+ lsjson List directories and objects in the path in JSON format.
+ lsl List the objects in path with modification time, size and path.
+ md5sum Produces an md5sum file for all the objects in the path.
+ mkdir Make the path if it doesn\[aq]t already exist.
+ mount Mount the remote as file system on a mountpoint.
+ move Move files from source to dest.
+ moveto Move file or directory from source to dest.
+ ncdu Explore a remote with a text based user interface.
+ nfsmount Mount the remote as file system on a mountpoint.
+ obscure Obscure password for use in the rclone config file.
+ purge Remove the path and all of its contents.
+ rc Run a command against a running rclone.
+ rcat Copies standard input to file on remote.
+ rcd Run rclone listening to remote control commands only.
+ rmdir Remove the empty directory at path.
+ rmdirs Remove empty directories under the path.
+ selfupdate Update the rclone binary.
+ serve Serve a remote over a protocol.
+ settier Changes storage class/tier of objects in remote.
+ sha1sum Produces an sha1sum file for all the objects in the path.
+ size Prints the total size and number of objects in remote:path.
+ sync Make source and dest identical, modifying destination only.
+ test Run a test command
+ touch Create new file or change file modification time.
+ tree List the contents of the remote in a tree like fashion.
+ version Show the version number.
+
+Use \[dq]rclone [command] --help\[dq] for more information about a command.
+Use \[dq]rclone help flags\[dq] to see the global flags.
+Use \[dq]rclone help backends\[dq] for a list of supported services.
+\f[R]
+.fi
.SH Rclone syncs your files to cloud storage
.PP
.IP \[bu] 2
@@ -182,6 +254,10 @@ Fastmail Files
.IP \[bu] 2
Files.com
.IP \[bu] 2
+FileLu Cloud Storage
+.IP \[bu] 2
+FlashBlade
+.IP \[bu] 2
FTP
.IP \[bu] 2
Gofile
@@ -230,7 +306,9 @@ Mail.ru Cloud
.IP \[bu] 2
Memset Memstore
.IP \[bu] 2
-Mega
+MEGA
+.IP \[bu] 2
+MEGA S4
.IP \[bu] 2
Memory
.IP \[bu] 2
@@ -808,7 +886,7 @@ Its current version is as below.
.SS Source installation
.PP
Make sure you have git and Go (https://golang.org/) installed.
-Go version 1.18 or newer is required, the latest release is recommended.
+Go version 1.22 or newer is required, the latest release is recommended.
You can get it from your package manager, or download it from
golang.org/dl (https://golang.org/dl/).
Then you can run the following:
@@ -1240,6 +1318,8 @@ Dropbox (https://rclone.org/dropbox/)
.IP \[bu] 2
Enterprise File Fabric (https://rclone.org/filefabric/)
.IP \[bu] 2
+FileLu Cloud Storage (https://rclone.org/filelu/)
+.IP \[bu] 2
Files.com (https://rclone.org/filescom/)
.IP \[bu] 2
FTP (https://rclone.org/ftp/)
@@ -1616,6 +1696,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
--no-update-dir-modtime Don\[aq]t update directory modification times
@@ -1654,6 +1735,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -1861,6 +1943,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
--no-update-dir-modtime Don\[aq]t update directory modification times
@@ -1886,6 +1969,7 @@ Flags used for sync commands
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
+ --list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
@@ -1919,6 +2003,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2039,6 +2124,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
--no-update-dir-modtime Don\[aq]t update directory modification times
@@ -2077,6 +2163,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2194,6 +2281,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2238,6 +2326,11 @@ To delete empty directories only, use command
rmdir (https://rclone.org/commands/rclone_rmdir/) or
rmdirs (https://rclone.org/commands/rclone_rmdirs/).
.PP
+The concurrency of this operation is controlled by the
+\f[C]--checkers\f[R] global flag.
+However, some backends will implement this command directly, in which
+case \f[C]--checkers\f[R] will be ignored.
+.PP
\f[B]Important\f[R]: Since this can cause data loss, test first with the
\f[C]--dry-run\f[R] or the \f[C]--interactive\f[R]/\f[C]-i\f[R] flag.
.IP
@@ -2466,6 +2559,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2576,6 +2670,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2702,6 +2797,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2812,6 +2908,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2899,6 +2996,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -2989,6 +3087,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3072,6 +3171,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3155,6 +3255,10 @@ beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
\f[R]
.fi
+.PP
+If you supply the --deps flag then rclone will print a list of all the
+packages it depends on and their versions along with some other
+information about the build.
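+.PP
+For example:
+.IP
+.nf
+\f[C]
+rclone version --deps
+\f[R]
+.fi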
.IP
.nf
\f[C]
@@ -3166,6 +3270,7 @@ rclone version [flags]
.nf
\f[C]
--check Check for new version
+ --deps Show the Go dependencies
-h, --help help for version
\f[R]
.fi
@@ -3507,6 +3612,11 @@ Remote authorization.
Used to authorize a remote or headless rclone from a machine with a
browser - use as instructed by rclone config.
.PP
+The command requires 1-3 arguments:
+.IP \[bu] 2
+fs name (e.g., \[dq]drive\[dq], \[dq]s3\[dq], etc.)
+.IP \[bu] 2
+Either a base64 encoded JSON blob obtained from a previous rclone
+config session
+.IP \[bu] 2
+Or a client_id and client_secret pair obtained from the remote service
+.PP
Use --auth-no-open-browser to prevent rclone to open auth link in
default browser automatically.
.PP
@@ -3516,7 +3626,7 @@ template is used.
.IP
.nf
\f[C]
-rclone authorize [flags]
+rclone authorize [base64_json_blob | client_id client_secret] [flags]
\f[R]
.fi
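+.PP
+For example (the client id and secret values are placeholders):
+.IP
+.nf
+\f[C]
+rclone authorize \[dq]drive\[dq]
+rclone authorize \[dq]drive\[dq] CLIENT_ID CLIENT_SECRET
+\f[R]
+.fi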
.SS Options
@@ -3711,6 +3821,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
--no-update-dir-modtime Don\[aq]t update directory modification times
@@ -3749,6 +3860,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3867,6 +3979,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -3993,6 +4106,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -4415,6 +4529,7 @@ rclone config create name type [key value]* [flags]
--continue Continue the configuration process with an answer
-h, --help help for create
--no-obscure Force any passwords not to be obscured
+ --no-output Don\[aq]t provide any output
--non-interactive Don\[aq]t interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
@@ -4652,12 +4767,12 @@ password to re-encrypt the config.
.PP
When \f[C]--password-command\f[R] is called to change the password then
the environment variable \f[C]RCLONE_PASSWORD_CHANGE=1\f[R] will be set.
-So if changing passwords programatically you can use the environment
+So if changing passwords programmatically you can use the environment
variable to distinguish which password you must supply.
.PP
Alternatively you can remove the password first (with
\f[C]rclone config encryption remove\f[R]), then set it again with this
-command which may be easier if you don\[aq]t mind the unecrypted config
+command which may be easier if you don\[aq]t mind the unencrypted config
file being on the disk briefly.
.IP
.nf
@@ -5054,6 +5169,7 @@ rclone config update name [key value]+ [flags]
--continue Continue the configuration process with an answer
-h, --help help for update
--no-obscure Force any passwords not to be obscured
+ --no-output Don\[aq]t provide any output
--non-interactive Don\[aq]t interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
@@ -5095,6 +5211,620 @@ not listed here.
.IP \[bu] 2
rclone config (https://rclone.org/commands/rclone_config/) - Enter an
interactive configuration session.
+.SH rclone convmv
+.PP
+Convert file and directory names in place.
+.SS Synopsis
+.PP
+convmv supports advanced path name transformations for converting and
+renaming files and directories by applying prefixes, suffixes, and other
+alterations.
+.PP
+.TS
+tab(@);
+lw(35.0n) lw(35.0n).
+T{
+Command
+T}@T{
+Description
+T}
+_
+T{
+\f[C]--name-transform prefix=XXXX\f[R]
+T}@T{
+Prepends XXXX to the file name.
+T}
+T{
+\f[C]--name-transform suffix=XXXX\f[R]
+T}@T{
+Appends XXXX to the file name after the extension.
+T}
+T{
+\f[C]--name-transform suffix_keep_extension=XXXX\f[R]
+T}@T{
+Appends XXXX to the file name while preserving the original file
+extension.
+T}
+T{
+\f[C]--name-transform trimprefix=XXXX\f[R]
+T}@T{
+Removes XXXX if it appears at the start of the file name.
+T}
+T{
+\f[C]--name-transform trimsuffix=XXXX\f[R]
+T}@T{
+Removes XXXX if it appears at the end of the file name.
+T}
+T{
+\f[C]--name-transform regex=/pattern/replacement/\f[R]
+T}@T{
+Applies a regex-based transformation.
+T}
+T{
+\f[C]--name-transform replace=old:new\f[R]
+T}@T{
+Replaces occurrences of old with new in the file name.
+T}
+T{
+\f[C]--name-transform date={YYYYMMDD}\f[R]
+T}@T{
+Appends or prefixes the specified date format.
+T}
+T{
+\f[C]--name-transform truncate=N\f[R]
+T}@T{
+Truncates the file name to a maximum of N characters.
+T}
+T{
+\f[C]--name-transform base64encode\f[R]
+T}@T{
+Encodes the file name in Base64.
+T}
+T{
+\f[C]--name-transform base64decode\f[R]
+T}@T{
+Decodes a Base64-encoded file name.
+T}
+T{
+\f[C]--name-transform encoder=ENCODING\f[R]
+T}@T{
+Converts the file name to the specified encoding (e.g., ISO-8859-1,
+Windows-1252, Macintosh).
+T}
+T{
+\f[C]--name-transform decoder=ENCODING\f[R]
+T}@T{
+Decodes the file name from the specified encoding.
+T}
+T{
+\f[C]--name-transform charmap=MAP\f[R]
+T}@T{
+Applies a character mapping transformation.
+T}
+T{
+\f[C]--name-transform lowercase\f[R]
+T}@T{
+Converts the file name to lowercase.
+T}
+T{
+\f[C]--name-transform uppercase\f[R]
+T}@T{
+Converts the file name to UPPERCASE.
+T}
+T{
+\f[C]--name-transform titlecase\f[R]
+T}@T{
+Converts the file name to Title Case.
+T}
+T{
+\f[C]--name-transform ascii\f[R]
+T}@T{
+Strips non-ASCII characters.
+T}
+T{
+\f[C]--name-transform url\f[R]
+T}@T{
+URL-encodes the file name.
+T}
+T{
+\f[C]--name-transform nfc\f[R]
+T}@T{
+Converts the file name to NFC Unicode normalization form.
+T}
+T{
+\f[C]--name-transform nfd\f[R]
+T}@T{
+Converts the file name to NFD Unicode normalization form.
+T}
+T{
+\f[C]--name-transform nfkc\f[R]
+T}@T{
+Converts the file name to NFKC Unicode normalization form.
+T}
+T{
+\f[C]--name-transform nfkd\f[R]
+T}@T{
+Converts the file name to NFKD Unicode normalization form.
+T}
+T{
+\f[C]--name-transform command=/path/to/my/program\f[R]
+T}@T{
+Executes an external program to transform file names.
+T}
+.TE
+.PP
+Conversion modes:
+.IP
+.nf
+\f[C]
+none
+nfc
+nfd
+nfkc
+nfkd
+replace
+prefix
+suffix
+suffix_keep_extension
+trimprefix
+trimsuffix
+index
+date
+truncate
+base64encode
+base64decode
+encoder
+decoder
+ISO-8859-1
+Windows-1252
+Macintosh
+charmap
+lowercase
+uppercase
+titlecase
+ascii
+url
+regex
+command
+\f[R]
+.fi
+.PP
+Char maps:
+.IP
+.nf
+\f[C]
+
+IBM-Code-Page-037
+IBM-Code-Page-437
+IBM-Code-Page-850
+IBM-Code-Page-852
+IBM-Code-Page-855
+Windows-Code-Page-858
+IBM-Code-Page-860
+IBM-Code-Page-862
+IBM-Code-Page-863
+IBM-Code-Page-865
+IBM-Code-Page-866
+IBM-Code-Page-1047
+IBM-Code-Page-1140
+ISO-8859-1
+ISO-8859-2
+ISO-8859-3
+ISO-8859-4
+ISO-8859-5
+ISO-8859-6
+ISO-8859-7
+ISO-8859-8
+ISO-8859-9
+ISO-8859-10
+ISO-8859-13
+ISO-8859-14
+ISO-8859-15
+ISO-8859-16
+KOI8-R
+KOI8-U
+Macintosh
+Macintosh-Cyrillic
+Windows-874
+Windows-1250
+Windows-1251
+Windows-1252
+Windows-1253
+Windows-1254
+Windows-1255
+Windows-1256
+Windows-1257
+Windows-1258
+X-User-Defined
+\f[R]
+.fi
+.PP
+Encoding masks:
+.IP
+.nf
+\f[C]
+Asterisk
+BackQuote
+BackSlash
+Colon
+CrLf
+Ctl
+Del
+Dollar
+Dot
+DoubleQuote
+Exclamation
+Hash
+InvalidUtf8
+LeftCrLfHtVt
+LeftPeriod
+LeftSpace
+LeftTilde
+LtGt
+None
+Percent
+Pipe
+Question
+Raw
+RightCrLfHtVt
+RightPeriod
+RightSpace
+Semicolon
+SingleQuote
+Slash
+SquareBracket
+\f[R]
+.fi
+.PP
+Examples:
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]all,uppercase\[dq]
+// Output: STORIES/THE QUICK BROWN FOX!.TXT
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]all,replace=Fox:Turtle\[dq] --name-transform \[dq]all,replace=Quick:Slow\[dq]
+// Output: stories/The Slow Brown Turtle!.txt
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]all,base64encode\[dq]
+// Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0\[dq] --name-transform \[dq]all,base64decode\[dq]
+// Output: stories/The Quick Brown Fox!.txt
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown \[u1F98A] Fox Went to the Caf\['e]!.txt\[dq] --name-transform \[dq]all,nfc\[dq]
+// Output: stories/The Quick Brown \[u1F98A] Fox Went to the Caf\['e]!.txt
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown \[u1F98A] Fox Went to the Caf\['e]!.txt\[dq] --name-transform \[dq]all,nfd\[dq]
+// Output: stories/The Quick Brown \[u1F98A] Fox Went to the Cafe\[u0301]!.txt
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown \[u1F98A] Fox!.txt\[dq] --name-transform \[dq]all,ascii\[dq]
+// Output: stories/The Quick Brown Fox!.txt
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]all,trimsuffix=.txt\[dq]
+// Output: stories/The Quick Brown Fox!
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]all,prefix=OLD_\[dq]
+// Output: OLD_stories/OLD_The Quick Brown Fox!.txt
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown \[u1F98A] Fox Went to the Caf\['e]!.txt\[dq] --name-transform \[dq]all,charmap=ISO-8859-7\[dq]
+// Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown Fox: A Memoir [draft].txt\[dq] --name-transform \[dq]all,encoder=Colon,SquareBracket\[dq]
+// Output: stories/The Quick Brown Fox\[uFF1A] A Memoir \[uFF3B]draft\[uFF3D].txt
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown \[u1F98A] Fox Went to the Caf\['e]!.txt\[dq] --name-transform \[dq]all,truncate=21\[dq]
+// Output: stories/The Quick Brown \[u1F98A] Fox
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]all,command=echo\[dq]
+// Output: stories/The Quick Brown Fox!.txt
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{YYYYMMDD}\[dq]
+// Output: stories/The Quick Brown Fox!-20250617
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{macfriendlytime}\[dq]
+// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]all,regex=[\[rs]\[rs].\[rs]\[rs]w]/ab\[dq]
+// Output: ababababababab/ababab ababababab ababababab ababab!abababab
+\f[R]
+.fi
+.PP
+Multiple transformations can be used in sequence, applied in the order
+they are specified on the command line.
+.PP
+The \f[C]--name-transform\f[R] flag is also available in \f[C]sync\f[R],
+\f[C]copy\f[R], and \f[C]move\f[R].
+.SH Files vs Directories
+.PP
+By default \f[C]--name-transform\f[R] will only apply to file names.
+This means only the leaf file name will be transformed.
+However some of the transforms would be better applied to the whole path
+or just directories.
+To choose which part of the file path is affected, some tags can be
+added to the \f[C]--name-transform\f[R].
+.PP
+.TS
+tab(@);
+lw(35.0n) lw(35.0n).
+T{
+Tag
+T}@T{
+Effect
+T}
+_
+T{
+\f[C]file\f[R]
+T}@T{
+Only transform the leaf name of files (DEFAULT)
+T}
+T{
+\f[C]dir\f[R]
+T}@T{
+Only transform name of directories - these may appear anywhere in the
+path
+T}
+T{
+\f[C]all\f[R]
+T}@T{
+Transform the entire path for files and directories
+T}
+.TE
+.PP
+This is used by adding the tag into the transform name like this:
+\f[C]--name-transform file,prefix=ABC\f[R] or
+\f[C]--name-transform dir,prefix=DEF\f[R].
+.PP
+For some conversions using \f[C]all\f[R] is more likely to be useful,
+for example \f[C]--name-transform all,nfc\f[R].
+.PP
+Note that \f[C]--name-transform\f[R] may not add path separators
+\f[C]/\f[R] to the name.
+This will cause an error.
+.SH Ordering and Conflicts
+.IP \[bu] 2
+Transformations will be applied in the order specified by the user.
+.RS 2
+.IP \[bu] 2
+If the \f[C]file\f[R] tag is in use (the default) then only the leaf
+name of files will be transformed.
+.IP \[bu] 2
+If the \f[C]dir\f[R] tag is in use then directories anywhere in the path
+will be transformed
+.IP \[bu] 2
+If the \f[C]all\f[R] tag is in use then directories and files anywhere
+in the path will be transformed
+.IP \[bu] 2
+Each transformation will be run one path segment at a time.
+.IP \[bu] 2
+If a transformation adds a \f[C]/\f[R] or ends up with an empty path
+segment then that will be an error.
+.RE
+.IP \[bu] 2
+It is up to the user to put the transformations in a sensible order.
+.RS 2
+.IP \[bu] 2
+Conflicting transformations, such as \f[C]prefix\f[R] followed by
+\f[C]trimprefix\f[R] or \f[C]nfc\f[R] followed by \f[C]nfd\f[R], are
+possible.
+.IP \[bu] 2
+Instead of enforcing mutual exclusivity, transformations are applied in
+sequence as specified by the user, allowing for intentional use cases
+(e.g., trimming one prefix before adding another).
+.IP \[bu] 2
+Users should be aware that certain combinations may lead to unexpected
+results and should verify transformations using \f[C]--dry-run\f[R]
+before execution.
+.RE
+.SH Race Conditions and Non-Deterministic Behavior
+.PP
+Some transformations, such as \f[C]replace=old:new\f[R], may introduce
+conflicts where multiple source files map to the same destination name.
+This can lead to race conditions when performing concurrent transfers.
+It is up to the user to anticipate these.
+.IP \[bu] 2
+If two files from the source are transformed into the same name at the
+destination, the final state may be non-deterministic.
+.IP \[bu] 2
+Running rclone check after a sync using such transformations may
+erroneously report missing or differing files due to overwritten
+results.
+.IP \[bu] 2
+To minimize risks, users should:
+.RS 2
+.IP \[bu] 2
+Carefully review transformations that may introduce conflicts.
+.IP \[bu] 2
+Use \f[C]--dry-run\f[R] to inspect changes before executing a sync (but
+keep in mind that it won\[aq]t show the effect of non-deterministic
+transformations).
+.IP \[bu] 2
+Avoid transformations that cause multiple distinct source files to map
+to the same destination name.
+.IP \[bu] 2
+Consider disabling concurrency with \f[C]--transfers=1\f[R] if
+necessary.
+.IP \[bu] 2
+Certain transformations (e.g.
+\f[C]prefix\f[R]) will have a multiplying effect every time they are
+used.
+Avoid these when using \f[C]bisync\f[R].
+.RE
+.IP
+.nf
+\f[C]
+rclone convmv dest:path --name-transform XXX [flags]
+\f[R]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+ --create-empty-src-dirs Create empty source dirs on destination after move
+ --delete-empty-src-dirs Delete empty source dirs after move
+ -h, --help help for convmv
+\f[R]
+.fi
+.PP
+Options shared with other commands are described next.
+See the global flags page (https://rclone.org/flags/) for global options
+not listed here.
+.SS Copy Options
+.PP
+Flags for anything which can copy a file
+.IP
+.nf
+\f[C]
+ --check-first Do all the checks before starting transfers
+ -c, --checksum Check for changes with size & checksum (if available, or fallback to size only)
+ --compare-dest stringArray Include additional server-side paths during comparison
+ --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
+ --ignore-case-sync Ignore case when synchronizing
+ --ignore-checksum Skip post copy check of checksums
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use modtime or checksum
+ -I, --ignore-times Don\[aq]t skip items that match size and time - transfer all unconditionally
+ --immutable Do not modify files, fail if existing files have been modified
+ --inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension
+ --max-backlog int Maximum number of objects in sync or check backlog (default 10000)
+ --max-duration Duration Maximum duration rclone will transfer data for (default 0s)
+ --max-transfer SizeSuffix Maximum size of data to transfer (default off)
+ -M, --metadata If set, preserve metadata when copying objects
+ --modify-window Duration Max time diff to be considered the same (default 1ns)
+ --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
+ --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
+ --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
+ --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
+ --no-check-dest Don\[aq]t check the destination, copy regardless
+ --no-traverse Don\[aq]t traverse destination file system on copy
+ --no-update-dir-modtime Don\[aq]t update directory modification times
+ --no-update-modtime Don\[aq]t update destination modtime if files identical
+ --order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq]
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default \[dq].partial\[dq])
+ --refresh-times Refresh the modtime of remote files
+ --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
+ --size-only Skip based on size only, not modtime or checksum
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
+ -u, --update Skip files that are newer on the destination
+\f[R]
+.fi
+.SS Important Options
+.PP
+Important flags useful for most commands
+.IP
+.nf
+\f[C]
+ -n, --dry-run Do a trial run with no permanent changes
+ -i, --interactive Enable interactive mode
+ -v, --verbose count Print lots more stuff (repeat for more)
+\f[R]
+.fi
+.SS Filter Options
+.PP
+Flags for filtering directory listings
+.IP
+.nf
+\f[C]
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+\f[R]
+.fi
+.SS Listing Options
+.PP
+Flags for listing directories
+.IP
+.nf
+\f[C]
+ --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+ --fast-list Use recursive list if available; uses more memory but fewer transactions
+\f[R]
+.fi
+.SS See Also
+.IP \[bu] 2
+rclone (https://rclone.org/commands/rclone/) - Show help for rclone
+commands, flags and backends.
.SH rclone copyto
.PP
Copy files from source to dest, skipping identical files.
@@ -5135,6 +5865,9 @@ This doesn\[aq]t transfer files that are identical on src and dst,
testing by size and modification time or MD5SUM.
It doesn\[aq]t delete files from the destination.
.PP
+\f[I]If you are looking to copy just a byte range of a file, please see
+\[aq]rclone cat --offset X --count Y\[aq]\f[R]
+.PP
\f[B]Note\f[R]: Use the \f[C]-P\f[R]/\f[C]--progress\f[R] flag to view
real-time transfer statistics
.IP
@@ -5182,6 +5915,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
--no-update-dir-modtime Don\[aq]t update directory modification times
@@ -5220,6 +5954,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -5262,9 +5997,8 @@ Setting \f[C]--auto-filename\f[R] will attempt to automatically
determine the filename from the URL (after any redirections) and used in
the destination path.
.PP
-With \f[C]--auto-filename-header\f[R] in addition, if a specific
-filename is set in HTTP headers, it will be used instead of the name
-from the URL.
+With \f[C]--header-filename\f[R] in addition, if a specific filename is
+set in HTTP headers, it will be used instead of the name from the URL.
With \f[C]--print-filename\f[R] in addition, the resulting file name
will be printed.
.PP
@@ -5273,7 +6007,7 @@ destination if there is one with the same name.
.PP
Setting \f[C]--stdout\f[R] or making the output file name \f[C]-\f[R]
will cause the output to be written to standard output.
-.SS Troublshooting
+.SS Troubleshooting
.PP
If you can\[aq]t get \f[C]rclone copyurl\f[R] to work then here are some
things you can try:
@@ -5448,6 +6182,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -5740,6 +6475,7 @@ Supported hashes are:
* whirlpool
* crc32
* sha256
+ * sha512
\f[R]
.fi
.PP
@@ -5788,6 +6524,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -6142,6 +6879,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -6349,6 +7087,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -7102,13 +7841,13 @@ If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
If using \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] note that the cache may exceed these
-quotas for two reasons.
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed
+these quotas for two reasons.
Firstly because it is only checked every
\f[C]--vfs-cache-poll-interval\f[R].
Secondly because open files cannot be evicted from the cache.
When \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] is exceeded, rclone will attempt to
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
evict the least accessed files from the cache first.
rclone will start with files that haven\[aq]t been accessed for the
longest.
@@ -7500,6 +8239,47 @@ filters so that the result is accurate.
However, this is very inefficient and may cost lots of API calls
resulting in extra charges.
Use it as a last resort and only with caching.
+.SS VFS Metadata
+.PP
+If you use the \f[C]--vfs-metadata-extension\f[R] flag you can get the
+VFS to expose files which contain the
+metadata (https://rclone.org/docs/#metadata) as a JSON blob.
+These files will not appear in the directory listing, but can be
+\f[C]stat\f[R]-ed and opened, and once they have been, they \f[B]will\f[R]
+appear in directory listings until the directory cache expires.
+.PP
+Note that some backends won\[aq]t create metadata unless you pass in the
+\f[C]--metadata\f[R] flag.
+.PP
+For example, using \f[C]rclone mount\f[R] with
+\f[C]--metadata --vfs-metadata-extension .metadata\f[R] we get
+.IP
+.nf
+\f[C]
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ \[dq]atime\[dq]: \[dq]2025-03-04T17:34:22.317069787Z\[dq],
+ \[dq]btime\[dq]: \[dq]2025-03-03T16:03:37.708253808Z\[dq],
+ \[dq]gid\[dq]: \[dq]1000\[dq],
+ \[dq]mode\[dq]: \[dq]100664\[dq],
+ \[dq]mtime\[dq]: \[dq]2025-03-03T16:03:39.640238323Z\[dq],
+ \[dq]uid\[dq]: \[dq]1000\[dq]
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+\f[R]
+.fi
+.PP
+If the file has no metadata it will be returned as \f[C]{}\f[R] and if
+there is an error reading the metadata the error will be returned as
+\f[C]{\[dq]error\[dq]:\[dq]error string\[dq]}\f[R].
.IP
.nf
\f[C]
@@ -7552,6 +8332,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -7583,6 +8364,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -7693,6 +8475,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
--no-update-dir-modtime Don\[aq]t update directory modification times
@@ -7731,6 +8514,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -7868,6 +8652,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -8622,13 +9407,13 @@ If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
If using \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] note that the cache may exceed these
-quotas for two reasons.
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed
+these quotas for two reasons.
Firstly because it is only checked every
\f[C]--vfs-cache-poll-interval\f[R].
Secondly because open files cannot be evicted from the cache.
When \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] is exceeded, rclone will attempt to
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
evict the least accessed files from the cache first.
rclone will start with files that haven\[aq]t been accessed for the
longest.
@@ -9020,6 +9805,47 @@ filters so that the result is accurate.
However, this is very inefficient and may cost lots of API calls
resulting in extra charges.
Use it as a last resort and only with caching.
+.SS VFS Metadata
+.PP
+If you use the \f[C]--vfs-metadata-extension\f[R] flag you can get the
+VFS to expose files which contain the
+metadata (https://rclone.org/docs/#metadata) as a JSON blob.
+These files will not appear in the directory listing, but can be
+\f[C]stat\f[R]-ed and opened, and once they have been, they \f[B]will\f[R]
+appear in directory listings until the directory cache expires.
+.PP
+Note that some backends won\[aq]t create metadata unless you pass in the
+\f[C]--metadata\f[R] flag.
+.PP
+For example, using \f[C]rclone mount\f[R] with
+\f[C]--metadata --vfs-metadata-extension .metadata\f[R] we get
+.IP
+.nf
+\f[C]
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ \[dq]atime\[dq]: \[dq]2025-03-04T17:34:22.317069787Z\[dq],
+ \[dq]btime\[dq]: \[dq]2025-03-03T16:03:37.708253808Z\[dq],
+ \[dq]gid\[dq]: \[dq]1000\[dq],
+ \[dq]mode\[dq]: \[dq]100664\[dq],
+ \[dq]mtime\[dq]: \[dq]2025-03-03T16:03:39.640238323Z\[dq],
+ \[dq]uid\[dq]: \[dq]1000\[dq]
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+\f[R]
+.fi
+.PP
+If the file has no metadata it will be returned as \f[C]{}\f[R] and if
+there is an error reading the metadata the error will be returned as
+\f[C]{\[dq]error\[dq]:\[dq]error string\[dq]}\f[R].
.IP
.nf
\f[C]
@@ -9077,6 +9903,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -9108,6 +9935,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -9616,7 +10444,14 @@ You can either use an htpasswd file which can take lots of users, or set
a single username and password with the \f[C]--rc-user\f[R] and
\f[C]--rc-pass\f[R] flags.
.PP
-If no static users are configured by either of the above methods, and
+Alternatively, you can have the reverse proxy manage authentication and
+use the username provided in the configured header with
+\f[C]--rc-user-from-header\f[R] (e.g.,
+\f[C]--rc-user-from-header=x-remote-user\f[R]).
+Ensure the proxy is trusted and headers cannot be spoofed, as
+misconfiguration may lead to unauthorized access.
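+.PP
+For example, a minimal sketch assuming a trusted reverse proxy in front
+of rclone that sets the \f[C]X-Remote-User\f[R] header:
+.IP
+.nf
+\f[C]
+rclone rcd --rc-addr 127.0.0.1:5572 --rc-user-from-header=x-remote-user
+\f[R]
+.fi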
+.PP
+If none of the above authentication methods is configured and
client certificates are required by the \f[C]--client-ca\f[R] flag
passed to the server, the client certificate common name will be
considered as the username.
@@ -9690,6 +10525,7 @@ Flags to control the Remote Control API
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
+ --rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default \[dq]https://api.github.com/repos/rclone/rclone-webui-react/releases/latest\[dq])
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
@@ -10071,13 +10907,13 @@ If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
If using \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] note that the cache may exceed these
-quotas for two reasons.
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed
+these quotas for two reasons.
Firstly because it is only checked every
\f[C]--vfs-cache-poll-interval\f[R].
Secondly because open files cannot be evicted from the cache.
When \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] is exceeded, rclone will attempt to
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
evict the least accessed files from the cache first.
rclone will start with files that haven\[aq]t been accessed for the
longest.
@@ -10469,6 +11305,47 @@ filters so that the result is accurate.
However, this is very inefficient and may cost lots of API calls
resulting in extra charges.
Use it as a last resort and only with caching.
+.SS VFS Metadata
+.PP
+If you use the \f[C]--vfs-metadata-extension\f[R] flag you can get the
+VFS to expose files which contain the
+metadata (https://rclone.org/docs/#metadata) as a JSON blob.
+These files will not appear in the directory listing, but can be
+\f[C]stat\f[R]-ed and opened, and once they have been, they \f[B]will\f[R]
+appear in directory listings until the directory cache expires.
+.PP
+Note that some backends won\[aq]t create metadata unless you pass in the
+\f[C]--metadata\f[R] flag.
+.PP
+For example, using \f[C]rclone mount\f[R] with
+\f[C]--metadata --vfs-metadata-extension .metadata\f[R] we get
+.IP
+.nf
+\f[C]
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ \[dq]atime\[dq]: \[dq]2025-03-04T17:34:22.317069787Z\[dq],
+ \[dq]btime\[dq]: \[dq]2025-03-03T16:03:37.708253808Z\[dq],
+ \[dq]gid\[dq]: \[dq]1000\[dq],
+ \[dq]mode\[dq]: \[dq]100664\[dq],
+ \[dq]mtime\[dq]: \[dq]2025-03-03T16:03:39.640238323Z\[dq],
+ \[dq]uid\[dq]: \[dq]1000\[dq]
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+\f[R]
+.fi
+.PP
+If the file has no metadata it will be returned as \f[C]{}\f[R] and if
+there is an error reading the metadata the error will be returned as
+\f[C]{\[dq]error\[dq]:\[dq]error string\[dq]}\f[R].
.IP
.nf
\f[C]
@@ -10507,6 +11384,7 @@ rclone serve dlna remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -10536,6 +11414,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -10732,13 +11611,13 @@ If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
If using \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] note that the cache may exceed these
-quotas for two reasons.
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed
+these quotas for two reasons.
Firstly because it is only checked every
\f[C]--vfs-cache-poll-interval\f[R].
Secondly because open files cannot be evicted from the cache.
When \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] is exceeded, rclone will attempt to
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
evict the least accessed files from the cache first.
rclone will start with files that haven\[aq]t been accessed for the
longest.
@@ -11130,6 +12009,47 @@ filters so that the result is accurate.
However, this is very inefficient and may cost lots of API calls
resulting in extra charges.
Use it as a last resort and only with caching.
+.SS VFS Metadata
+.PP
+If you use the \f[C]--vfs-metadata-extension\f[R] flag you can get the
+VFS to expose files which contain the
+metadata (https://rclone.org/docs/#metadata) as a JSON blob.
+These files will not appear in the directory listing, but can be
+\f[C]stat\f[R]-ed and opened, and once they have been, they \f[B]will\f[R]
+appear in directory listings until the directory cache expires.
+.PP
+Note that some backends won\[aq]t create metadata unless you pass in the
+\f[C]--metadata\f[R] flag.
+.PP
+For example, using \f[C]rclone mount\f[R] with
+\f[C]--metadata --vfs-metadata-extension .metadata\f[R] we get
+.IP
+.nf
+\f[C]
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ \[dq]atime\[dq]: \[dq]2025-03-04T17:34:22.317069787Z\[dq],
+ \[dq]btime\[dq]: \[dq]2025-03-03T16:03:37.708253808Z\[dq],
+ \[dq]gid\[dq]: \[dq]1000\[dq],
+ \[dq]mode\[dq]: \[dq]100664\[dq],
+ \[dq]mtime\[dq]: \[dq]2025-03-03T16:03:39.640238323Z\[dq],
+ \[dq]uid\[dq]: \[dq]1000\[dq]
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+\f[R]
+.fi
+.PP
+If the file has no metadata it will be returned as \f[C]{}\f[R] and if
+there is an error reading the metadata the error will be returned as
+\f[C]{\[dq]error\[dq]:\[dq]error string\[dq]}\f[R].
.IP
.nf
\f[C]
@@ -11187,6 +12107,7 @@ rclone serve docker [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -11218,6 +12139,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -11384,13 +12306,13 @@ If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
If using \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] note that the cache may exceed these
-quotas for two reasons.
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed
+these quotas for two reasons.
Firstly because it is only checked every
\f[C]--vfs-cache-poll-interval\f[R].
Secondly because open files cannot be evicted from the cache.
When \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] is exceeded, rclone will attempt to
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
evict the least accessed files from the cache first.
rclone will start with files that haven\[aq]t been accessed for the
longest.
@@ -11782,6 +12704,47 @@ filters so that the result is accurate.
However, this is very inefficient and may cost lots of API calls
resulting in extra charges.
Use it as a last resort and only with caching.
+.SS VFS Metadata
+.PP
+If you use the \f[C]--vfs-metadata-extension\f[R] flag you can get the
+VFS to expose files which contain the
+metadata (https://rclone.org/docs/#metadata) as a JSON blob.
+These files will not appear in the directory listing, but can be
+\f[C]stat\f[R]-ed and opened, and once they have been, they \f[B]will\f[R]
+appear in directory listings until the directory cache expires.
+.PP
+Note that some backends won\[aq]t create metadata unless you pass in the
+\f[C]--metadata\f[R] flag.
+.PP
+For example, using \f[C]rclone mount\f[R] with
+\f[C]--metadata --vfs-metadata-extension .metadata\f[R] we get
+.IP
+.nf
+\f[C]
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ \[dq]atime\[dq]: \[dq]2025-03-04T17:34:22.317069787Z\[dq],
+ \[dq]btime\[dq]: \[dq]2025-03-03T16:03:37.708253808Z\[dq],
+ \[dq]gid\[dq]: \[dq]1000\[dq],
+ \[dq]mode\[dq]: \[dq]100664\[dq],
+ \[dq]mtime\[dq]: \[dq]2025-03-03T16:03:39.640238323Z\[dq],
+ \[dq]uid\[dq]: \[dq]1000\[dq]
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+\f[R]
+.fi
+.PP
+If the file has no metadata it will be returned as \f[C]{}\f[R] and if
+there is an error reading the metadata the error will be returned as
+\f[C]{\[dq]error\[dq]:\[dq]error string\[dq]}\f[R].
.SS Auth Proxy
.PP
If you supply the parameter \f[C]--auth-proxy /path/to/program\f[R] then
@@ -11916,6 +12879,7 @@ rclone serve ftp remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -11945,6 +12909,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -12201,7 +13166,14 @@ You can either use an htpasswd file which can take lots of users, or set
a single username and password with the \f[C]--user\f[R] and
\f[C]--pass\f[R] flags.
.PP
-If no static users are configured by either of the above methods, and
+Alternatively, you can have the reverse proxy manage authentication and
+use the username provided in the configured header with
+\f[C]--user-from-header\f[R] (e.g.,
+\f[C]--user-from-header=x-remote-user\f[R]).
+Ensure the proxy is trusted and headers cannot be spoofed, as
+misconfiguration may lead to unauthorized access.
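+.PP
+For example, a minimal sketch assuming a trusted reverse proxy in front
+of rclone that sets the \f[C]X-Remote-User\f[R] header:
+.IP
+.nf
+\f[C]
+rclone serve http remote:path --addr 127.0.0.1:8080 --user-from-header=x-remote-user
+\f[R]
+.fi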
+.PP
+If none of the above authentication methods is configured and
client certificates are required by the \f[C]--client-ca\f[R] flag
passed to the server, the client certificate common name will be
considered as the username.
@@ -12349,13 +13321,13 @@ If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
If using \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] note that the cache may exceed these
-quotas for two reasons.
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed
+these quotas for two reasons.
Firstly because it is only checked every
\f[C]--vfs-cache-poll-interval\f[R].
Secondly because open files cannot be evicted from the cache.
When \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] is exceeded, rclone will attempt to
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
evict the least accessed files from the cache first.
rclone will start with files that haven\[aq]t been accessed for the
longest.
@@ -12747,6 +13719,47 @@ filters so that the result is accurate.
However, this is very inefficient and may cost lots of API calls
resulting in extra charges.
Use it as a last resort and only with caching.
+.SS VFS Metadata
+.PP
+If you use the \f[C]--vfs-metadata-extension\f[R] flag you can get the
+VFS to expose files which contain the
+metadata (https://rclone.org/docs/#metadata) as a JSON blob.
+These files will not appear in the directory listing, but can be
+\f[C]stat\f[R]-ed and opened, and once they have been, they \f[B]will\f[R]
+appear in directory listings until the directory cache expires.
+.PP
+Note that some backends won\[aq]t create metadata unless you pass in the
+\f[C]--metadata\f[R] flag.
+.PP
+For example, using \f[C]rclone mount\f[R] with
+\f[C]--metadata --vfs-metadata-extension .metadata\f[R] we get
+.IP
+.nf
+\f[C]
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ \[dq]atime\[dq]: \[dq]2025-03-04T17:34:22.317069787Z\[dq],
+ \[dq]btime\[dq]: \[dq]2025-03-03T16:03:37.708253808Z\[dq],
+ \[dq]gid\[dq]: \[dq]1000\[dq],
+ \[dq]mode\[dq]: \[dq]100664\[dq],
+ \[dq]mtime\[dq]: \[dq]2025-03-03T16:03:39.640238323Z\[dq],
+ \[dq]uid\[dq]: \[dq]1000\[dq]
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+\f[R]
+.fi
+.PP
+If the file has no metadata it will be returned as \f[C]{}\f[R] and if
+there is an error reading the metadata the error will be returned as
+\f[C]{\[dq]error\[dq]:\[dq]error string\[dq]}\f[R].
.SS Auth Proxy
.PP
If you supply the parameter \f[C]--auth-proxy /path/to/program\f[R] then
@@ -12850,19 +13863,19 @@ rclone serve http remote:path [flags]
.IP
.nf
\f[C]
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
@@ -12880,6 +13893,7 @@ rclone serve http remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -12890,6 +13904,7 @@ rclone serve http remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -12919,6 +13934,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -12993,7 +14009,8 @@ which improves performance.
This sort of cache can\[aq]t be backed up and restored as the underlying
handles will change.
This is Linux only.
-It requres running rclone as root or with \f[C]CAP_DAC_READ_SEARCH\f[R].
+It requires running rclone as root or with
+\f[C]CAP_DAC_READ_SEARCH\f[R].
You can grant rclone this extra permission by running the following on
the rclone binary:
\f[C]sudo setcap cap_dac_read_search+ep /path/to/rclone\f[R].
@@ -13027,6 +14044,13 @@ Where \f[C]$PORT\f[R] is the same port number used in the
\f[C]serve nfs\f[R] command and \f[C]$HOSTNAME\f[R] is the network
address of the machine that \f[C]serve nfs\f[R] was run on.
.PP
+If \f[C]--vfs-metadata-extension\f[R] is in use, then with
+\f[C]--nfs-cache-type disk\f[R] and \f[C]--nfs-cache-type cache\f[R] the
+metadata files will have the file handle of their parent file suffixed
+with \f[C]0x00, 0x00, 0x00, 0x01\f[R].
+This means they can be looked up directly from the parent file handle if
+desired.
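+.PP
+For example, if a file\[aq]s handle were \f[C]01 02 03 04\f[R], the
+handle of its metadata file would be
+\f[C]01 02 03 04 00 00 00 01\f[R].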
+.PP
This command is only available on Unix platforms.
.SS VFS - Virtual File System
.PP
@@ -13150,13 +14174,13 @@ If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
If using \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] note that the cache may exceed these
-quotas for two reasons.
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed
+these quotas for two reasons.
Firstly because it is only checked every
\f[C]--vfs-cache-poll-interval\f[R].
Secondly because open files cannot be evicted from the cache.
When \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] is exceeded, rclone will attempt to
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
evict the least accessed files from the cache first.
rclone will start with files that haven\[aq]t been accessed for the
longest.
@@ -13548,6 +14572,47 @@ filters so that the result is accurate.
However, this is very inefficient and may cost lots of API calls
resulting in extra charges.
Use it as a last resort and only with caching.
+.SS VFS Metadata
+.PP
+If you use the \f[C]--vfs-metadata-extension\f[R] flag you can get the
+VFS to expose files which contain the
+metadata (https://rclone.org/docs/#metadata) as a JSON blob.
+These files will not appear in the directory listing, but can be
+\f[C]stat\f[R]-ed and opened, and once they have been, they \f[B]will\f[R]
+appear in directory listings until the directory cache expires.
+.PP
+Note that some backends won\[aq]t create metadata unless you pass in the
+\f[C]--metadata\f[R] flag.
+.PP
+For example, using \f[C]rclone mount\f[R] with
+\f[C]--metadata --vfs-metadata-extension .metadata\f[R] we get
+.IP
+.nf
+\f[C]
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ \[dq]atime\[dq]: \[dq]2025-03-04T17:34:22.317069787Z\[dq],
+ \[dq]btime\[dq]: \[dq]2025-03-03T16:03:37.708253808Z\[dq],
+ \[dq]gid\[dq]: \[dq]1000\[dq],
+ \[dq]mode\[dq]: \[dq]100664\[dq],
+ \[dq]mtime\[dq]: \[dq]2025-03-03T16:03:39.640238323Z\[dq],
+ \[dq]uid\[dq]: \[dq]1000\[dq]
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+\f[R]
+.fi
+.PP
+If the file has no metadata it will be returned as \f[C]{}\f[R] and if
+there is an error reading the metadata the error will be returned as
+\f[C]{\[dq]error\[dq]:\[dq]error string\[dq]}\f[R].
.IP
.nf
\f[C]
@@ -13585,6 +14650,7 @@ rclone serve nfs remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -13614,6 +14680,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -13818,7 +14885,14 @@ You can either use an htpasswd file which can take lots of users, or set
a single username and password with the \f[C]--user\f[R] and
\f[C]--pass\f[R] flags.
.PP
-If no static users are configured by either of the above methods, and
+Alternatively, you can have the reverse proxy manage authentication and
+use the username provided in the configured header with
+\f[C]--user-from-header\f[R] (e.g.,
+\f[C]--user-from-header=x-remote-user\f[R]).
+Ensure the proxy is trusted and headers cannot be spoofed, as
+misconfiguration may lead to unauthorized access.
+.PP
+If none of the above authentication methods is configured and
client certificates are required by the \f[C]--client-ca\f[R] flag
passed to the server, the client certificate common name will be
considered as the username.
@@ -13854,16 +14928,16 @@ rclone serve restic remote:path [flags]
.IP
.nf
\f[C]
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
--pass string Password for authentication
@@ -13874,6 +14948,7 @@ rclone serve restic remote:path [flags]
--server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--stdio Run an HTTP2 server on stdin/stdout
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
\f[R]
.fi
.PP
@@ -13973,7 +15048,7 @@ use_multipart_uploads = false
\f[R]
.fi
.PP
-Note that setting \f[C]disable_multipart_uploads = true\f[R] is to work
+Note that setting \f[C]use_multipart_uploads = false\f[R] is to work
around a bug which will be fixed in due course.
.SS Bugs
.PP
@@ -14064,7 +15139,14 @@ You can either use an htpasswd file which can take lots of users, or set
a single username and password with the \f[C]--user\f[R] and
\f[C]--pass\f[R] flags.
.PP
-If no static users are configured by either of the above methods, and
+Alternatively, you can have the reverse proxy manage authentication and
+use the username provided in the configured header with
+\f[C]--user-from-header\f[R] (e.g.,
+\f[C]--user-from-header=x-remote-user\f[R]).
+Ensure the proxy is trusted and headers cannot be spoofed, as
+misconfiguration may lead to unauthorized access.
+.PP
+If none of the above authentication methods is configured and
client certificates are required by the \f[C]--client-ca\f[R] flag
passed to the server, the client certificate common name will be
considered as the username.
@@ -14289,13 +15371,13 @@ If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
If using \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] note that the cache may exceed these
-quotas for two reasons.
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed
+these quotas for two reasons.
Firstly because it is only checked every
\f[C]--vfs-cache-poll-interval\f[R].
Secondly because open files cannot be evicted from the cache.
When \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] is exceeded, rclone will attempt to
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
evict the least accessed files from the cache first.
rclone will start with files that haven\[aq]t been accessed for the
longest.
@@ -14687,6 +15769,47 @@ filters so that the result is accurate.
However, this is very inefficient and may cost lots of API calls
resulting in extra charges.
Use it as a last resort and only with caching.
+.SS VFS Metadata
+.PP
+If you use the \f[C]--vfs-metadata-extension\f[R] flag you can get the
+VFS to expose files which contain the
+metadata (https://rclone.org/docs/#metadata) as a JSON blob.
+These files will not appear in the directory listing, but can be
+\f[C]stat\f[R]-ed and opened, and once they have been, they \f[B]will\f[R]
+appear in directory listings until the directory cache expires.
+.PP
+Note that some backends won\[aq]t create metadata unless you pass in the
+\f[C]--metadata\f[R] flag.
+.PP
+For example, using \f[C]rclone mount\f[R] with
+\f[C]--metadata --vfs-metadata-extension .metadata\f[R] we get
+.IP
+.nf
+\f[C]
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ \[dq]atime\[dq]: \[dq]2025-03-04T17:34:22.317069787Z\[dq],
+ \[dq]btime\[dq]: \[dq]2025-03-03T16:03:37.708253808Z\[dq],
+ \[dq]gid\[dq]: \[dq]1000\[dq],
+ \[dq]mode\[dq]: \[dq]100664\[dq],
+ \[dq]mtime\[dq]: \[dq]2025-03-03T16:03:39.640238323Z\[dq],
+ \[dq]uid\[dq]: \[dq]1000\[dq]
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+\f[R]
+.fi
+.PP
+If the file has no metadata it will be returned as \f[C]{}\f[R] and if
+there is an error reading the metadata the error will be returned as
+\f[C]{\[dq]error\[dq]:\[dq]error string\[dq]}\f[R].
.IP
.nf
\f[C]
@@ -14697,22 +15820,22 @@ rclone serve s3 remote:path [flags]
.IP
.nf
\f[C]
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--etag-hash string Which hash to use for the ETag, or auto or blank for off (default \[dq]MD5\[dq])
--file-perms FileMode File permissions (default 666)
- --force-path-style If true use path style access if false use virtual hosted style (default true) (default true)
+ --force-path-style If true use path style access if false use virtual hosted style (default true)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for s3
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
@@ -14730,6 +15853,7 @@ rclone serve s3 remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -14740,6 +15864,7 @@ rclone serve s3 remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -14769,6 +15894,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -14995,13 +16121,13 @@ If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
If using \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] note that the cache may exceed these
-quotas for two reasons.
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed
+these quotas for two reasons.
Firstly because it is only checked every
\f[C]--vfs-cache-poll-interval\f[R].
Secondly because open files cannot be evicted from the cache.
When \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] is exceeded, rclone will attempt to
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
evict the least accessed files from the cache first.
rclone will start with files that haven\[aq]t been accessed for the
longest.
@@ -15393,6 +16519,47 @@ filters so that the result is accurate.
However, this is very inefficient and may cost lots of API calls
resulting in extra charges.
Use it as a last resort and only with caching.
+.SS VFS Metadata
+.PP
+If you use the \f[C]--vfs-metadata-extension\f[R] flag you can get the
+VFS to expose files which contain the
+metadata (https://rclone.org/docs/#metadata) as a JSON blob.
+These files will not appear in the directory listing, but can be
+\f[C]stat\f[R]-ed and opened, and once they have been, they \f[B]will\f[R]
+appear in directory listings until the directory cache expires.
+.PP
+Note that some backends won\[aq]t create metadata unless you pass in the
+\f[C]--metadata\f[R] flag.
+.PP
+For example, using \f[C]rclone mount\f[R] with
+\f[C]--metadata --vfs-metadata-extension .metadata\f[R] we get
+.IP
+.nf
+\f[C]
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ \[dq]atime\[dq]: \[dq]2025-03-04T17:34:22.317069787Z\[dq],
+ \[dq]btime\[dq]: \[dq]2025-03-03T16:03:37.708253808Z\[dq],
+ \[dq]gid\[dq]: \[dq]1000\[dq],
+ \[dq]mode\[dq]: \[dq]100664\[dq],
+ \[dq]mtime\[dq]: \[dq]2025-03-03T16:03:39.640238323Z\[dq],
+ \[dq]uid\[dq]: \[dq]1000\[dq]
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+\f[R]
+.fi
+.PP
+If the file has no metadata it will be returned as \f[C]{}\f[R] and if
+there is an error reading the metadata the error will be returned as
+\f[C]{\[dq]error\[dq]:\[dq]error string\[dq]}\f[R].
.SS Auth Proxy
.PP
If you supply the parameter \f[C]--auth-proxy /path/to/program\f[R] then
@@ -15527,6 +16694,7 @@ rclone serve sftp remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -15556,6 +16724,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -15865,7 +17034,14 @@ You can either use an htpasswd file which can take lots of users, or set
a single username and password with the \f[C]--user\f[R] and
\f[C]--pass\f[R] flags.
.PP
-If no static users are configured by either of the above methods, and
+Alternatively, you can have the reverse proxy manage authentication and
+use the username provided in the configured header with
+\f[C]--user-from-header\f[R] (e.g.,
+\f[C]--user-from-header=x-remote-user\f[R]).
+Ensure the proxy is trusted and headers cannot be spoofed, as
+misconfiguration may lead to unauthorized access.
+.PP
+If none of the above authentication methods is configured and
client certificates are required by the \f[C]--client-ca\f[R] flag
passed to the server, the client certificate common name will be
considered as the username.
@@ -16013,13 +17189,13 @@ If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
If using \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] note that the cache may exceed these
-quotas for two reasons.
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed
+these quotas for two reasons.
Firstly because it is only checked every
\f[C]--vfs-cache-poll-interval\f[R].
Secondly because open files cannot be evicted from the cache.
When \f[C]--vfs-cache-max-size\f[R] or
-\f[C]--vfs-cache-min-free-size\f[R] is exceeded, rclone will attempt to
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
evict the least accessed files from the cache first.
rclone will start with files that haven\[aq]t been accessed for the
longest.
@@ -16411,6 +17587,47 @@ filters so that the result is accurate.
However, this is very inefficient and may cost lots of API calls
resulting in extra charges.
Use it as a last resort and only with caching.
+.SS VFS Metadata
+.PP
+If you use the \f[C]--vfs-metadata-extension\f[R] flag you can get the
+VFS to expose files which contain the
+metadata (https://rclone.org/docs/#metadata) as a JSON blob.
+These files will not appear in the directory listing, but can be
+\f[C]stat\f[R]-ed and opened, and once they have been, they \f[B]will\f[R]
+appear in directory listings until the directory cache expires.
+.PP
+Note that some backends won\[aq]t create metadata unless you pass in the
+\f[C]--metadata\f[R] flag.
+.PP
+For example, using \f[C]rclone mount\f[R] with
+\f[C]--metadata --vfs-metadata-extension .metadata\f[R] we get
+.IP
+.nf
+\f[C]
+$ ls -l /mnt/
+total 1048577
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+
+$ cat /mnt/1G.metadata
+{
+ \[dq]atime\[dq]: \[dq]2025-03-04T17:34:22.317069787Z\[dq],
+ \[dq]btime\[dq]: \[dq]2025-03-03T16:03:37.708253808Z\[dq],
+ \[dq]gid\[dq]: \[dq]1000\[dq],
+ \[dq]mode\[dq]: \[dq]100664\[dq],
+ \[dq]mtime\[dq]: \[dq]2025-03-03T16:03:39.640238323Z\[dq],
+ \[dq]uid\[dq]: \[dq]1000\[dq]
+}
+
+$ ls -l /mnt/
+total 1048578
+-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
+-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
+\f[R]
+.fi
+.PP
+If the file has no metadata it will be returned as \f[C]{}\f[R] and if
+there is an error reading the metadata the error will be returned as
+\f[C]{\[dq]error\[dq]:\[dq]error string\[dq]}\f[R].
.SS Auth Proxy
.PP
If you supply the parameter \f[C]--auth-proxy /path/to/program\f[R] then
@@ -16514,12 +17731,12 @@ rclone serve webdav remote:path [flags]
.IP
.nf
\f[C]
- --addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
+ --addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
- --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
@@ -16528,7 +17745,7 @@ rclone serve webdav remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string Path to TLS PEM private key file
+ --key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
@@ -16546,6 +17763,7 @@ rclone serve webdav remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
+ --user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -16556,6 +17774,7 @@ rclone serve webdav remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+ --vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -16585,6 +17804,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -16920,6 +18140,7 @@ If \f[C]--recursive\f[R] is used then recursively sets the modification
time on all existing files that is found under the path.
Filters are supported, and you can test with the \f[C]--dry-run\f[R] or
the \f[C]--interactive\f[R]/\f[C]-i\f[R] flag.
+This will touch \f[C]--transfers\f[R] files concurrently.
.PP
If \f[C]--timestamp\f[R] is used then sets the modification time to that
time instead of the current time.
@@ -16982,6 +18203,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -17098,6 +18320,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -17806,6 +19029,11 @@ It is also possible to specify \f[C]--boolean=false\f[R] or
Note that \f[C]--boolean false\f[R] is not valid - this is parsed as
\f[C]--boolean\f[R] and the \f[C]false\f[R] is parsed as an extra
command line argument for rclone.
+.PP
+Options documented to take a \f[C]stringArray\f[R] parameter accept
+multiple values.
+To pass more than one value, repeat the option; for example:
+\f[C]--include value1 --include value2\f[R].
.SS Time or duration options
.PP
TIME or DURATION options can be specified as a duration string or a time
@@ -18239,6 +19467,8 @@ passwd file), or else use the result from shell command
.PP
If you run \f[C]rclone config file\f[R] you will see where the default
location is for you.
+Running \f[C]rclone config touch\f[R] will ensure a configuration file
+exists, creating an empty one in the default location if there is none.
.PP
The fact that an existing file \f[C]rclone.conf\f[R] in the same
directory as the rclone executable is always preferred, means that it is
@@ -18249,7 +19479,15 @@ to a writable directory and then create an empty file
If the location is set to empty string \f[C]\[dq]\[dq]\f[R] or path to a
file with name \f[C]notfound\f[R], or the os null device represented by
value \f[C]NUL\f[R] on Windows and \f[C]/dev/null\f[R] on Unix systems,
-then rclone will keep the config file in memory only.
+then rclone will keep the configuration file in memory only.
+.PP
+You may see a log message \[dq]Config file not found - using
+defaults\[dq] if there is no configuration file.
+This can be suppressed, e.g.
+if you are using rclone entirely with on the fly
+remotes (https://rclone.org/docs/#backend-path-to-dir), by using a
+memory-only configuration file or by creating an empty configuration
+file, as described above.
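+.PP
+For example, to suppress the message by pointing rclone at the null
+device while using an on the fly remote (a minimal sketch; the URL is
+illustrative):
+.IP
+.nf
+\f[C]
+rclone --config /dev/null lsf :http,url=https://example.com:
+\f[R]
+.fi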
.PP
The file format is basic
INI (https://en.wikipedia.org/wiki/INI_file#Format): Sections of text,
@@ -18797,6 +20035,20 @@ supported backends and the VFS.
There are individual flags for just enabling it for the VFS
\f[C]--vfs-links\f[R] and the local backend \f[C]--local-links\f[R] if
required.
+.SS --list-cutoff N
+.PP
+When syncing rclone needs to sort directory entries before comparing
+them.
+Below this threshold (1,000,000) by default, rclone will store the
+directory entries in memory.
+1,000,000 entries will take approx 1GB of RAM to store.
+Above this threshold rclone will store directory entries on disk and
+sort them without using a lot of memory.
+.PP
+Doing this is slightly less efficient then sorting them in memory and
+will only work well for the bucket based backends (eg s3, b2, azureblob,
+swift) but these are the only backends likely to have millions of
+entries in a directory.
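+.PP
+For example, to spill directory listings to disk sooner on a
+memory-constrained machine (a minimal sketch; the remote names are
+illustrative):
+.IP
+.nf
+\f[C]
+rclone sync --list-cutoff 100000 s3:src-bucket s3:dst-bucket
+\f[R]
+.fi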
.SS --log-file=FILE
.PP
Log all of rclone\[aq]s output to FILE.
@@ -18813,15 +20065,33 @@ as rclone doesn\[aq]t have a signal to rotate logs.
.SS --log-format LIST
.PP
Comma separated list of log format options.
-Accepted options are \f[C]date\f[R], \f[C]time\f[R],
-\f[C]microseconds\f[R], \f[C]pid\f[R], \f[C]longfile\f[R],
-\f[C]shortfile\f[R], \f[C]UTC\f[R].
-Any other keywords will be silently ignored.
-\f[C]pid\f[R] will tag log messages with process identifier which useful
-with \f[C]rclone mount --daemon\f[R].
-Other accepted options are explained in the go
-documentation (https://pkg.go.dev/log#pkg-constants).
-The default log format is \[dq]\f[C]date\f[R],\f[C]time\f[R]\[dq].
+The accepted options are:
+.IP \[bu] 2
+\f[C]date\f[R] - Add a date in the format YYYY/MM/DD to the log.
+.IP \[bu] 2
+\f[C]time\f[R] - Add a time to the log in format HH:MM:SS.
+.IP \[bu] 2
+\f[C]microseconds\f[R] - Add microseconds to the time in format
+HH:MM:SS.SSSSSS.
+.IP \[bu] 2
+\f[C]UTC\f[R] - Make the logs in UTC not localtime.
+.IP \[bu] 2
+\f[C]longfile\f[R] - Adds the full source file path and line number of
+the log statement.
+.IP \[bu] 2
+\f[C]shortfile\f[R] - Adds the final element of the source file path
+and line number of the log statement.
+.IP \[bu] 2
+\f[C]pid\f[R] - Add the process ID to the log - useful with
+\f[C]rclone mount --daemon\f[R].
+.IP \[bu] 2
+\f[C]nolevel\f[R] - Don\[aq]t add the level to the log.
+.IP \[bu] 2
+\f[C]json\f[R] - Equivalent to adding \f[C]--use-json-log\f[R].
+.PP
+They are added to the log line in the order above.
+.PP
+The default log format is \f[C]\[dq]date,time\[dq]\f[R].
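+.PP
+For example, to add microseconds and the process ID to the default
+format (a minimal sketch; the remote names are illustrative):
+.IP
+.nf
+\f[C]
+rclone copy -vv --log-format date,time,microseconds,pid source:path destination:path
+\f[R]
+.fi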
.SS --log-level LEVEL
.PP
This sets the log level for rclone.
@@ -18842,10 +20112,108 @@ It outputs warnings and significant events.
.PP
\f[C]ERROR\f[R] is equivalent to \f[C]-q\f[R].
It only outputs error messages.
+.SS --windows-event-log LEVEL
+.PP
+If this is configured (the default is \f[C]OFF\f[R]) then logs of this
+level and above will be logged to the Windows event log in
+\f[B]addition\f[R] to the normal logs.
+These will be logged in JSON format as described below regardless of
+what format the main logs are configured for.
+.PP
+The Windows event log only has 3 levels of severity \f[C]Info\f[R],
+\f[C]Warning\f[R] and \f[C]Error\f[R].
+If enabled, rclone maps its levels like this:
+.IP \[bu] 2
+\f[C]Error\f[R] \[<-] \f[C]ERROR\f[R] (and above)
+.IP \[bu] 2
+\f[C]Warning\f[R] \[<-] \f[C]WARNING\f[R] (note that this level is
+defined but not currently used).
+.IP \[bu] 2
+\f[C]Info\f[R] \[<-] \f[C]NOTICE\f[R], \f[C]INFO\f[R] and
+\f[C]DEBUG\f[R].
+.PP
+Rclone will declare its log source as \[dq]rclone\[dq] if it has
+enough permissions to create the registry key needed.
+If not then logs will appear as \[dq]Application\[dq].
+You can run \f[C]rclone version --windows-event-log DEBUG\f[R] once as
+administrator to create the registry key in advance.
+.PP
+\f[B]Note\f[R] that the \f[C]--windows-event-log\f[R] level must be
+greater (more severe) than or equal to the \f[C]--log-level\f[R].
+For example to log DEBUG to a log file but ERRORs to the event log you
+would use
+.IP
+.nf
+\f[C]
+--log-file rclone.log --log-level DEBUG --windows-event-log ERROR
+\f[R]
+.fi
+.PP
+This option is only supported on Windows platforms.
.SS --use-json-log
.PP
This switches the log format to JSON for rclone.
-The fields of json log are level, msg, source, time.
+The fields of JSON log are \f[C]level\f[R], \f[C]msg\f[R],
+\f[C]source\f[R], \f[C]time\f[R].
+The JSON logs will be printed on a single line, but are shown expanded
+here for clarity.
+.IP
+.nf
+\f[C]
+{
+ \[dq]time\[dq]: \[dq]2025-05-13T17:30:51.036237518+01:00\[dq],
+ \[dq]level\[dq]: \[dq]debug\[dq],
+ \[dq]msg\[dq]: \[dq]4 go routines active\[rs]n\[dq],
+ \[dq]source\[dq]: \[dq]cmd/cmd.go:298\[dq]
+}
+\f[R]
+.fi
+.PP
+Completed data transfer logs will have extra \f[C]size\f[R] information.
+Logs which are about a particular object will have \f[C]object\f[R] and
+\f[C]objectType\f[R] fields also.
+.IP
+.nf
+\f[C]
+{
+ \[dq]time\[dq]: \[dq]2025-05-13T17:38:05.540846352+01:00\[dq],
+ \[dq]level\[dq]: \[dq]info\[dq],
+ \[dq]msg\[dq]: \[dq]Copied (new) to: file2.txt\[dq],
+ \[dq]size\[dq]: 6,
+ \[dq]object\[dq]: \[dq]file.txt\[dq],
+ \[dq]objectType\[dq]: \[dq]*local.Object\[dq],
+ \[dq]source\[dq]: \[dq]operations/copy.go:368\[dq]
+}
+\f[R]
+.fi
+.PP
+Stats logs will contain a \f[C]stats\f[R] field which is the same as
+returned from the rc call
+core/stats (https://rclone.org/rc/#core-stats).
+.IP
+.nf
+\f[C]
+{
+ \[dq]time\[dq]: \[dq]2025-05-13T17:38:05.540912847+01:00\[dq],
+ \[dq]level\[dq]: \[dq]info\[dq],
+ \[dq]msg\[dq]: \[dq]...text version of the stats...\[dq],
+ \[dq]stats\[dq]: {
+ \[dq]bytes\[dq]: 6,
+ \[dq]checks\[dq]: 0,
+ \[dq]deletedDirs\[dq]: 0,
+ \[dq]deletes\[dq]: 0,
+ \[dq]elapsedTime\[dq]: 0.000904825,
+ ...truncated for clarity...
+ \[dq]totalBytes\[dq]: 6,
+ \[dq]totalChecks\[dq]: 0,
+ \[dq]totalTransfers\[dq]: 1,
+ \[dq]transferTime\[dq]: 0.000882794,
+ \[dq]transfers\[dq]: 1
+ },
+ \[dq]source\[dq]: \[dq]accounting/stats.go:569\[dq]
+}
+\f[R]
+.fi
.SS --low-level-retries NUMBER
.PP
This controls the number of low level retries rclone does.
@@ -18881,6 +20249,52 @@ the remote which may be desirable.
.PP
Setting this to a negative number will make the backlog as large as
possible.
+.SS --max-buffer-memory=SIZE
+.PP
+If set, don\[aq]t allocate more than SIZE amount of memory as buffers.
+If not set or set to \f[C]0\f[R] or \f[C]off\f[R] this will not limit
+the amount of memory in use.
+.PP
+This includes memory used by buffers created by the
+\f[C]--buffer-size\f[R] flag and buffers used by multi-thread
+transfers.
+.PP
+Most multi-thread transfers do not take additional memory, but some do
+depending on the backend (eg the s3 backend for uploads).
+This means there is a tension between setting \f[C]--transfers\f[R] as
+high as possible and total memory use.
+.PP
+Setting \f[C]--max-buffer-memory\f[R] allows the buffer memory to be
+controlled so that it doesn\[aq]t overwhelm the machine and allows
+\f[C]--transfers\f[R] to be set large.
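+.PP
+For example, to allow many transfers while capping buffer memory (a
+minimal sketch; the values and remote names are illustrative):
+.IP
+.nf
+\f[C]
+rclone copy --transfers 16 --max-buffer-memory 1G source:path destination:path
+\f[R]
+.fi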
+.SS --max-connections=N
+.PP
+This sets the maximum number of concurrent calls to the backend API.
+It may not map 1:1 to TCP or HTTP connections depending on the backend
+in use and the use of HTTP1 vs HTTP2.
+.PP
+When downloading files, backends only limit the initial opening of the
+stream.
+The bulk data download is not counted as a connection.
+This means that the \f[C]--max-connections\f[R] flag won\[aq]t limit
+the total number of downloads.
+.PP
+Note that it is possible to cause deadlocks with this setting so it
+should be used with care.
+.PP
+If you are doing a sync or copy then make sure
+\f[C]--max-connections\f[R] is one more than the sum of
+\f[C]--transfers\f[R] and \f[C]--checkers\f[R].
+.PP
+If you use \f[C]--check-first\f[R] then \f[C]--max-connections\f[R] just
+needs to be one more than the maximum of \f[C]--checkers\f[R] and
+\f[C]--transfers\f[R].
+.PP
+So for \f[C]--max-connections 3\f[R] you\[aq]d use
+\f[C]--checkers 2 --transfers 2 --check-first\f[R] or
+\f[C]--checkers 1 --transfers 1\f[R].
+.PP
+Setting this flag can be useful for backends which do multipart uploads
+to limit the number of simultaneous parts being transferred.
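+.PP
+For example, following the guidance above (a minimal sketch; the
+remote names are illustrative):
+.IP
+.nf
+\f[C]
+rclone sync --max-connections 3 --checkers 2 --transfers 2 --check-first source:path destination:path
+\f[R]
+.fi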
.SS --max-delete=N
.PP
This tells rclone not to delete more than N files.
@@ -19172,6 +20586,17 @@ Multi thread transfers will be used with \f[C]rclone mount\f[R] and
\f[C]rclone serve\f[R] if \f[C]--vfs-cache-mode\f[R] is set to
\f[C]writes\f[R] or above.
.PP
+Most multi-thread transfers do not take additional memory, but some do
+(for example uploading to s3).
+In the worst case memory usage can be at maximum
+\f[C]--transfers\f[R] * \f[C]--multi-thread-chunk-size\f[R] *
+\f[C]--multi-thread-streams\f[R], or specifically for the s3 backend
+\f[C]--transfers\f[R] * \f[C]--s3-chunk-size\f[R] *
+\f[C]--s3-concurrency\f[R].
+However you can use the
+--max-buffer-memory (https://rclone.org/docs/#max-buffer-memory) flag
+to control the maximum memory used here.
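+.PP
+For example, when uploading to s3 (a minimal sketch; the sizes shown
+are illustrative):
+.IP
+.nf
+\f[C]
+rclone copy --transfers 8 --s3-chunk-size 16M --s3-concurrency 4 --max-buffer-memory 1G /path/to/files s3:bucket
+\f[R]
+.fi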
+.PP
\f[B]NB\f[R] that this \f[B]only\f[R] works with supported backends as
the destination but will work with any backend as the source.
.PP
@@ -19196,6 +20621,15 @@ If the backend has a \f[C]--backend-upload-concurrency\f[R] setting (eg
number of transfers instead if it is larger than the value of
\f[C]--multi-thread-streams\f[R] or \f[C]--multi-thread-streams\f[R]
isn\[aq]t set.
+.SS --name-transform COMMAND[=XXXX]
+.PP
+\f[C]--name-transform\f[R] introduces path name transformations for
+\f[C]rclone copy\f[R], \f[C]rclone sync\f[R], and \f[C]rclone move\f[R].
+These transformations enable modifications to source and destination
+file names by applying prefixes, suffixes, and other alterations during
+transfer operations.
+For detailed docs and examples, see
+\f[C]convmv\f[R] (https://rclone.org/commands/rclone_convmv/).
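+.PP
+For example, to add a prefix to file names during a copy (a minimal
+sketch; the transform shown is illustrative - see the convmv docs
+linked above for the full transform syntax):
+.IP
+.nf
+\f[C]
+rclone copy --name-transform prefix=backup_ source:path destination:path
+\f[R]
+.fi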
.SS --no-check-dest
.PP
The \f[C]--no-check-dest\f[R] can be used with \f[C]move\f[R] or
@@ -20297,6 +21731,8 @@ For the filtering options
.IP \[bu] 2
\f[C]--max-age\f[R]
.IP \[bu] 2
+\f[C]--hash-filter\f[R]
+.IP \[bu] 2
\f[C]--dump filters\f[R]
.IP \[bu] 2
\f[C]--metadata-include\f[R]
@@ -20455,8 +21891,8 @@ The options set by environment variables can be seen with the
\f[C]rclone version -vv\f[R].
.PP
Options that can appear multiple times (type \f[C]stringArray\f[R]) are
-treated slighly differently as environment variables can only be defined
-once.
+treated slightly differently as environment variables can only be
+defined once.
In order to allow a simple mechanism for adding one or many items, the
input is treated as a CSV encoded (https://godoc.org/encoding/csv)
string.
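+.PP
+For example (a minimal sketch; the patterns are illustrative):
+.IP
+.nf
+\f[C]
+RCLONE_EXCLUDE=\[dq]*.jpg,*.png\[dq] rclone lsf remote:path
+\f[R]
+.fi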
@@ -21853,6 +23289,122 @@ E.g.
.PP
See the time option docs (https://rclone.org/docs/#time-option) for
valid formats.
+.SS \f[C]--hash-filter\f[R] - Deterministically select a subset of files
+.PP
+The \f[C]--hash-filter\f[R] flag enables selecting a deterministic
+subset of files, useful for:
+.IP "1." 3
+Running large sync operations across multiple machines.
+.IP "2." 3
+Checking a subset of files for bitrot.
+.IP "3." 3
+Any other operations where a sample of files is required.
+.SS Syntax
+.PP
+The flag takes two parameters expressed as a fraction:
+.IP
+.nf
+\f[C]
+--hash-filter K/N
+\f[R]
+.fi
+.IP \[bu] 2
+\f[C]N\f[R]: The total number of partitions (must be a positive
+integer).
+.IP \[bu] 2
+\f[C]K\f[R]: The specific partition to select (an integer from
+\f[C]0\f[R] to \f[C]N\f[R]).
+.PP
+For example:
+.IP \[bu] 2
+\f[C]--hash-filter 1/3\f[R]: Selects the first third of the files.
+.IP \[bu] 2
+\f[C]--hash-filter 2/3\f[R] and \f[C]--hash-filter 3/3\f[R]: Select
+the second and third partitions, respectively.
+.PP
+Each partition is non-overlapping, ensuring all files are covered
+without duplication.
+.SS Random Partition Selection
+.PP
+Use \f[C]\[at]\f[R] as \f[C]K\f[R] to randomly select a partition:
+.IP
+.nf
+\f[C]
+--hash-filter \[at]/M
+\f[R]
+.fi
+.PP
+For example, \f[C]--hash-filter \[at]/3\f[R] will randomly select a
+number between 0 and 2.
+This will stay constant across retries.
+.SS How It Works
+.IP \[bu] 2
+Rclone takes each file\[aq]s full path, normalizes it to lowercase, and
+applies Unicode normalization.
+.IP \[bu] 2
+It then hashes the normalized path into a 64 bit number.
+.IP \[bu] 2
+The hash result is reduced modulo \f[C]N\f[R] to assign the file to a
+partition.
+.IP \[bu] 2
+If the calculated partition does not match \f[C]K\f[R] the file is
+excluded.
+.IP \[bu] 2
+Other filters may apply if the file is not excluded.
+.PP
+\f[B]Important:\f[R] Rclone will traverse all directories to apply the
+filter.
+.SS Usage Notes
+.IP \[bu] 2
+Safe to use with \f[C]rclone sync\f[R]; source and destination
+selections will match.
+.IP \[bu] 2
+\f[B]Do not\f[R] use with \f[C]--delete-excluded\f[R], as this could
+delete unselected files.
+.IP \[bu] 2
+Ignored if \f[C]--files-from\f[R] is used.
+.SS Examples
+.SS Dividing files into 4 partitions
+.PP
+Assuming the current directory contains \f[C]file1.jpg\f[R] through
+\f[C]file9.jpg\f[R]:
+.IP
+.nf
+\f[C]
+$ rclone lsf --hash-filter 0/4 .
+file1.jpg
+file5.jpg
+
+$ rclone lsf --hash-filter 1/4 .
+file3.jpg
+file6.jpg
+file9.jpg
+
+$ rclone lsf --hash-filter 2/4 .
+file2.jpg
+file4.jpg
+
+$ rclone lsf --hash-filter 3/4 .
+file7.jpg
+file8.jpg
+
+$ rclone lsf --hash-filter 4/4 . # the same as --hash-filter 0/4
+file1.jpg
+file5.jpg
+\f[R]
+.fi
+.SS Syncing the first quarter of files
+.IP
+.nf
+\f[C]
+rclone sync --hash-filter 1/4 source:path destination:path
+\f[R]
+.fi
+.SS Checking a random 1% of files for integrity
+.IP
+.nf
+\f[C]
+rclone check --download --hash-filter \[at]/100 source:path destination:path
+\f[R]
+.fi
.SS Other flags
.SS \f[C]--delete-excluded\f[R] - Delete files on dest excluded from sync
.PP
@@ -22421,8 +23973,23 @@ If you wish to set config (the equivalent of the global flags) for the
duration of an rc call only then pass in the \f[C]_config\f[R]
parameter.
.PP
-This should be in the same format as the \f[C]config\f[R] key returned
-by options/get.
+This should be in the same format as the \f[C]main\f[R] key returned by
+options/get.
+.IP
+.nf
+\f[C]
+rclone rc --loopback options/get blocks=main
+\f[R]
+.fi
+.PP
+You can see more help on these options with this command (see the
+options blocks section for more info).
+.IP
+.nf
+\f[C]
+rclone rc --loopback options/info blocks=main
+\f[R]
+.fi
.PP
For example, if you wished to run a sync with the \f[C]--checksum\f[R]
parameter, you would pass this parameter in your JSON blob.
@@ -22466,6 +24033,21 @@ in the \f[C]_filter\f[R] parameter.
.PP
This should be in the same format as the \f[C]filter\f[R] key returned
by options/get.
+.IP
+.nf
+\f[C]
+rclone rc --loopback options/get blocks=filter
+\f[R]
+.fi
+.PP
+You can see more help on these options with this command (see the
+options blocks section for more info).
+.IP
+.nf
+\f[C]
+rclone rc --loopback options/info blocks=filter
+\f[R]
+.fi
.PP
For example, if you wished to run a sync with these flags
.IP
@@ -22602,7 +24184,8 @@ string
T}@T{
N
T}@T{
-name of the field used in the rc - if blank use Name
+name of the field used in the rc - if blank use Name.
+May contain \[dq].\[dq] for nested fields.
T}
T{
Help
@@ -22971,6 +24554,8 @@ obscure - declare passwords are plain and need obscuring
noObscure - declare passwords are already obscured and don\[aq]t need
obscuring
.IP \[bu] 2
+noOutput - don\[aq]t print anything to stdout
+.IP \[bu] 2
nonInteractive - don\[aq]t interact with a user, return questions
.IP \[bu] 2
continue - continue the config process with an answer
@@ -23101,6 +24686,8 @@ obscure - declare passwords are plain and need obscuring
noObscure - declare passwords are already obscured and don\[aq]t need
obscuring
.IP \[bu] 2
+noOutput - don\[aq]t print anything to stdout
+.IP \[bu] 2
nonInteractive - don\[aq]t interact with a user, return questions
.IP \[bu] 2
continue - continue the config process with an answer
@@ -23325,7 +24912,10 @@ returned.
.PP
Parameters
.IP \[bu] 2
-group - name of the stats group (string)
+group - name of the stats group (string, optional)
+.IP \[bu] 2
+short - if true will not return the transferring and checking arrays
+(boolean, optional)
.PP
Returns the following values:
.IP
@@ -23341,6 +24931,7 @@ Returns the following values:
\[dq]fatalError\[dq]: boolean whether there has been at least one fatal error,
\[dq]lastError\[dq]: last error string,
\[dq]renames\[dq] : number of files renamed,
+ \[dq]listed\[dq] : number of directory entries listed,
\[dq]retryError\[dq]: boolean showing whether there has been at least one non-NoRetryError,
\[dq]serverSideCopies\[dq]: number of server side copies done,
\[dq]serverSideCopyBytes\[dq]: number bytes server side copied,
@@ -24531,6 +26122,172 @@ It can be used to check that rclone is still alive and to check that
parameter passing is working properly.
.PP
\f[B]Authentication is required for this call.\f[R]
+.SS serve/list: Show running servers
+.PP
+Show running servers with IDs.
+.PP
+This takes no parameters and returns
+.IP \[bu] 2
+list: list of running serve commands
+.PP
+Each list element will have
+.IP \[bu] 2
+id: ID of the server
+.IP \[bu] 2
+addr: address the server is running on
+.IP \[bu] 2
+params: parameters used to start the server
+.PP
+Eg
+.IP
+.nf
+\f[C]
+rclone rc serve/list
+\f[R]
+.fi
+.PP
+Returns
+.IP
+.nf
+\f[C]
+{
+ \[dq]list\[dq]: [
+ {
+ \[dq]addr\[dq]: \[dq][::]:4321\[dq],
+ \[dq]id\[dq]: \[dq]nfs-ffc2a4e5\[dq],
+ \[dq]params\[dq]: {
+ \[dq]fs\[dq]: \[dq]remote:\[dq],
+ \[dq]opt\[dq]: {
+ \[dq]ListenAddr\[dq]: \[dq]:4321\[dq]
+ },
+ \[dq]type\[dq]: \[dq]nfs\[dq],
+ \[dq]vfsOpt\[dq]: {
+ \[dq]CacheMode\[dq]: \[dq]full\[dq]
+ }
+ }
+ }
+ ]
+}
+\f[R]
+.fi
+.PP
+\f[B]Authentication is required for this call.\f[R]
+.SS serve/start: Create a new server
+.PP
+Create a new server with the specified parameters.
+.PP
+This takes the following parameters:
+.IP \[bu] 2
+\f[C]type\f[R] - type of server: \f[C]http\f[R], \f[C]webdav\f[R],
+\f[C]ftp\f[R], \f[C]sftp\f[R], \f[C]nfs\f[R], etc.
+.IP \[bu] 2
+\f[C]fs\f[R] - remote storage path to serve
+.IP \[bu] 2
+\f[C]addr\f[R] - the ip:port to run the server on, eg \[dq]:1234\[dq] or
+\[dq]localhost:1234\[dq]
+.PP
+Other parameters are as described in the documentation for the relevant
+rclone serve (https://rclone.org/commands/rclone_serve/) command line
+options.
+To translate a command line option to an rc parameter, remove the
+leading \f[C]--\f[R] and replace \f[C]-\f[R] with \f[C]_\f[R], so
+\f[C]--vfs-cache-mode\f[R] becomes \f[C]vfs_cache_mode\f[R].
+Note that global parameters must be set with \f[C]_config\f[R] and
+\f[C]_filter\f[R] as described above.
+.PP
+Examples:
+.IP
+.nf
+\f[C]
+rclone rc serve/start type=nfs fs=remote: addr=:4321 vfs_cache_mode=full
+rclone rc serve/start --json \[aq]{\[dq]type\[dq]:\[dq]nfs\[dq],\[dq]fs\[dq]:\[dq]remote:\[dq],\[dq]addr\[dq]:\[dq]:1234\[dq],\[dq]vfs_cache_mode\[dq]:\[dq]full\[dq]}\[aq]
+\f[R]
+.fi
+.PP
+This will give the reply
+.IP
+.nf
+\f[C]
+{
+ \[dq]addr\[dq]: \[dq][::]:4321\[dq], // Address the server was started on
+ \[dq]id\[dq]: \[dq]nfs-ecfc6852\[dq] // Unique identifier for the server instance
+}
+\f[R]
+.fi
+.PP
+Or an error if it failed to start.
+.PP
+Stop the server with \f[C]serve/stop\f[R] and list the running servers
+with \f[C]serve/list\f[R].
+.PP
+\f[B]Authentication is required for this call.\f[R]
+.SS serve/stop: Unserve selected active serve
+.PP
+Stops a running \f[C]serve\f[R] instance by ID.
+.PP
+This takes the following parameters:
+.IP \[bu] 2
+id: as returned by serve/start
+.PP
+This will give an empty response if successful or an error if not.
+.PP
+Example:
+.IP
+.nf
+\f[C]
+rclone rc serve/stop id=12345
+\f[R]
+.fi
+.PP
+\f[B]Authentication is required for this call.\f[R]
+.SS serve/stopall: Stop all active servers
+.PP
+Stop all active servers.
+.PP
+This will stop all active servers.
+.IP
+.nf
+\f[C]
+rclone rc serve/stopall
+\f[R]
+.fi
+.PP
+\f[B]Authentication is required for this call.\f[R]
+.SS serve/types: Show all possible serve types
+.PP
+This shows all possible serve types and returns them as a list.
+.PP
+This takes no parameters and returns
+.IP \[bu] 2
+types: list of serve types, eg \[dq]nfs\[dq], \[dq]sftp\[dq], etc
+.PP
+The serve types are strings like \[dq]http\[dq], \[dq]sftp\[dq] and
+\[dq]nfs\[dq] and can be passed to serve/start as the type parameter.
+.PP
+Eg
+.IP
+.nf
+\f[C]
+rclone rc serve/types
+\f[R]
+.fi
+.PP
+Returns
+.IP
+.nf
+\f[C]
+{
+ \[dq]types\[dq]: [
+ \[dq]http\[dq],
+ \[dq]sftp\[dq],
+ \[dq]nfs\[dq]
+ ]
+}
+\f[R]
+.fi
+.PP
+\f[B]Authentication is required for this call.\f[R]
.SS sync/bisync: Perform bidirectional synchronization between two paths.
.PP
This takes the following parameters
@@ -24731,7 +26488,7 @@ return an empty result.
\f[R]
.fi
.PP
-The \f[C]expiry\f[R] time is the time until the file is elegible for
+The \f[C]expiry\f[R] time is the time until the file is eligible for
being uploaded in floating point seconds.
This may go negative.
As rclone only transfers \f[C]--transfers\f[R] files at once, only the
@@ -25288,6 +27045,21 @@ T}@T{
-
T}
T{
+FileLu Cloud Storage
+T}@T{
+MD5
+T}@T{
+R/W
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+R
+T}@T{
+-
+T}
+T{
Files.com
T}@T{
MD5, CRC32
@@ -28363,6 +30135,7 @@ Flags for anything which can copy a file.
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
+ --name-transform stringArray Transform paths during the copy process
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
--no-update-dir-modtime Don\[aq]t update directory modification times
@@ -28388,6 +30161,7 @@ Flags used for sync commands.
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
+ --list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
@@ -28436,13 +30210,14 @@ Flags for general networking and HTTP stuff.
--header stringArray Set HTTP header for all transactions
--header-download stringArray Set HTTP header for download transactions
--header-upload stringArray Set HTTP header for upload transactions
+ --max-connections int Maximum number of simultaneous backend API connections, 0 for unlimited
--no-check-certificate Do not verify the server SSL certificate (insecure)
--no-gzip-encoding Don\[aq]t set Accept-Encoding: gzip
--timeout Duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.69.0\[dq])
+ --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.70.0\[dq])
\f[R]
.fi
.SS Performance
@@ -28477,6 +30252,7 @@ Flags for general configuration of rclone.
-i, --interactive Enable interactive mode
--kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s)
--low-level-retries int Number of low level retries to do (default 10)
+ --max-buffer-memory SizeSuffix If set, don\[aq]t allocate more than this amount of memory as buffers (default off)
--no-console Hide console window (supported on Windows only)
--no-unicode-normalization Don\[aq]t normalize unicode characters in filenames
--password-command SpaceSepList Command for supplying password for encrypted configuration
@@ -28514,6 +30290,7 @@ Flags for filtering directory listings.
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --hash-filter string Partition filenames by hash k/n or randomly \[at]/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -28547,7 +30324,7 @@ Flags for logging and statistics.
.nf
\f[C]
--log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default \[dq]date,time\[dq])
+ --log-format Bits Comma separated list of log format options (default date,time)
--log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
@@ -28614,6 +30391,7 @@ Flags to control the Remote Control API.
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
+ --rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default \[dq]https://api.github.com/repos/rclone/rclone-webui-react/releases/latest\[dq])
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
@@ -28643,6 +30421,7 @@ Flags to control the Metrics HTTP endpoint..
--metrics-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--metrics-template string User-specified template
--metrics-user string User name for authentication
+ --metrics-user-from-header string User name from a defined HTTP header
--rc-enable-metrics Enable the Prometheus metrics path at the remote control server
\f[R]
.fi
@@ -28663,6 +30442,8 @@ Backend-only flags (these can be set in the config file also).
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal\[aq]s client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
+ --azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
@@ -28686,6 +30467,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-az Use Azure CLI tool az for authentication
+ --azureblob-use-copy-blob Whether to use the Copy Blob API when copying to the same storage account (default true)
--azureblob-use-emulator Uses local storage emulator if provided as \[aq]true\[aq]
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -28698,6 +30480,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
--azurefiles-connection-string string Azure Files Connection String
--azurefiles-description string Description of the remote
+ --azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
--azurefiles-endpoint string Endpoint for the service
--azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -28712,6 +30495,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal\[aq]s tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --azurefiles-use-az Use Azure CLI tool az for authentication
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -28774,12 +30558,14 @@ Backend-only flags (these can be set in the config file also).
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default \[dq]md5\[dq])
--chunker-remote string Remote to chunk/unchunk
+ --cloudinary-adjust-media-files-extensions Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems (default true)
--cloudinary-api-key string Cloudinary API Key
--cloudinary-api-secret string Cloudinary API Secret
--cloudinary-cloud-name string Cloudinary Environment Name
--cloudinary-description string Description of the remote
--cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-media-extensions stringArray Cloudinary supported media extensions (default 3ds,3g2,3gp,ai,arw,avi,avif,bmp,bw,cr2,cr3,djvu,dng,eps3,fbx,flif,flv,gif,glb,gltf,hdp,heic,heif,ico,indd,jp2,jpe,jpeg,jpg,jxl,jxr,m2ts,mov,mp4,mpeg,mts,mxf,obj,ogv,pdf,ply,png,psd,svg,tga,tif,tiff,ts,u3ma,usdz,wdp,webm,webp,wmv)
--cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
--cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--combine-description string Description of the remote
@@ -28803,6 +30589,10 @@ Backend-only flags (these can be set in the config file also).
--crypt-show-mapping For all files listed show how the names encrypt
--crypt-strict-names If set, this will raise an error when crypt comes across a filename that can\[aq]t be decrypted
--crypt-suffix string If this is set it will override the default suffix of \[dq].bin\[dq] (default \[dq].bin\[dq])
+ --doi-description string Description of the remote
+ --doi-doi string The DOI or the doi.org URL
+ --doi-doi-resolver-api-url string The URL of the DOI resolver API to use
+ --doi-provider string DOI provider
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -28854,7 +30644,6 @@ Backend-only flags (these can be set in the config file also).
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object\[aq]s are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
- --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default \[dq]sync\[dq])
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -28864,11 +30653,14 @@ Backend-only flags (these can be set in the config file also).
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-export-formats CommaSepList Comma separated list of preferred formats for exporting files (default html,md)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
+ --dropbox-show-all-exports Show all exportable files in listings
+ --dropbox-skip-exports Skip exportable files in all listings
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
@@ -28886,6 +30678,9 @@ Backend-only flags (these can be set in the config file also).
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
+ --filelu-description string Description of the remote
+ --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
+ --filelu-key string Your FileLu Rclone key from My Account
--filescom-api-key string The API key used to authenticate with Files.com
--filescom-description string Description of the remote
--filescom-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -28945,7 +30740,6 @@ Backend-only flags (these can be set in the config file also).
--gofile-list-chunk int Number of items to list in each call (default 1000)
--gofile-root-folder-id string ID of the root folder
--gphotos-auth-url string Auth server URL
- --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--gphotos-batch-mode string Upload file batching sync|async|off (default \[dq]sync\[dq])
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -29013,6 +30807,8 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default \[dq]https://s3.us.archive.org\[dq])
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default \[dq]https://archive.org\[dq])
+ --internetarchive-item-derive Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload (default true)
+ --internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server\[aq]s processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
@@ -29108,6 +30904,7 @@ Backend-only flags (these can be set in the config file also).
--onedrive-tenant string ID of the service principal\[aq]s tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
+ --onedrive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default off)
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--oos-compartment string Specify compartment OCID, if you need to list buckets
@@ -29133,6 +30930,7 @@ Backend-only flags (these can be set in the config file also).
--oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default \[dq]Standard\[dq])
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --opendrive-access string Files and folders will be uploaded with this access permission (default private) (default \[dq]private\[dq])
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-description string Description of the remote
--opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -29229,6 +31027,8 @@ Backend-only flags (these can be set in the config file also).
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
+ --s3-ibm-api-key string IBM API Key to be used to obtain IAM token
+ --s3-ibm-resource-instance-id string IBM service instance id
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
@@ -29249,6 +31049,7 @@ Backend-only flags (these can be set in the config file also).
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
+ --s3-sign-accept-encoding Tristate Set if rclone should include Accept-Encoding as part of the signature (default unset)
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
@@ -29265,6 +31066,7 @@ Backend-only flags (these can be set in the config file also).
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-use-unsigned-payload Tristate Whether to use an unsigned payload in PutObject (default unset)
+ --s3-use-x-id Tristate Set if rclone should add x-id URL parameters (default unset)
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-version-deleted Show deleted file markers when using versions
@@ -29290,6 +31092,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-host-key-algorithms SpaceSepList Space separated list of host key algorithms, ordered by preference
+ --sftp-http-proxy string URL for HTTP CONNECT proxy
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-exchange SpaceSepList Space separated list of key exchange algorithms, ordered by preference
--sftp-key-file string Path to PEM-encoded private key file
@@ -29344,6 +31147,7 @@ Backend-only flags (these can be set in the config file also).
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
--smb-spn string Service principal name
+ --smb-use-kerberos Use Kerberos authentication
--smb-user string SMB username (default \[dq]$USER\[dq])
--storj-access-grant string Access grant
--storj-api-key string API key
@@ -29489,7 +31293,7 @@ should be installed on host:
.IP
.nf
\f[C]
-sudo apt-get -y install fuse
+sudo apt-get -y install fuse3
\f[R]
.fi
.PP
@@ -30761,7 +32565,7 @@ See the bisync filters section and generic
--filter-from (https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file)
documentation.
An example filters file contains filters for non-allowed files for
-synching with Dropbox.
+syncing with Dropbox.
.PP
If you make changes to your filters file then bisync requires a run with
\f[C]--resync\f[R].
@@ -30987,7 +32791,7 @@ reduce the sync run times for very large numbers of files.
.PP
The check may be run manually with \f[C]--check-sync=only\f[R].
It runs only the integrity check and terminates without actually
-synching.
+syncing.
.PP
Note that currently, \f[C]--check-sync\f[R] \f[B]only checks listing
snapshots and NOT the actual files on the remotes.\f[R] Note also that
@@ -31701,7 +33505,7 @@ flags are also supported.
.SS How to filter directories
.PP
Filtering portions of the directory tree is a critical feature for
-synching.
+syncing.
.PP
Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync: - Directory trees containing
@@ -31859,7 +33663,7 @@ This noise can be quashed by adding \f[C]--quiet\f[R] to the bisync
command line.
.SS Example exclude-style filters files for use with Dropbox
.IP \[bu] 2
-Dropbox disallows synching the listed temporary and configuration/data
+Dropbox disallows syncing the listed temporary and configuration/data
files.
The \[ga]- \[ga] filters exclude these files where ever they may occur
in the sync tree.
@@ -32246,7 +34050,7 @@ single \f[C]-\f[R] or double dash.
.SS Running tests
.IP \[bu] 2
\f[C]go test . -case basic -remote local -remote2 local\f[R] runs the
-\f[C]test_basic\f[R] test case using only the local filesystem, synching
+\f[C]test_basic\f[R] test case using only the local filesystem, syncing
one local directory with another local directory.
Test script output is to the console, while commands within scenario.txt
have their output sent to the \f[C].../workdir/test.log\f[R] file, which
@@ -32579,6 +34383,10 @@ Also note a number of academic publications by Benjamin
Pierce (http://www.cis.upenn.edu/%7Ebcpierce/papers/index.shtml#File%20Synchronization)
about \f[I]Unison\f[R] and synchronization in general.
.SS Changelog
+.SS \f[C]v1.69.1\f[R]
+.IP \[bu] 2
+Fixed an issue causing listings to not capture concurrent modifications
+under certain conditions
.SS \f[C]v1.68\f[R]
.IP \[bu] 2
Fixed an issue affecting backends that round modtimes to a lower
@@ -33381,12 +35189,16 @@ Linode Object Storage
.IP \[bu] 2
Magalu Object Storage
.IP \[bu] 2
+MEGA S4 Object Storage
+.IP \[bu] 2
Minio
.IP \[bu] 2
Outscale
.IP \[bu] 2
Petabox
.IP \[bu] 2
+Pure Storage FlashBlade
+.IP \[bu] 2
Qiniu Cloud Object Storage (Kodo)
.IP \[bu] 2
RackCorp Object Storage
@@ -34293,7 +36105,7 @@ It assumes that \f[C]USER_NAME\f[R] has been created.
The Resource entry must include both resource ARNs, as one implies the
bucket and the other implies the bucket\[aq]s objects.
.IP "3." 3
-When using s3-no-check-bucket and the bucket already exsits, the
+When using s3-no-check-bucket and the bucket already exists, the
\f[C]\[dq]arn:aws:s3:::BUCKET_NAME\[dq]\f[R] doesn\[aq]t have to be
included.
.PP
@@ -34324,7 +36136,8 @@ error like below.
.PP
In this case you need to
restore (http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
-the object(s) in question before using rclone.
+the object(s) in question before accessing object contents.
+The restore section below shows how to do this with rclone.
.PP
Note that rclone only speaks the S3 API it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.
@@ -34347,10 +36160,11 @@ all the files to be uploaded as multipart.
.PP
Here are the Standard options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
-Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
-IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease,
-Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel,
-StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS,
+IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega,
+Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway,
+SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi,
+Qiniu and others).
.SS --s3-provider
.PP
Choose your S3 provider.
@@ -34416,6 +36230,18 @@ DigitalOcean Spaces
Dreamhost DreamObjects
.RE
.IP \[bu] 2
+\[dq]Exaba\[dq]
+.RS 2
+.IP \[bu] 2
+Exaba Object Storage
+.RE
+.IP \[bu] 2
+\[dq]FlashBlade\[dq]
+.RS 2
+.IP \[bu] 2
+Pure Storage FlashBlade Object Storage
+.RE
+.IP \[bu] 2
\[dq]GCS\[dq]
.RS 2
.IP \[bu] 2
@@ -34476,6 +36302,12 @@ Linode Object Storage
Magalu Object Storage
.RE
.IP \[bu] 2
+\[dq]Mega\[dq]
+.RS 2
+.IP \[bu] 2
+MEGA S4 Object Storage
+.RE
+.IP \[bu] 2
\[dq]Minio\[dq]
.RS 2
.IP \[bu] 2
@@ -35079,7 +36911,7 @@ Config: acl
.IP \[bu] 2
Env Var: RCLONE_S3_ACL
.IP \[bu] 2
-Provider: !Storj,Selectel,Synology,Cloudflare
+Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade,Mega
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -35337,14 +37169,45 @@ Intelligent-Tiering storage class
Glacier Instant Retrieval storage class
.RE
.RE
+.SS --s3-ibm-api-key
+.PP
+IBM API Key to be used to obtain IAM token
+.PP
+Properties:
+.IP \[bu] 2
+Config: ibm_api_key
+.IP \[bu] 2
+Env Var: RCLONE_S3_IBM_API_KEY
+.IP \[bu] 2
+Provider: IBMCOS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --s3-ibm-resource-instance-id
+.PP
+IBM service instance id
+.PP
+Properties:
+.IP \[bu] 2
+Config: ibm_resource_instance_id
+.IP \[bu] 2
+Env Var: RCLONE_S3_IBM_RESOURCE_INSTANCE_ID
+.IP \[bu] 2
+Provider: IBMCOS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS Advanced options
.PP
Here are the Advanced options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
-Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
-IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease,
-Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel,
-StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS,
+IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega,
+Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway,
+SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi,
+Qiniu and others).
.SS --s3-bucket-acl
.PP
Canned ACL used when creating buckets.
@@ -35364,6 +37227,8 @@ Config: bucket_acl
.IP \[bu] 2
Env Var: RCLONE_S3_BUCKET_ACL
.IP \[bu] 2
+Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade
+.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
@@ -36358,6 +38223,48 @@ Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS
Type: Tristate
.IP \[bu] 2
Default: unset
+.SS --s3-use-x-id
+.PP
+Set if rclone should add x-id URL parameters.
+.PP
+You can change this if you want to disable the AWS SDK from adding x-id
+URL parameters.
+.PP
+This shouldn\[aq]t be necessary in normal operation.
+.PP
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+.PP
+Properties:
+.IP \[bu] 2
+Config: use_x_id
+.IP \[bu] 2
+Env Var: RCLONE_S3_USE_X_ID
+.IP \[bu] 2
+Type: Tristate
+.IP \[bu] 2
+Default: unset
+.SS --s3-sign-accept-encoding
+.PP
+Set if rclone should include Accept-Encoding as part of the signature.
+.PP
+You can change this if you want to stop rclone including Accept-Encoding
+as part of the signature.
+.PP
+This shouldn\[aq]t be necessary in normal operation.
+.PP
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+.PP
+Properties:
+.IP \[bu] 2
+Config: sign_accept_encoding
+.IP \[bu] 2
+Env Var: RCLONE_S3_SIGN_ACCEPT_ENCODING
+.IP \[bu] 2
+Type: Tristate
+.IP \[bu] 2
+Default: unset
.SS --s3-directory-bucket
.PP
Set to use AWS Directory Buckets
@@ -36605,7 +38512,7 @@ Usage Examples:
.IP
.nf
\f[C]
-rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
+rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
@@ -37544,7 +39451,7 @@ For on-prem COS, do not make a selection from this list, hit enter
location_constraint>1
\f[R]
.fi
-.IP "9." 3
+.IP "8." 3
Specify a canned ACL.
IBM Cloud (Storage) supports \[dq]public-read\[dq] and
\[dq]private\[dq].
@@ -37567,7 +39474,7 @@ Choose a number from below, or type in your own value
acl> 1
\f[R]
.fi
-.IP "12." 4
+.IP "9." 3
Review the displayed configuration and accept to save the
\[dq]remote\[dq] then quit.
The config file should look like this
@@ -37584,7 +39491,7 @@ The config file should look like this
acl = private
\f[R]
.fi
-.IP "13." 4
+.IP "10." 4
Execute rclone commands
.IP
.nf
@@ -37606,6 +39513,47 @@ Execute rclone commands
rclone delete IBM-COS-XREGION:newbucket/file.txt
\f[R]
.fi
+.SS IBM IAM authentication
+.PP
+If using IBM IAM authentication with an IBM API key you need to fill
+in these additional parameters:
+.IP "1." 3
+Select false for \f[C]env_auth\f[R].
+.IP "2." 3
+Leave \f[C]access_key_id\f[R] and \f[C]secret_access_key\f[R] blank.
+.IP "3." 3
+Paste your \f[C]ibm_api_key\f[R]
+.IP
+.nf
+\f[C]
+Option ibm_api_key.
+IBM API Key to be used to obtain IAM token
+Enter a value of type string. Press Enter for the default (1).
+ibm_api_key>
+\f[R]
+.fi
+.IP "4." 3
+Paste your \f[C]ibm_resource_instance_id\f[R]
+.IP
+.nf
+\f[C]
+Option ibm_resource_instance_id.
+IBM service instance id
+Enter a value of type string. Press Enter for the default (2).
+ibm_resource_instance_id>
+\f[R]
+.fi
+.IP "5." 3
+In advanced settings type true for \f[C]v2_auth\f[R]
+.IP
+.nf
+\f[C]
+Option v2_auth.
+If true use v2 authentication.
+If this is false (the default) then rclone will use v4 authentication.
+If it is set then rclone will use v2 authentication.
+Use this only if v4 signatures don\[aq]t work, e.g. pre Jewel/v10 CEPH.
+Enter a boolean value (true or false). Press Enter for the default (true).
+v2_auth>
+\f[R]
+.fi
.SS IDrive e2
.PP
Here is an example of making an IDrive e2 (https://www.idrive.com/e2/)
@@ -38469,7 +40417,7 @@ location_constraint = au-nsw
.PP
Rclone can serve any remote over the S3 protocol.
For details see the rclone serve
-s3 (https://rclone.org/commands/rclone_serve_http/) documentation.
+s3 (https://rclone.org/commands/rclone_serve_s3/) documentation.
.PP
For example, to serve \f[C]remote:path\f[R] over s3, run the server like
this:
@@ -38495,8 +40443,8 @@ use_multipart_uploads = false
\f[R]
.fi
.PP
-Note that setting \f[C]disable_multipart_uploads = true\f[R] is to work
-around a bug (https://rclone.org/commands/rclone_serve_http/#bugs) which
+Note that setting \f[C]use_multipart_uploads = false\f[R] is to work
+around a bug (https://rclone.org/commands/rclone_serve_s3/#bugs) which
will be fixed in due course.
.SS Scaleway
.PP
@@ -38642,21 +40590,19 @@ region>
\f[R]
.fi
.PP
-Choose an endpoint from the list
+Enter your Lyve Cloud endpoint.
+This field cannot be kept empty.
.IP
.nf
\f[C]
-Endpoint for S3 API.
+Endpoint for Lyve Cloud S3 API.
Required when using an S3 clone.
-Choose a number from below, or type in your own value.
-Press Enter to leave empty.
- 1 / Seagate Lyve Cloud US East 1 (Virginia)
- \[rs] (s3.us-east-1.lyvecloud.seagate.com)
- 2 / Seagate Lyve Cloud US West 1 (California)
- \[rs] (s3.us-west-1.lyvecloud.seagate.com)
- 3 / Seagate Lyve Cloud AP Southeast 1 (Singapore)
- \[rs] (s3.ap-southeast-1.lyvecloud.seagate.com)
-endpoint> 1
+Please type in your LyveCloud endpoint.
+Examples:
+- s3.us-west-1.{account_name}.lyve.seagate.com (US West 1 - California)
+- s3.eu-west-1.{account_name}.lyve.seagate.com (EU West 1 - Ireland)
+Enter a value.
+endpoint> s3.us-west-1.global.lyve.seagate.com
\f[R]
.fi
.PP
@@ -39689,27 +41635,49 @@ Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
- 1 / Atlanta, GA (USA), us-southeast-1
+ 1 / Amsterdam (Netherlands), nl-ams-1
+ \[rs] (nl-ams-1.linodeobjects.com)
+ 2 / Atlanta, GA (USA), us-southeast-1
\[rs] (us-southeast-1.linodeobjects.com)
- 2 / Chicago, IL (USA), us-ord-1
+ 3 / Chennai (India), in-maa-1
+ \[rs] (in-maa-1.linodeobjects.com)
+ 4 / Chicago, IL (USA), us-ord-1
\[rs] (us-ord-1.linodeobjects.com)
- 3 / Frankfurt (Germany), eu-central-1
+ 5 / Frankfurt (Germany), eu-central-1
\[rs] (eu-central-1.linodeobjects.com)
- 4 / Milan (Italy), it-mil-1
+ 6 / Jakarta (Indonesia), id-cgk-1
+ \[rs] (id-cgk-1.linodeobjects.com)
+ 7 / London 2 (Great Britain), gb-lon-1
+ \[rs] (gb-lon-1.linodeobjects.com)
+ 8 / Los Angeles, CA (USA), us-lax-1
+ \[rs] (us-lax-1.linodeobjects.com)
+ 9 / Madrid (Spain), es-mad-1
+ \[rs] (es-mad-1.linodeobjects.com)
+10 / Melbourne (Australia), au-mel-1
+ \[rs] (au-mel-1.linodeobjects.com)
+11 / Miami, FL (USA), us-mia-1
+ \[rs] (us-mia-1.linodeobjects.com)
+12 / Milan (Italy), it-mil-1
\[rs] (it-mil-1.linodeobjects.com)
- 5 / Newark, NJ (USA), us-east-1
+13 / Newark, NJ (USA), us-east-1
\[rs] (us-east-1.linodeobjects.com)
- 6 / Paris (France), fr-par-1
+14 / Osaka (Japan), jp-osa-1
+ \[rs] (jp-osa-1.linodeobjects.com)
+15 / Paris (France), fr-par-1
\[rs] (fr-par-1.linodeobjects.com)
- 7 / Seattle, WA (USA), us-sea-1
+16 / S\[~a]o Paulo (Brazil), br-gru-1
+ \[rs] (br-gru-1.linodeobjects.com)
+17 / Seattle, WA (USA), us-sea-1
\[rs] (us-sea-1.linodeobjects.com)
- 8 / Singapore ap-south-1
+18 / Singapore, ap-south-1
\[rs] (ap-south-1.linodeobjects.com)
- 9 / Stockholm (Sweden), se-sto-1
+19 / Singapore 2, sg-sin-1
+ \[rs] (sg-sin-1.linodeobjects.com)
+20 / Stockholm (Sweden), se-sto-1
\[rs] (se-sto-1.linodeobjects.com)
-10 / Washington, DC, (USA), us-iad-1
+21 / Washington, DC, (USA), us-iad-1
\[rs] (us-iad-1.linodeobjects.com)
-endpoint> 3
+endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@@ -39884,6 +41852,124 @@ secret_access_key = SECRET_ACCESS_KEY
endpoint = br-ne1.magaluobjects.com
\f[R]
.fi
+.SS MEGA S4
+.PP
+MEGA S4 Object Storage (https://mega.io/objectstorage) is an S3
+compatible object storage system.
+It has a single pricing tier with no additional charges for data
+transfers or API requests and it is included in existing Pro plans.
+.PP
+Here is an example of making a configuration.
+First run:
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process.
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> megas4
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including AWS,... Mega, ...
+ \[rs] (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / MEGA S4 Object Storage
+ \[rs] (Mega)
+[snip]
+provider> Mega
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \[rs] (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \[rs] (true)
+env_auth>
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> XXX
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> XXX
+
+Option endpoint.
+Endpoint for S3 API.
+Required when using an S3 clone.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Mega S4 eu-central-1 (Amsterdam)
+ \[rs] (s3.eu-central-1.s4.mega.io)
+ 2 / Mega S4 eu-central-2 (Bettembourg)
+ \[rs] (s3.eu-central-2.s4.mega.io)
+ 3 / Mega S4 ca-central-1 (Montreal)
+ \[rs] (s3.ca-central-1.s4.mega.io)
+ 4 / Mega S4 ca-west-1 (Vancouver)
+ \[rs] (s3.ca-west-1.s4.mega.io)
+endpoint> 1
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: Mega
+- access_key_id: XXX
+- secret_access_key: XXX
+- endpoint: s3.eu-central-1.s4.mega.io
+Keep this \[dq]megas4\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+This will leave the config file looking like this.
+.IP
+.nf
+\f[C]
+[megas4]
+type = s3
+provider = Mega
+access_key_id = XXX
+secret_access_key = XXX
+endpoint = s3.eu-central-1.s4.mega.io
+\f[R]
+.fi
.SS ArvanCloud
.PP
ArvanCloud (https://www.arvancloud.com/en/products/cloud-storage)
@@ -40317,6 +42403,138 @@ region = us-east-1
endpoint = s3.petabox.io
\f[R]
.fi
+.SS Pure Storage FlashBlade
+.PP
+Pure Storage
+FlashBlade (https://www.purestorage.com/products/unstructured-data-storage.html)
+is a high performance S3-compatible object store.
+.PP
+FlashBlade supports most modern S3 features including:
+.IP \[bu] 2
+ListObjectsV2
+.IP \[bu] 2
+Multipart uploads with AWS-compatible ETags
+.IP \[bu] 2
+Advanced checksum algorithms (SHA256, CRC32, CRC32C) with trailer
+support (Purity//FB 4.4.2+)
+.IP \[bu] 2
+Object versioning and lifecycle management
+.IP \[bu] 2
+Virtual hosted-style requests (requires DNS configuration)
+.PP
+To configure rclone for Pure Storage FlashBlade:
+.PP
+First run:
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> flashblade
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+ 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
+ \[rs] (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+ 9 / Pure Storage FlashBlade Object Storage
+ \[rs] (FlashBlade)
+[snip]
+provider> FlashBlade
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \[rs] (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \[rs] (true)
+env_auth> 1
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY_ID
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option endpoint.
+Endpoint for S3 API.
+Required when using an S3 clone.
+Enter a value. Press Enter to leave empty.
+endpoint> https://s3.flashblade.example.com
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: FlashBlade
+- access_key_id: ACCESS_KEY_ID
+- secret_access_key: SECRET_ACCESS_KEY
+- endpoint: https://s3.flashblade.example.com
+Keep this \[dq]flashblade\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+This results in the following configuration being stored in
+\f[C]\[ti]/.config/rclone/rclone.conf\f[R]:
+.IP
+.nf
+\f[C]
+[flashblade]
+type = s3
+provider = FlashBlade
+access_key_id = ACCESS_KEY_ID
+secret_access_key = SECRET_ACCESS_KEY
+endpoint = https://s3.flashblade.example.com
+\f[R]
+.fi
+.PP
+Note: The FlashBlade endpoint should be the S3 data VIP.
+For virtual-hosted style requests, ensure proper DNS configuration:
+subdomains of the endpoint hostname should resolve to a FlashBlade data
+VIP.
+For example, if your endpoint is
+\f[C]https://s3.flashblade.example.com\f[R], then
+\f[C]bucket-name.s3.flashblade.example.com\f[R] should also resolve to
+the data VIP.
.SS Storj
.PP
Storj is a decentralized cloud storage which can be used through its
@@ -40388,7 +42606,7 @@ stored, nor is any MD5SUM (if one is available from the source).
.PP
This has the following consequences:
.IP \[bu] 2
-Using \f[C]rclone rcat\f[R] will fail as the medatada doesn\[aq]t match
+Using \f[C]rclone rcat\f[R] will fail as the metadata doesn\[aq]t match
after upload
.IP \[bu] 2
Uploading files with \f[C]rclone mount\f[R] will fail for the same
@@ -41644,8 +43862,8 @@ Note that rclone runs a webserver on your local machine to collect the
token as returned from Box.
This only runs from the moment it opens your browser to the moment you
get back the verification code.
-This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you
-to unblock it temporarily if you are running a host firewall.
+This is on \f[C]http://127.0.0.1:53682/\f[R] and this may require you to
+unblock it temporarily if you are running a host firewall.
.PP
Once configured you can then use \f[C]rclone\f[R] like this,
.PP
@@ -43895,6 +46113,36 @@ Env Var: RCLONE_CLOUDINARY_EVENTUALLY_CONSISTENT_DELAY
Type: Duration
.IP \[bu] 2
Default: 0s
+.SS --cloudinary-adjust-media-files-extensions
+.PP
+Cloudinary handles media formats as a file attribute and strips it from
+the name, which is unlike most other file systems
+.PP
+Properties:
+.IP \[bu] 2
+Config: adjust_media_files_extensions
+.IP \[bu] 2
+Env Var: RCLONE_CLOUDINARY_ADJUST_MEDIA_FILES_EXTENSIONS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS --cloudinary-media-extensions
+.PP
+Cloudinary supported media extensions
+.PP
+Properties:
+.IP \[bu] 2
+Config: media_extensions
+.IP \[bu] 2
+Env Var: RCLONE_CLOUDINARY_MEDIA_EXTENSIONS
+.IP \[bu] 2
+Type: stringArray
+.IP \[bu] 2
+Default: [3ds 3g2 3gp ai arw avi avif bmp bw cr2 cr3 djvu dng eps3 fbx
+flif flv gif glb gltf hdp heic heif ico indd jp2 jpe jpeg jpg jxl jxr
+m2ts mov mp4 mpeg mts mxf obj ogv pdf ply png psd svg tga tif tiff ts
+u3ma usdz wdp webm webp wmv]
.SS --cloudinary-description
.PP
Description of the remote.
@@ -45295,7 +47543,7 @@ The nonce is incremented for each chunk read making sure each nonce is
unique for each block written.
The chance of a nonce being reused is minuscule.
If you wrote an exabyte of data (10\[S1]\[u2078] bytes) you would have a
-probability of approximately 2\[tmu]10\[u207B]\[S3]\[S2] of re-using a
+probability of approximately 2\[tmu]10\[u207B]\[S3]\[S2] of reusing a
nonce.
.SS Chunk
.PP
@@ -45779,6 +48027,234 @@ Required: false
Any metadata supported by the underlying remote is read and written.
.PP
See the metadata (https://rclone.org/docs/#metadata) docs for more info.
+.SH DOI
+.PP
+The DOI remote is a read only remote for reading files from digital
+object identifiers (DOI).
+.PP
+Currently, the DOI backend supports DOIs hosted with:
+.IP \[bu] 2
+InvenioRDM (https://inveniosoftware.org/products/rdm/)
+.IP \[bu] 2
+Zenodo (https://zenodo.org)
+.IP \[bu] 2
+CaltechDATA (https://data.caltech.edu)
+.IP \[bu] 2
+Other InvenioRDM repositories (https://inveniosoftware.org/showcase/)
+.IP \[bu] 2
+Dataverse (https://dataverse.org)
+.IP \[bu] 2
+Harvard Dataverse (https://dataverse.harvard.edu)
+.IP \[bu] 2
+Other Dataverse repositories (https://dataverse.org/installations)
+.PP
+Paths are specified as \f[C]remote:path\f[R]
+.PP
+Paths may be as deep as required, e.g.
+\f[C]remote:directory/subdirectory\f[R].
+.SS Configuration
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[R].
+First run:
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+Enter name for new remote.
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / DOI datasets
+ \[rs] (doi)
+[snip]
+Storage> doi
+Option doi.
+The DOI or the doi.org URL.
+Enter a value.
+doi> 10.5281/zenodo.5876941
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Configuration complete.
+Options:
+- type: doi
+- doi: 10.5281/zenodo.5876941
+Keep this \[dq]remote\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.SS Standard options
+.PP
+Here are the Standard options specific to doi (DOI datasets).
+.SS --doi-doi
+.PP
+The DOI or the doi.org URL.
+.PP
+Properties:
+.IP \[bu] 2
+Config: doi
+.IP \[bu] 2
+Env Var: RCLONE_DOI_DOI
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS Advanced options
+.PP
+Here are the Advanced options specific to doi (DOI datasets).
+.SS --doi-provider
+.PP
+DOI provider.
+.PP
+The DOI provider can be set when rclone does not automatically recognize
+a supported DOI provider.
+.PP
+Properties:
+.IP \[bu] 2
+Config: provider
+.IP \[bu] 2
+Env Var: RCLONE_DOI_PROVIDER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]auto\[dq]
+.RS 2
+.IP \[bu] 2
+Auto-detect provider
+.RE
+.IP \[bu] 2
+\[dq]zenodo\[dq]
+.RS 2
+.IP \[bu] 2
+Zenodo
+.RE
+.IP \[bu] 2
+\[dq]dataverse\[dq]
+.RS 2
+.IP \[bu] 2
+Dataverse
+.RE
+.IP \[bu] 2
+\[dq]invenio\[dq]
+.RS 2
+.IP \[bu] 2
+Invenio
+.RE
+.RE
+.SS --doi-doi-resolver-api-url
+.PP
+The URL of the DOI resolver API to use.
+.PP
+The DOI resolver can be set for testing or for cases when the
+canonical DOI resolver API cannot be used.
+.PP
+Defaults to \[dq]https://doi.org/api\[dq].
+.PP
+Properties:
+.IP \[bu] 2
+Config: doi_resolver_api_url
+.IP \[bu] 2
+Env Var: RCLONE_DOI_DOI_RESOLVER_API_URL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --doi-description
+.PP
+Description of the remote.
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_DOI_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Backend commands
+.PP
+Here are the commands specific to the doi backend.
+.PP
+Run them with
+.IP
+.nf
+\f[C]
+rclone backend COMMAND remote:
+\f[R]
+.fi
+.PP
+The help below will explain what arguments each command takes.
+.PP
+See the backend (https://rclone.org/commands/rclone_backend/) command
+for more info on how to pass options and arguments.
+.PP
+These can be run on a running backend using the rc command
+backend/command (https://rclone.org/rc/#backend-command).
+.SS metadata
+.PP
+Show metadata about the DOI.
+.IP
+.nf
+\f[C]
+rclone backend metadata remote: [options] [+]
+\f[R]
+.fi
+.PP
+This command returns a JSON object with some information about the DOI.
+.IP
+.nf
+\f[C]
+rclone backend metadata doi:
+\f[R]
+.fi
+.PP
+It returns a JSON object representing metadata about the DOI.
+.SS set
+.PP
+Set command for updating the config parameters.
+.IP
+.nf
+\f[C]
+rclone backend set remote: [options] [+]
+\f[R]
+.fi
+.PP
+This set command can be used to update the config parameters for a
+running doi backend.
+.PP
+Usage Examples:
+.IP
+.nf
+\f[C]
+rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI
+\f[R]
+.fi
+.PP
+The option keys are named as they are in the config file.
+.PP
+This rebuilds the connection to the doi backend when it is called with
+the new parameters.
+Only new parameters need be passed as the values will default to those
+currently in use.
+.PP
+It doesn\[aq]t return anything.
.SH Dropbox
.PP
Paths are specified as \f[C]remote:path\f[R]
@@ -46037,6 +48513,94 @@ Or you could do an initial transfer with
.PP
Note that there may be a pause when quitting rclone while rclone
finishes up the last batch using this mode.
+.SS Exporting files
+.PP
+Certain files in Dropbox are \[dq]exportable\[dq], such as Dropbox Paper
+documents.
+These files need to be converted to another format in order to be
+downloaded.
+Often multiple formats are available for conversion.
+.PP
+When rclone downloads an exportable file, it chooses the format to
+download based on the \f[C]--dropbox-export-formats\f[R] setting.
+By default, the export formats are \f[C]html,md\f[R], which are sensible
+defaults for Dropbox Paper.
+.PP
+Rclone chooses the first format ID in the export formats list that
+Dropbox supports for a given file.
+If no format in the list is usable, rclone will choose the default
+format that Dropbox suggests.
+.PP
+Rclone will change the extension to correspond to the export format.
+Here are some examples of how extensions are mapped:
+.PP
+.TS
+tab(@);
+l l l.
+T{
+File type
+T}@T{
+Filename in Dropbox
+T}@T{
+Filename in rclone
+T}
+_
+T{
+Paper
+T}@T{
+mydoc.paper
+T}@T{
+mydoc.html
+T}
+T{
+Paper template
+T}@T{
+mydoc.papert
+T}@T{
+mydoc.papert.html
+T}
+T{
+other
+T}@T{
+mydoc
+T}@T{
+mydoc.html
+T}
+.TE
+.PP
+\f[I]Importing\f[R] exportable files is not yet supported by rclone.
+.PP
+Here are the supported export extensions known by rclone.
+Note that rclone does not currently support other formats not on this
+list, even if Dropbox supports them.
+Also, Dropbox could change the list of supported formats at any time.
+.PP
+.TS
+tab(@);
+l l l.
+T{
+Format ID
+T}@T{
+Name
+T}@T{
+Description
+T}
+_
+T{
+html
+T}@T{
+HTML
+T}@T{
+HTML document
+T}
+T{
+md
+T}@T{
+Markdown
+T}@T{
+Markdown text format
+T}
+.TE
.SS Standard options
.PP
Here are the Standard options specific to dropbox (Dropbox).
@@ -46270,6 +48834,65 @@ Env Var: RCLONE_DROPBOX_ROOT_NAMESPACE
Type: string
.IP \[bu] 2
Required: false
+.SS --dropbox-export-formats
+.PP
+Comma separated list of preferred formats for exporting files
+.PP
+Certain Dropbox files can only be accessed by exporting them to another
+format.
+These include Dropbox Paper documents.
+.PP
+For each such file, rclone will choose the first format on this list
+that Dropbox considers valid.
+If none is valid, it will choose Dropbox\[aq]s default format.
+.PP
+Known formats include: \[dq]html\[dq], \[dq]md\[dq] (markdown)
+.PP
+Properties:
+.IP \[bu] 2
+Config: export_formats
+.IP \[bu] 2
+Env Var: RCLONE_DROPBOX_EXPORT_FORMATS
+.IP \[bu] 2
+Type: CommaSepList
+.IP \[bu] 2
+Default: html,md
+.SS --dropbox-skip-exports
+.PP
+Skip exportable files in all listings.
+.PP
+If given, exportable files practically become invisible to rclone.
+.PP
+Properties:
+.IP \[bu] 2
+Config: skip_exports
+.IP \[bu] 2
+Env Var: RCLONE_DROPBOX_SKIP_EXPORTS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --dropbox-show-all-exports
+.PP
+Show all exportable files in listings.
+.PP
+Adding this flag will allow all exportable files to be server side
+copied.
+Note that rclone doesn\[aq]t add extensions to the exportable file names
+in this mode.
+.PP
+Do \f[B]not\f[R] use this flag when trying to download exportable files
+- rclone will fail to download them.
+.PP
+Properties:
+.IP \[bu] 2
+Config: show_all_exports
+.IP \[bu] 2
+Env Var: RCLONE_DROPBOX_SHOW_ALL_EXPORTS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --dropbox-batch-mode
.PP
Upload file batching sync|async|off.
@@ -46357,7 +48980,8 @@ Type: Duration
Default: 0s
.SS --dropbox-batch-commit-timeout
.PP
-Max time to wait for a batch to finish committing
+Max time to wait for a batch to finish committing.
+(no longer used)
.PP
Properties:
.IP \[bu] 2
@@ -46415,6 +49039,11 @@ See the forum
discussion (https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211)
and the dropbox SDK
issue (https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75).
+.PP
+Modification times for Dropbox Paper documents are not exact, and may
+not change for some period after the document is edited.
+To make sure you get recent changes in a sync, either wait an hour or
+so, or use \f[C]--ignore-times\f[R] to force a full sync.
.SS Get your own Dropbox App ID
.PP
When you use rclone with Dropbox in its default configuration you are
@@ -46789,6 +49418,288 @@ Env Var: RCLONE_FILEFABRIC_DESCRIPTION
Type: string
.IP \[bu] 2
Required: false
+.SH FileLu
+.PP
+FileLu (https://filelu.com/) is a reliable cloud storage provider
+offering features like secure file uploads, downloads, flexible storage
+options, and sharing capabilities.
+With support for high storage limits and seamless integration with
+rclone, FileLu makes managing files in the cloud easy.
+Its cross-platform file backup services let you upload and back up files
+from any internet-connected device.
+.SS Configuration
+.PP
+Here is an example of how to make a remote called \f[C]filelu\f[R].
+First, run:
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> filelu
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+xx / FileLu Cloud Storage
+ \[rs] \[dq]filelu\[dq]
+[snip]
+Storage> filelu
+Enter your FileLu Rclone Key:
+key> YOUR_FILELU_RCLONE_KEY RC_xxxxxxxxxxxxxxxxxxxxxxxx
+Configuration complete.
+
+Keep this \[dq]filelu\[dq] remote?
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.SS Paths
+.PP
+A path without an initial \f[C]/\f[R] will operate in the
+\f[C]Rclone\f[R] directory.
+.PP
+A path with an initial \f[C]/\f[R] will operate at the root where you
+can see the \f[C]Rclone\f[R] directory.
+.IP
+.nf
+\f[C]
+$ rclone lsf TestFileLu:/
+CCTV/
+Camera/
+Documents/
+Music/
+Photos/
+Rclone/
+Vault/
+Videos/
+\f[R]
+.fi
+.SS Example Commands
+.PP
+Create a new folder named \f[C]foldername\f[R] in the \f[C]Rclone\f[R]
+directory:
+.IP
+.nf
+\f[C]
+rclone mkdir filelu:foldername
+\f[R]
+.fi
+.PP
+Delete a folder on FileLu:
+.IP
+.nf
+\f[C]
+rclone rmdir filelu:/folder/path/
+\f[R]
+.fi
+.PP
+Delete a file on FileLu:
+.IP
+.nf
+\f[C]
+rclone delete filelu:/hello.txt
+\f[R]
+.fi
+.PP
+List files from your FileLu account:
+.IP
+.nf
+\f[C]
+rclone ls filelu:
+\f[R]
+.fi
+.PP
+List all folders:
+.IP
+.nf
+\f[C]
+rclone lsd filelu:
+\f[R]
+.fi
+.PP
+Copy a specific file to the FileLu root:
+.IP
+.nf
+\f[C]
+rclone copy D:\[rs]\[rs]hello.txt filelu:
+\f[R]
+.fi
+.PP
+Copy files from a local directory to a FileLu directory:
+.IP
+.nf
+\f[C]
+rclone copy D:/local-folder filelu:/remote-folder/path/
+\f[R]
+.fi
+.PP
+Download a file from FileLu into a local directory:
+.IP
+.nf
+\f[C]
+rclone copy filelu:/file-path/hello.txt D:/local-folder
+\f[R]
+.fi
+.PP
+Move files from a local directory to a FileLu directory:
+.IP
+.nf
+\f[C]
+rclone move D:\[rs]\[rs]local-folder filelu:/remote-path/
+\f[R]
+.fi
+.PP
+Sync files from a local directory to a FileLu directory:
+.IP
+.nf
+\f[C]
+rclone sync --interactive D:/local-folder filelu:/remote-path/
+\f[R]
+.fi
+.PP
+Mount remote to local Linux:
+.IP
+.nf
+\f[C]
+rclone mount filelu: /root/mnt --vfs-cache-mode full
+\f[R]
+.fi
+.PP
+Mount remote to local Windows:
+.IP
+.nf
+\f[C]
+rclone mount filelu: D:/local_mnt --vfs-cache-mode full
+\f[R]
+.fi
+.PP
+Get storage info about the FileLu account:
+.IP
+.nf
+\f[C]
+rclone about filelu:
+\f[R]
+.fi
+.PP
+All the other rclone commands are supported by this backend.
+.SS FolderID instead of folder path
+.PP
+We use the FolderID instead of the folder name to prevent errors when
+users have identical folder names or paths.
+For example, if a user has two or three folders named
+\[dq]test_folders,\[dq] the system may become confused and won\[aq]t
+know which folder to move.
+In large storage systems, where some clients have hundreds of thousands
+of folders and a few million files, duplicate folder names or paths are
+quite common.
+.SS Modification Times and Hashes
+.PP
+FileLu supports both modification times and MD5 hashes.
+.PP
+FileLu only supports filenames and folder names up to 255 characters in
+length, where a character is a Unicode character.
+.SS Duplicated Files
+.PP
+When uploading and syncing via Rclone, FileLu does not allow uploading
+duplicate files within the same directory.
+However, you can upload duplicate files, provided they are in different
+directories (folders).
+.SS Failure to Log In / Invalid Credentials or KEY
+.PP
+Ensure that you have the correct Rclone key, which can be found in My
+Account (https://filelu.com/account/).
+Every time you toggle Rclone OFF and ON in My Account, a new
+RC_xxxxxxxxxxxxxxxxxxxx key is generated.
+Be sure to update your Rclone configuration with the new key.
+.PP
+If you are connecting to your FileLu remote for the first time and
+encounter an error such as:
+.IP
+.nf
+\f[C]
+Failed to create file system for \[dq]my-filelu-remote:\[dq]: couldn\[aq]t login: Invalid credentials
+\f[R]
+.fi
+.PP
+Ensure your Rclone Key is correct.
+.SS Process \f[C]killed\f[R]
+.PP
+Accounts with large files or extensive metadata may experience
+significant memory usage during list/sync operations.
+Ensure the system running \f[C]rclone\f[R] has sufficient memory and CPU
+to handle these operations.
+.SS Standard options
+.PP
+Here are the Standard options specific to filelu (FileLu Cloud Storage).
+.SS --filelu-key
+.PP
+Your FileLu Rclone key from My Account
+.PP
+Properties:
+.IP \[bu] 2
+Config: key
+.IP \[bu] 2
+Env Var: RCLONE_FILELU_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS Advanced options
+.PP
+Here are the Advanced options specific to filelu (FileLu Cloud Storage).
+.SS --filelu-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_FILELU_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default:
+Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation
+.SS --filelu-description
+.PP
+Description of the remote.
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_FILELU_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Limitations
+.PP
+This backend uses a custom library implementing the FileLu API.
+While it supports file transfers, some advanced features may not yet be
+available.
+Please report any issues to the rclone forum (https://forum.rclone.org/)
+for troubleshooting and updates.
+.PP
+For further information, visit FileLu\[aq]s
+website (https://filelu.com/).
.SH Files.com
.PP
Files.com (https://www.files.com/) is a cloud storage service that
@@ -51683,6 +54594,40 @@ attempted if possible.
.PP
Use the --interactive/-i or --dry-run flag to see what would be copied
before copying.
+.SS moveid
+.PP
+Move files by ID
+.IP
+.nf
+\f[C]
+rclone backend moveid remote: [options] [+]
+\f[R]
+.fi
+.PP
+This command moves files by ID
+.PP
+Usage:
+.IP
+.nf
+\f[C]
+rclone backend moveid drive: ID path
+rclone backend moveid drive: ID1 path1 ID2 path2
+\f[R]
+.fi
+.PP
+It moves the drive file with ID given to the path (an rclone path which
+will be passed internally to rclone moveto).
+.PP
+The path should end with a / to indicate that the file should be moved
+as named into this directory.
+If it doesn\[aq]t end with a / then the last path component will be used
+as the file name.
+.PP
+If the destination is a drive backend then server-side moving will be
+attempted if possible.
+.PP
+Use the --interactive/-i or --dry-run flag to see what would be moved
+beforehand.
.SS exportformats
.PP
Dump the export formats for debug purposes
@@ -52019,6 +54964,12 @@ for transferring photos and videos to and from Google Photos.
\f[B]NB\f[R] The Google Photos API which rclone uses has quite a few
limitations, so please read the limitations section carefully to make
sure it is suitable for your use.
+.PP
+\f[B]NB\f[R] From March 31, 2025 rclone can only download photos it
+uploaded.
+This limitation is due to policy changes at Google.
+You may need to run \f[C]rclone config reconnect remote:\f[R] to make
+rclone work again after upgrading to rclone v1.70.
.SS Configuration
.PP
The initial setup for google cloud storage involves getting a token from
@@ -52457,7 +55408,7 @@ Use the gphotosdl proxy for downloading the full resolution images
The Google API will deliver images and video which aren\[aq]t full
resolution, and/or have EXIF data missing.
.PP
-However if you ue the gphotosdl proxy tnen you can download original,
+However if you use the gphotosdl proxy then you can download original,
unchanged images.
.PP
This runs a headless browser in the background.
@@ -52596,7 +55547,8 @@ Type: Duration
Default: 0s
.SS --gphotos-batch-commit-timeout
.PP
-Max time to wait for a batch to finish committing
+Max time to wait for a batch to finish committing.
+(no longer used)
.PP
Properties:
.IP \[bu] 2
@@ -52627,6 +55579,12 @@ If you attempt to upload non videos or images or formats that Google
Photos doesn\[aq]t understand, rclone will upload the file, then Google
Photos will give an error when it is put turned into a media item.
.PP
+\f[B]NB\f[R] From March 31, 2025 rclone can only download photos it
+uploaded.
+This limitation is due to policy changes at Google.
+You may need to run \f[C]rclone config reconnect remote:\f[R] to make
+rclone work again after upgrading to rclone v1.70.
+.PP
Note that all media items uploaded to Google Photos through the API are
stored in full resolution at \[dq]original quality\[dq] and
\f[B]will\f[R] count towards your storage quota in your Google Account.
@@ -54838,7 +57796,7 @@ Enter a value.
config_2fa> 2FACODE
Remote config
--------------------
-[koofr]
+[iclouddrive]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -54854,6 +57812,28 @@ y/e/d> y
.SS Advanced Data Protection
.PP
ADP is currently unsupported and needs to be disabled.
+.PP
+On iPhone, Settings \f[C]>\f[R] Apple Account \f[C]>\f[R] iCloud
+\f[C]>\f[R] \[aq]Access iCloud Data on the Web\[aq] must be ON, and
+\[aq]Advanced Data Protection\[aq] OFF.
+.SS Troubleshooting
+.SS Missing PCS cookies from the request
+.PP
+This means you have Advanced Data Protection (ADP) turned on.
+This is not supported at the moment.
+If you want to use rclone you will have to turn it off.
+See above for how.
+.PP
+You will need to clear the \f[C]cookies\f[R] and the
+\f[C]trust_token\f[R] fields in the config.
+Or you can delete the remote config and start again.
+.PP
+You should then run \f[C]rclone config reconnect remote:\f[R].
+.PP
+Note that changing the ADP setting may not take effect immediately - you
+may need to wait a few hours or a day before you can get rclone to work
+- keep clearing the config entry and running
+\f[C]rclone config reconnect remote:\f[R] until rclone functions
+properly.
.SS Standard options
.PP
Here are the Standard options specific to iclouddrive (iCloud Drive).
@@ -55184,6 +58164,25 @@ Env Var: RCLONE_INTERNETARCHIVE_SECRET_ACCESS_KEY
Type: string
.IP \[bu] 2
Required: false
+.SS --internetarchive-item-derive
+.PP
+Whether to trigger derive on the IA item or not.
+If set to false, the item will not be derived by IA upon upload.
+The derive process produces a number of secondary files from an upload
+to make an upload more usable on the web.
+Setting this to false is useful for uploading files that are already in
+a format that IA can display, or to reduce the burden on IA\[aq]s
+infrastructure.
+.PP
+Properties:
+.IP \[bu] 2
+Config: item_derive
+.IP \[bu] 2
+Env Var: RCLONE_INTERNETARCHIVE_ITEM_DERIVE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
.SS Advanced options
.PP
Here are the Advanced options specific to internetarchive (Internet
@@ -55218,6 +58217,22 @@ Env Var: RCLONE_INTERNETARCHIVE_FRONT_ENDPOINT
Type: string
.IP \[bu] 2
Default: \[dq]https://archive.org\[dq]
+.SS --internetarchive-item-metadata
+.PP
+Metadata to be set on the IA item, this is different from file-level
+metadata that can be set using --metadata-set.
+Format is key=value and the \[aq]x-archive-meta-\[aq] prefix is
+automatically added.
+.PP
+Properties:
+.IP \[bu] 2
+Config: item_metadata
+.IP \[bu] 2
+Env Var: RCLONE_INTERNETARCHIVE_ITEM_METADATA
+.IP \[bu] 2
+Type: stringArray
+.IP \[bu] 2
+Default: []
.SS --internetarchive-disable-checksum
.PP
Don\[aq]t ask the server to test against MD5 checksum calculated by
@@ -56753,6 +59768,14 @@ Click password for apps /
.IP \[bu] 2
Add the password - give it a name - eg \[dq]rclone\[dq]
.IP \[bu] 2
+Select the permissions level.
+For some reason just \[dq]Full access to Cloud\[dq] (WebDAV) doesn\[aq]t
+work for rclone currently.
+You have to select \[dq]Full access to Mail, Cloud and Calendar\[dq]
+(all protocols).
+See this thread on
+forum.rclone.org (https://forum.rclone.org/t/failed-to-create-file-system-for-mailru-failed-to-authorize-oauth2-invalid-username-or-password-username-or-password-is-incorrect/49298).
+.IP \[bu] 2
Copy the password and use this password below - your normal login
password won\[aq]t work.
.PP
@@ -59157,6 +62180,69 @@ Env Var: RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY
Type: int
.IP \[bu] 2
Default: 16
+.SS --azureblob-copy-cutoff
+.PP
+Cutoff for switching to multipart copy.
+.PP
+Any files larger than this that need to be server-side copied will be
+copied in chunks of chunk_size using the put block list API.
+.PP
+Files smaller than this limit will be copied with the Copy Blob API.
+.PP
+Properties:
+.IP \[bu] 2
+Config: copy_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_COPY_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 8Mi
+.SS --azureblob-copy-concurrency
+.PP
+Concurrency for multipart copy.
+.PP
+This is the number of chunks of the same file that are copied
+concurrently.
+.PP
+These chunks are not buffered in memory and Microsoft recommends setting
+this value to greater than 1000 in the azcopy documentation.
+.PP
+https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-optimize#increase-concurrency
+.PP
+In tests, copy speed increases almost linearly with copy concurrency.
+.PP
+Properties:
+.IP \[bu] 2
+Config: copy_concurrency
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_COPY_CONCURRENCY
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 512
+.SS --azureblob-use-copy-blob
+.PP
+Whether to use the Copy Blob API when copying to the same storage
+account.
+.PP
+If true (the default) then rclone will use the Copy Blob API for copies
+to the same storage account even when the size is above the copy_cutoff.
+.PP
+Rclone assumes that the same storage account means the same config and
+does not check for the same storage account in different configs.
+.PP
+There should be no need to change this value.
+.PP
+Properties:
+.IP \[bu] 2
+Config: use_copy_blob
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_USE_COPY_BLOB
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
.SS --azureblob-list-chunk
.PP
Size of blob list.
@@ -59451,8 +62537,11 @@ Content-Encoding
Content-Language
.IP \[bu] 2
Content-Type
+.IP \[bu] 2
+X-MS-Tags
.PP
-Eg \f[C]--header-upload \[dq]Content-Type: text/potato\[dq]\f[R]
+Eg \f[C]--header-upload \[dq]Content-Type: text/potato\[dq]\f[R] or
+\f[C]--header-upload \[dq]X-MS-Tags: foo=bar\[dq]\f[R]
.SS Limitations
.PP
MD5 sums are only uploaded with chunked files if the source has an MD5
@@ -59825,6 +62914,16 @@ identity, the user-assigned identity will be used by default.
If the resource has multiple user-assigned identities you will need to
unset \f[C]env_auth\f[R] and set \f[C]use_msi\f[R] instead.
See the \f[C]use_msi\f[R] section.
+.PP
+If you are operating in disconnected clouds, or private clouds such as
+Azure Stack you may want to set
+\f[C]disable_instance_discovery = true\f[R].
+This determines whether rclone requests Microsoft Entra instance
+metadata from \f[C]https://login.microsoft.com/\f[R] before
+authenticating.
+Setting this to \f[C]true\f[R] will skip this request, making you
+responsible for ensuring the configured authority is valid and
+trustworthy.
.SS Env Auth: 3. Azure CLI credentials (as used by the az tool)
.PP
Credentials created with the \f[C]az\f[R] tool can be picked up using
@@ -59945,6 +63044,14 @@ be explicitly specified using exactly one of the
If none of \f[C]msi_object_id\f[R], \f[C]msi_client_id\f[R], or
\f[C]msi_mi_res_id\f[R] is set, this is equivalent to using
\f[C]env_auth\f[R].
+.SS Azure CLI tool \f[C]az\f[R]
+.PP
+Set to use the Azure CLI tool
+\f[C]az\f[R] (https://learn.microsoft.com/en-us/cli/azure/) as the sole
+means of authentication.
+Setting this can be useful if you wish to use the \f[C]az\f[R] CLI on a
+host with a System Managed Identity that you do not want to use.
+Don\[aq]t set \f[C]env_auth\f[R] at the same time.
.SS Standard options
.PP
Here are the Standard options specific to azurefiles (Microsoft Azure
@@ -60292,6 +63399,43 @@ Env Var: RCLONE_AZUREFILES_MSI_MI_RES_ID
Type: string
.IP \[bu] 2
Required: false
+.SS --azurefiles-disable-instance-discovery
+.PP
+Skip requesting Microsoft Entra instance metadata.
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+It determines whether rclone requests Microsoft Entra instance metadata
+from \f[C]https://login.microsoft.com/\f[R] before authenticating.
+Setting this to true will skip this request, making you responsible for
+ensuring the configured authority is valid and trustworthy.
+.PP
+Properties:
+.IP \[bu] 2
+Config: disable_instance_discovery
+.IP \[bu] 2
+Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --azurefiles-use-az
+.PP
+Use Azure CLI tool az for authentication.
+Set to use the Azure CLI tool
+az (https://learn.microsoft.com/en-us/cli/azure/) as the sole means of
+authentication.
+Setting this can be useful if you wish to use the az CLI on a host with
+a System Managed Identity that you do not want to use.
+Don\[aq]t set env_auth at the same time.
+.PP
+Properties:
+.IP \[bu] 2
+Config: use_az
+.IP \[bu] 2
+Env Var: RCLONE_AZUREFILES_USE_AZ
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --azurefiles-endpoint
.PP
Endpoint for the service.
@@ -60946,7 +64090,7 @@ Microsoft Cloud for US Government
\[dq]de\[dq]
.RS 2
.IP \[bu] 2
-Microsoft Cloud Germany
+Microsoft Cloud Germany (deprecated - try global region first).
.RE
.IP \[bu] 2
\[dq]cn\[dq]
@@ -61033,6 +64177,28 @@ Env Var: RCLONE_ONEDRIVE_CLIENT_CREDENTIALS
Type: bool
.IP \[bu] 2
Default: false
+.SS --onedrive-upload-cutoff
+.PP
+Cutoff for switching to chunked upload.
+.PP
+Any files larger than this will be uploaded in chunks of chunk_size.
+.PP
+This is disabled by default as single part uploads cause rclone to use
+twice the storage on OneDrive Business: when rclone sets the
+modification time after the upload, OneDrive creates a new version.
+.PP
+See: https://github.com/rclone/rclone/issues/1716
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_ONEDRIVE_UPLOAD_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: off
.SS --onedrive-chunk-size
.PP
Chunk size to upload files with - must be multiple of 320k (327,680
@@ -61951,6 +65117,43 @@ T}
.TE
.PP
See the metadata (https://rclone.org/docs/#metadata) docs for more info.
+.SS Impersonate other users as Admin
+.PP
+Unlike Google Drive and impersonating any domain user via service
+accounts, OneDrive requires you to authenticate as an admin account, and
+manually setup a remote per user you wish to impersonate.
+.IP "1." 3
+In Microsoft 365 Admin Center (https://admin.microsoft.com), open each
+user you need to \[dq]impersonate\[dq] and go to the OneDrive section.
+There is a heading called \[dq]Get access to files\[dq]; click it to
+create the link.
+This creates a link of the format
+\f[C]https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/\f[R]
+but also changes the permissions so that your admin user has access.
+.IP "2." 3
+Then in PowerShell run the following commands:
+.IP
+.nf
+\f[C]
+Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
+Import-Module Microsoft.Graph.Files
+Connect-MgGraph -Scopes \[dq]Files.ReadWrite.All\[dq]
+# Follow the steps to allow access to your admin user
+# Then run this for each user you want to impersonate to get the Drive ID
+Get-MgUserDefaultDrive -UserId \[aq]{emailaddress}\[aq]
+# This will give you output of the format:
+# Name Id DriveType CreatedDateTime
+# ---- -- --------- ---------------
+# OneDrive b!XYZ123 business 14/10/2023 1:00:58\[u202F]pm
+\f[R]
+.fi
+.IP "3." 3
+Then in rclone add a onedrive remote type, and use the
+\f[C]Type in driveID\f[R] with the DriveID you got in the previous step.
+One remote per user.
+It will then confirm the drive ID, and hopefully give you a message of
+\f[C]Found drive \[dq]root\[dq] of type \[dq]business\[dq]\f[R] and then
+include the URL of the format
+\f[C]https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents\f[R].
.SS Limitations
.PP
If you don\[aq]t use rclone for 90 days the refresh token will expire.
@@ -62568,6 +65771,46 @@ Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE
Type: SizeSuffix
.IP \[bu] 2
Default: 10Mi
+.SS --opendrive-access
+.PP
+Files and folders will be uploaded with this access permission (default
+private)
+.PP
+Properties:
+.IP \[bu] 2
+Config: access
+.IP \[bu] 2
+Env Var: RCLONE_OPENDRIVE_ACCESS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]private\[dq]
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]private\[dq]
+.RS 2
+.IP \[bu] 2
+Access can be granted so that selected users can view, read or write
+only what is essential for them.
+.RE
+.IP \[bu] 2
+\[dq]public\[dq]
+.RS 2
+.IP \[bu] 2
+The file or folder can be downloaded by anyone from a web browser.
+The link can be shared in any way.
+.RE
+.IP \[bu] 2
+\[dq]hidden\[dq]
+.RS 2
+.IP \[bu] 2
+The file or folder has the same access restrictions as Public, but the
+user must know the URL of the file or folder link in order to access
+the contents.
+.RE
+.RE
.SS --opendrive-description
.PP
Description of the remote.
@@ -69938,6 +73181,22 @@ Env Var: RCLONE_SFTP_SOCKS_PROXY
Type: string
.IP \[bu] 2
Required: false
+.SS --sftp-http-proxy
+.PP
+URL for HTTP CONNECT proxy
+.PP
+Set this to a URL for an HTTP proxy which supports the HTTP CONNECT
+verb.
+.PP
+Properties:
+.IP \[bu] 2
+Config: http_proxy
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_HTTP_PROXY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS --sftp-copy-is-hardlink
.PP
Set to enable server side copies using hardlinks.
@@ -69984,7 +73243,8 @@ Required: false
On some SFTP servers (e.g.
Synology) the paths are different for SSH and SFTP so the hashes
can\[aq]t be calculated properly.
-For them using \f[C]disable_hashcheck\f[R] is a good idea.
+You can either use \f[C]--sftp-path-override\f[R] or
+\f[C]disable_hashcheck\f[R].
.PP
The only ssh agent supported under Windows is Putty\[aq]s pageant.
.PP
@@ -70236,6 +73496,24 @@ Env Var: RCLONE_SMB_SPN
Type: string
.IP \[bu] 2
Required: false
+.SS --smb-use-kerberos
+.PP
+Use Kerberos authentication.
+.PP
+If set, rclone will use Kerberos authentication instead of NTLM.
+This requires a valid Kerberos configuration and credentials cache to be
+available, either in the default locations or as specified by the
+KRB5_CONFIG and KRB5CCNAME environment variables.
+.PP
+Properties:
+.IP \[bu] 2
+Config: use_kerberos
+.IP \[bu] 2
+Env Var: RCLONE_SMB_USE_KERBEROS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS Advanced options
.PP
Here are the Advanced options specific to smb (SMB / CIFS).
@@ -72449,13 +75727,13 @@ rclone copy /home/source remote:backup
.SS Modification times and hashes
.PP
Plain WebDAV does not support modified times.
-However when used with Fastmail Files, Owncloud or Nextcloud rclone will
+However when used with Fastmail Files, ownCloud or Nextcloud rclone will
support modified times.
.PP
Likewise plain WebDAV does not support hashes, however when used with
-Fastmail Files, Owncloud or Nextcloud rclone will support SHA1 and MD5
+Fastmail Files, ownCloud or Nextcloud rclone will support SHA1 and MD5
hashes.
-Depending on the exact version of Owncloud or Nextcloud hashes may
+Depending on the exact version of ownCloud or Nextcloud hashes may
appear on all objects, or only on objects which had a hash uploaded with
them.
.SS Standard options
@@ -72509,7 +75787,13 @@ Nextcloud
\[dq]owncloud\[dq]
.RS 2
.IP \[bu] 2
-Owncloud
+Owncloud 10 PHP based WebDAV server
+.RE
+.IP \[bu] 2
+\[dq]infinitescale\[dq]
+.RS 2
+.IP \[bu] 2
+ownCloud Infinite Scale
.RE
.IP \[bu] 2
\[dq]sharepoint\[dq]
@@ -72767,22 +76051,31 @@ to create an app password with access to \f[C]Files (WebDAV)\f[R] and
use this as the password.
.PP
Fastmail supports modified times using the \f[C]X-OC-Mtime\f[R] header.
-.SS Owncloud
+.SS ownCloud
.PP
Click on the settings cog in the bottom right of the page and this will
show the WebDAV URL that rclone needs in the config step.
It will look something like
\f[C]https://example.com/remote.php/webdav/\f[R].
.PP
-Owncloud supports modified times using the \f[C]X-OC-Mtime\f[R] header.
+ownCloud supports modified times using the \f[C]X-OC-Mtime\f[R] header.
.SS Nextcloud
.PP
-This is configured in an identical way to Owncloud.
+This is configured in an identical way to ownCloud.
Note that Nextcloud initially did not support streaming of files
-(\f[C]rcat\f[R]) whereas Owncloud did, but
+(\f[C]rcat\f[R]) whereas ownCloud did, but
this (https://github.com/nextcloud/nextcloud-snap/issues/365) seems to
be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud
Server v19).
+.SS ownCloud Infinite Scale
+.PP
+The WebDAV URL for Infinite Scale can be found in the details panel of
+any space in Infinite Scale, provided the user has enabled its display
+via a checkbox in their personal settings.
+.PP
+Infinite Scale works with the chunking tus (https://tus.io) upload
+protocol.
+The chunk size is currently fixed at 10 MB.
.SS Sharepoint Online
.PP
Rclone can be used with Sharepoint provided by OneDrive for Business or
@@ -74872,6 +78165,526 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
+.SS v1.70.0 - 2025-06-17
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.69.0...v1.70.0)
+.IP \[bu] 2
+New backends
+.RS 2
+.IP \[bu] 2
+DOI (https://rclone.org/doi/) (Flora Thiebaut)
+.IP \[bu] 2
+FileLu (https://rclone.org/filelu/) (kingston125)
+.IP \[bu] 2
+New S3 providers:
+.RS 2
+.IP \[bu] 2
+MEGA S4 (https://rclone.org/s3/#mega) (Nick Craig-Wood)
+.IP \[bu] 2
+Pure Storage FlashBlade (https://rclone.org/s3/#pure-storage-flashblade)
+(Jeremy Daer)
+.RE
+.RE
+.IP \[bu] 2
+New commands
+.RS 2
+.IP \[bu] 2
+convmv (https://rclone.org/commands/rclone_convmv/): for moving and
+transforming files (nielash)
+.RE
+.IP \[bu] 2
+New Features
+.RS 2
+.IP \[bu] 2
+Add
+\f[C]--max-connections\f[R] (https://rclone.org/docs/#max-connections-n)
+to control maximum backend concurrency (Nick Craig-Wood)
+.IP \[bu] 2
+Add
+\f[C]--max-buffer-memory\f[R] (https://rclone.org/docs/#max-buffer-memory)
+to limit total buffer memory usage (Nick Craig-Wood)
+.IP \[bu] 2
+Add transform library and
+\f[C]--name-transform\f[R] (https://rclone.org/docs/#name-transform-command-xxxx)
+flag (nielash)
+.IP \[bu] 2
+sync: Implement
+\f[C]--list-cutoff\f[R] (https://rclone.org/docs/#list-cutoff) to allow
+on disk sorting for reduced memory use (Nick Craig-Wood)
+.IP \[bu] 2
+accounting: Add listed stat for number of directory entries listed (Nick
+Craig-Wood)
+.IP \[bu] 2
+backend: Skip hash calculation when the hashType is None (Oleksiy
+Stashok)
+.IP \[bu] 2
+build
+.RS 2
+.IP \[bu] 2
+Update to go1.24 and make go1.22 the minimum required version (Nick
+Craig-Wood)
+.IP \[bu] 2
+Disable docker builds on PRs & add missing dockerfile changes (Anagh
+Kumar Baranwal)
+.IP \[bu] 2
+Modernize Go usage (Nick Craig-Wood)
+.IP \[bu] 2
+Update all dependencies (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+cmd/authorize: Show required arguments in help text (simwai)
+.IP \[bu] 2
+cmd/config: add \f[C]--no-output\f[R] option (Jess)
+.IP \[bu] 2
+cmd/gitannex
+.RS 2
+.IP \[bu] 2
+Tweak parsing of \[dq]rcloneremotename\[dq] config (Dan McArdle)
+.IP \[bu] 2
+Permit remotes with options (Dan McArdle)
+.IP \[bu] 2
+Reject unknown layout modes in INITREMOTE (Dan McArdle)
+.RE
+.IP \[bu] 2
+docker image: Add label org.opencontainers.image.source for release
+notes in Renovate dependency updates (Robin Schneider)
+.IP \[bu] 2
+doc fixes (albertony, Andrew Kreimer, Ben Boeckel, Christoph Berger,
+Danny Garside, Dimitri Papadopoulos, eccoisle, Ed Craig-Wood, Fernando
+Fern\['a]ndez, jack, Jeff Geerling, Jugal Kishore, kingston125, luzpaz,
+Markus Gerstel, Matt Ickstadt, Michael Kebe, Nick Craig-Wood,
+PrathameshLakawade, Ser-Bul, simonmcnair, Tim White, Zachary Vorhies)
+.IP \[bu] 2
+filter:
+.RS 2
+.IP \[bu] 2
+Add \f[C]--hash-filter\f[R] to deterministically select a subset of
+files (Nick Craig-Wood)
+.IP \[bu] 2
+Show \f[C]--min-size\f[R] and \f[C]--max-size\f[R] in \f[C]--dump\f[R]
+filters (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+hash: Add SHA512 support for file hashes (Enduriel)
+.IP \[bu] 2
+http servers: Add \f[C]--user-from-header\f[R] to use for authentication
+(Moises Lima)
+.IP \[bu] 2
+lib/batcher: Deprecate unused option: batch_commit_timeout (Dan McArdle)
+.IP \[bu] 2
+log:
+.RS 2
+.IP \[bu] 2
+Remove github.com/sirupsen/logrus and replace with log/slog (Nick
+Craig-Wood)
+.IP \[bu] 2
+Add \f[C]--windows-event-log-level\f[R] to support Windows Event Log
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+rc
+.RS 2
+.IP \[bu] 2
+Add \f[C]short\f[R] parameter to \f[C]core/stats\f[R] to not return
+transferring and checking (Nick Craig-Wood)
+.IP \[bu] 2
+In \f[C]options/info\f[R] make FieldName contain a \[dq].\[dq] if it
+should be nested (Nick Craig-Wood)
+.IP \[bu] 2
+Add rc control for serve commands (Nick Craig-Wood)
+.IP \[bu] 2
+rcserver: Improve content-type check (Jonathan Giannuzzi)
+.RE
+.IP \[bu] 2
+serve nfs
+.RS 2
+.IP \[bu] 2
+Update docs to note Windows is not supported (Zachary Vorhies)
+.IP \[bu] 2
+Change the format of \f[C]--nfs-cache-type symlink\f[R] file handles
+(Nick Craig-Wood)
+.IP \[bu] 2
+Make metadata files have special file handles (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+touch: Make touch obey \f[C]--transfers\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+version: Add \f[C]--deps\f[R] flag to show dependencies and other build
+info (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+serve s3:
+.RS 2
+.IP \[bu] 2
+Fix ListObjectsV2 response (fhuber)
+.IP \[bu] 2
+Remove redundant handler initialization (Tho Neyugn)
+.RE
+.IP \[bu] 2
+stats: Fix goroutine leak and improve stats accounting process
+(Nathanael Demacon)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Add \f[C]--vfs-metadata-extension\f[R] to expose metadata sidecar files
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Add support for \f[C]x-ms-tags\f[R] header (Trevor Starick)
+.IP \[bu] 2
+Cleanup uncommitted blocks on upload errors (Nick Craig-Wood)
+.IP \[bu] 2
+Speed up server side copies for small files (Nick Craig-Wood)
+.IP \[bu] 2
+Implement multipart server side copy (Nick Craig-Wood)
+.IP \[bu] 2
+Remove uncommitted blocks on InvalidBlobOrBlock error (Nick Craig-Wood)
+.IP \[bu] 2
+Fix handling of objects with // in (Nick Craig-Wood)
+.IP \[bu] 2
+Handle retry error codes more carefully (Nick Craig-Wood)
+.IP \[bu] 2
+Fix errors not being retried when doing single part copy (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix multipart server side copies of 0 sized files (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Azurefiles
+.RS 2
+.IP \[bu] 2
+Add \f[C]--azurefiles-use-az\f[R] and
+\f[C]--azurefiles-disable-instance-discovery\f[R] (b-wimmer)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Add SkipDestructive handling to backend commands (Pat Patterson)
+.IP \[bu] 2
+Use file id from listing when not presented in headers (ahxxm)
+.RE
+.IP \[bu] 2
+Cloudinary
+.RS 2
+.IP \[bu] 2
+Automatically add/remove known media files extensions (yuval-cloudinary)
+.IP \[bu] 2
+Var naming convention (yuval-cloudinary)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Added \f[C]backend moveid\f[R] command (Spencer McCullough)
+.RE
+.IP \[bu] 2
+Dropbox
+.RS 2
+.IP \[bu] 2
+Support Dropbox Paper (Dave Vasilevsky)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Add \f[C]--ftp-http-proxy\f[R] to connect via HTTP CONNECT proxy
+.RE
+.IP \[bu] 2
+Gofile
+.RS 2
+.IP \[bu] 2
+Update to use new direct upload endpoint (wbulot)
+.RE
+.IP \[bu] 2
+Googlephotos
+.RS 2
+.IP \[bu] 2
+Update read only and read write scopes to meet Google\[aq]s
+requirements.
+(Germ\['a]n Casares)
+.RE
+.IP \[bu] 2
+Iclouddrive
+.RS 2
+.IP \[bu] 2
+Fix panic and files potentially downloaded twice (Cl\['e]ment Wehrung)
+.RE
+.IP \[bu] 2
+Internetarchive
+.RS 2
+.IP \[bu] 2
+Add \f[C]--internetarchive-metadata=\[dq]key=value\[dq]\f[R] for setting
+item metadata (Corentin Barreau)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Fix \[dq]The upload session was not found\[dq] errors (Nick Craig-Wood)
+.IP \[bu] 2
+Re-add \f[C]--onedrive-upload-cutoff\f[R] flag (Nick Craig-Wood)
+.IP \[bu] 2
+Fix crash if no metadata was updated (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Opendrive
+.RS 2
+.IP \[bu] 2
+Added \f[C]--opendrive-access\f[R] flag to handle permissions (Joel K
+Biju)
+.RE
+.IP \[bu] 2
+Pcloud
+.RS 2
+.IP \[bu] 2
+Fix \[dq]Access denied.
+You do not have permissions to perform this operation\[dq] on large
+uploads (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Fix handling of objects with // in (Nick Craig-Wood)
+.IP \[bu] 2
+Add IBM IAM signer (Alexander Minbaev)
+.IP \[bu] 2
+Split the GCS quirks into \f[C]--s3-use-x-id\f[R] and
+\f[C]--s3-sign-accept-encoding\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Implement paged listing interface ListP (Nick Craig-Wood)
+.IP \[bu] 2
+Add Pure Storage FlashBlade provider support (Jeremy Daer)
+.IP \[bu] 2
+Require custom endpoint for Lyve Cloud v2 support (PrathameshLakawade)
+.IP \[bu] 2
+MEGA S4 support (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Add \f[C]--sftp-http-proxy\f[R] to connect via HTTP CONNECT proxy (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Smb
+.RS 2
+.IP \[bu] 2
+Add support for kerberos authentication (Jonathan Giannuzzi)
+.IP \[bu] 2
+Improve connection pooling efficiency (Jonathan Giannuzzi)
+.RE
+.IP \[bu] 2
+WebDAV
+.RS 2
+.IP \[bu] 2
+Retry propfind on 425 status (J\[:o]rn Friedrich Dreyer)
+.IP \[bu] 2
+Add an ownCloud Infinite Scale vendor that enables tus chunked upload
+support (Klaas Freitag)
+.RE
+.SS v1.69.3 - 2025-05-21
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.69.2...v1.69.3)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+build: Reapply update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2
+to fix CVE-2025-30204 (dependabot[bot])
+.IP \[bu] 2
+build: Update github.com/ebitengine/purego to work around bug in
+go1.24.3 (Nick Craig-Wood)
+.RE
+.SS v1.69.2 - 2025-05-01
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.69.1...v1.69.2)
+.IP \[bu] 2
+Bug fixes
+.RS 2
+.IP \[bu] 2
+accounting: Fix percentDiff calculation (Anagh Kumar Baranwal)
+.IP \[bu] 2
+build
+.RS 2
+.IP \[bu] 2
+Update github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2 to fix
+CVE-2025-30204 (dependabot[bot])
+.IP \[bu] 2
+Update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix
+CVE-2025-30204 (dependabot[bot])
+.IP \[bu] 2
+Update golang.org/x/crypto to v0.35.0 to fix CVE-2025-22869 (Nick
+Craig-Wood)
+.IP \[bu] 2
+Update golang.org/x/net from 0.36.0 to 0.38.0 to fix CVE-2025-22870
+(dependabot[bot])
+.IP \[bu] 2
+Update golang.org/x/net to 0.36.0 to fix CVE-2025-22869
+(dependabot[bot])
+.IP \[bu] 2
+Stop building with go < go1.23 as security updates forbade it (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix docker plugin build (Anagh Kumar Baranwal)
+.RE
+.IP \[bu] 2
+cmd: Fix crash if rclone is invoked without any arguments (Janne
+Hellsten)
+.IP \[bu] 2
+config: Read configuration passwords from stdin even when terminated
+with EOF (Samantha Bowen)
+.IP \[bu] 2
+doc fixes (Andrew Kreimer, Danny Garside, eccoisle, Ed Craig-Wood,
+emyarod, jack, Jugal Kishore, Markus Gerstel, Michael Kebe, Nick
+Craig-Wood, simonmcnair, simwai, Zachary Vorhies)
+.IP \[bu] 2
+fs: Fix corruption of SizeSuffix with \[dq]B\[dq] suffix in config (eg
+--min-size) (Nick Craig-Wood)
+.IP \[bu] 2
+lib/http: Fix race between Serve() and Shutdown() (Nick Craig-Wood)
+.IP \[bu] 2
+object: Fix memory object out of bounds Seek (Nick Craig-Wood)
+.IP \[bu] 2
+operations: Fix call to fmt.Errorf with the wrong err (alingse)
+.IP \[bu] 2
+rc
+.RS 2
+.IP \[bu] 2
+Disable the metrics server when running \f[C]rclone rc\f[R]
+(hiddenmarten)
+.IP \[bu] 2
+Fix debug/* commands not being available over unix sockets (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+serve nfs: Fix unlikely crash (Nick Craig-Wood)
+.IP \[bu] 2
+stats: Fix the speed not getting updated after a pause in the processing
+(Anagh Kumar Baranwal)
+.IP \[bu] 2
+sync
+.RS 2
+.IP \[bu] 2
+Fix CPU spinning when finding empty directories with leading slashes
+(Nick Craig-Wood)
+.IP \[bu] 2
+Copy dir modtimes even when copyEmptySrcDirs is false (ll3006)
+.RE
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Fix directory cache serving stale data (Lorenz Brun)
+.IP \[bu] 2
+Fix inefficient directory caching when directory reads are slow
+(huanghaojun)
+.IP \[bu] 2
+Fix integration test failures (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Metadata: fix error when setting copy-requires-writer-permission on a
+folder (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Dropbox
+.RS 2
+.IP \[bu] 2
+Retry link without expiry (Dave Vasilevsky)
+.RE
+.IP \[bu] 2
+HTTP
+.RS 2
+.IP \[bu] 2
+Correct root if definitely pointing to a file (nielash)
+.RE
+.IP \[bu] 2
+Iclouddrive
+.RS 2
+.IP \[bu] 2
+Fix created files so they are writable (Ben Alex)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Fix metadata ordering in permissions (Nick Craig-Wood)
+.RE
+.SS v1.69.1 - 2025-02-14
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.69.0...v1.69.1)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
+.IP \[bu] 2
+bisync: Fix listings missing concurrent modifications (nielash)
+.IP \[bu] 2
+serve s3: Fix list objects encoding-type (Nick Craig-Wood)
+.IP \[bu] 2
+fs: Fix confusing \[dq]didn\[aq]t find section in config file\[dq] error
+(Nick Craig-Wood)
+.IP \[bu] 2
+doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick
+Craig-Wood, Tim White, Zachary Vorhies)
+.IP \[bu] 2
+build: Added parallel docker builds and caching for go build in the
+container (Anagh Kumar Baranwal)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Fix the cache failing to upload symlinks when \f[C]--links\f[R] was
+specified (Nick Craig-Wood)
+.IP \[bu] 2
+Fix race detected by race detector (Nick Craig-Wood)
+.IP \[bu] 2
+Close the change notify channel on Shutdown (izouxv)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Fix \[dq]fatal error: concurrent map writes\[dq] (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Iclouddrive
+.RS 2
+.IP \[bu] 2
+Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Mark German (de) region as deprecated (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Added new storage class to magalu provider (Bruno Fernandes)
+.IP \[bu] 2
+Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
+.IP \[bu] 2
+Add latest Linode Object Storage endpoints (jbagwell-akamai)
+.RE
.SS v1.69.0 - 2025-01-12
.PP
See commits (https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)
@@ -74955,7 +78768,7 @@ in http servers (Moises Lima)
.IP \[bu] 2
This was making it impossible to use unix sockets with a proxy
.IP \[bu] 2
-This might now cause rclone to need authenticaton where it didn\[aq]t
+This might now cause rclone to need authentication where it didn\[aq]t
before
.RE
.IP \[bu] 2
@@ -76881,8 +80694,8 @@ Refactor version info and icon resource handling on windows (albertony)
doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri Papadopoulos,
Herby Gillot, Joda St\[:o]\[ss]er, Manoj Ghosh, Nick Craig-Wood)
.IP \[bu] 2
-Implement \f[C]--metadata-mapper\f[R] to transform metatadata with a
-user supplied program (Nick Craig-Wood)
+Implement \f[C]--metadata-mapper\f[R] to transform metadata with a user
+supplied program (Nick Craig-Wood)
.IP \[bu] 2
Add \f[C]ChunkWriterDoesntSeek\f[R] feature flag and set it for b2 (Nick
Craig-Wood)
@@ -77202,7 +81015,7 @@ B2
Fix multipart upload: corrupted on transfer: sizes differ XXX vs 0 (Nick
Craig-Wood)
.IP \[bu] 2
-Fix locking window when getting mutipart upload URL (Nick Craig-Wood)
+Fix locking window when getting multipart upload URL (Nick Craig-Wood)
.IP \[bu] 2
Fix server side copies greater than 4GB (Nick Craig-Wood)
.IP \[bu] 2
@@ -89834,6 +93647,23 @@ info.
.PP
This has now been documented in its own remote setup
page (https://rclone.org/remote_setup/).
+.SS How can I get rid of the \[dq]Config file not found\[dq] notice?
+.PP
+If you see a notice like \[aq]NOTICE: Config file \[dq]rclone.conf\[dq]
+not found\[aq], this means you have not configured any remotes.
+.PP
+If you need to configure a remote, see the config help
+docs (https://rclone.org/docs/#configure).
+.PP
+If you are using rclone entirely with on the fly
+remotes (https://rclone.org/docs/#backend-path-to-dir), you can create
+an empty config file to get rid of this notice, for example:
+.IP
+.nf
+\f[C]
+rclone config touch
+\f[R]
+.fi
.SS Can rclone sync directly from drive to s3
.PP
Rclone can sync between two remote cloud storage systems just fine.
@@ -90090,11 +93920,23 @@ at the expense of CPU usage.
.PP
The most common cause of rclone using lots of memory is a single
directory with millions of files in.
-Rclone has to load this entirely into memory as rclone objects.
+.PP
+Before rclone v1.70, rclone had to load this entirely into memory as
+rclone objects.
Each rclone object takes 0.5k-1k of memory.
There is a workaround for
this (https://github.com/rclone/rclone/wiki/Big-syncs-with-millions-of-files)
which involves a bit of scripting.
+.PP
+However, with rclone v1.70 and later, rclone automatically saves
+directory entries to disk when a directory with more than
+\f[C]--list-cutoff\f[R] (https://rclone.org/docs/#list-cutoff)
+(1,000,000 by default) entries is detected.
+.PP
+From v1.70, rclone also has the
+--max-buffer-memory (https://rclone.org/docs/#max-buffer-memory) flag,
+which helps particularly when multi-thread transfers use too much
+memory.
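+.PP
+As a minimal illustration, a sync combining both flags might look like
+this (the remote name and paths are placeholders, and the values are
+illustrative rather than recommended):
+.IP
+.nf
+\f[C]
+rclone sync remote:big-bucket /mnt/dest --list-cutoff 500000 --max-buffer-memory 256M
+\f[R]
+.fi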
.SS Rclone changes fullwidth Unicode punctuation marks in file names
.PP
For example: On a Windows system, you have a file with name
@@ -91758,8 +95600,6 @@ Eli Orzitzer
.IP \[bu] 2
Anthony Metzidis
.IP \[bu] 2
-emyarod
-.IP \[bu] 2
keongalvin
.IP \[bu] 2
rarspace01
@@ -91992,6 +95832,123 @@ TAKEI Yuya <853320+takei-yuya@users.noreply.github.com>
.IP \[bu] 2
Francesco Frassinelli
+.IP \[bu] 2
+Matt Ickstadt
+.IP \[bu] 2
+Spencer McCullough
+.IP \[bu] 2
+Jonathan Giannuzzi
+.IP \[bu] 2
+Christoph Berger
+.IP \[bu] 2
+Tim White
+.IP \[bu] 2
+Robin Schneider
+.IP \[bu] 2
+izouxv
+.IP \[bu] 2
+Moises Lima
+.IP \[bu] 2
+Bruno Fernandes
+.IP \[bu] 2
+Corentin Barreau
+.IP \[bu] 2
+hiddenmarten
+.IP \[bu] 2
+Trevor Starick
+.IP \[bu] 2
+b-wimmer <132347192+b-wimmer@users.noreply.github.com>
+.IP \[bu] 2
+Jess
+.IP \[bu] 2
+Zachary Vorhies
+.IP \[bu] 2
+Alexander Minbaev
+.IP \[bu] 2
+Joel K Biju
+.IP \[bu] 2
+ll3006
+.IP \[bu] 2
+jbagwell-akamai <113531113+jbagwell-akamai@users.noreply.github.com>
+.IP \[bu] 2
+Michael Kebe
+.IP \[bu] 2
+Lorenz Brun
+.IP \[bu] 2
+Dave Vasilevsky
+.IP \[bu] 2
+luzpaz
+.IP \[bu] 2
+jack <9480542+jackusm@users.noreply.github.com>
+.IP \[bu] 2
+J\[:o]rn Friedrich Dreyer
+.IP \[bu] 2
+alingse
+.IP \[bu] 2
+Fernando Fern\['a]ndez
+.IP \[bu] 2
+eccoisle <167755281+eccoisle@users.noreply.github.com>
+.IP \[bu] 2
+Klaas Freitag
+.IP \[bu] 2
+Danny Garside
+.IP \[bu] 2
+Samantha Bowen
+.IP \[bu] 2
+simonmcnair <101189766+simonmcnair@users.noreply.github.com>
+.IP \[bu] 2
+huanghaojun
+.IP \[bu] 2
+Enduriel
+.IP \[bu] 2
+Markus Gerstel
+.IP \[bu] 2
+simwai <16225108+simwai@users.noreply.github.com>
+.IP \[bu] 2
+Ben Alex
+.IP \[bu] 2
+Andrew Kreimer
+.IP \[bu] 2
+Ed Craig-Wood <138211970+edc-w@users.noreply.github.com>
+.IP \[bu] 2
+Christian Richter
+<1058116+dragonchaser@users.noreply.github.com>
+.IP \[bu] 2
+Ralf Haferkamp
+.IP \[bu] 2
+Jugal Kishore
+.IP \[bu] 2
+Tho Neyugn
+.IP \[bu] 2
+Ben Boeckel
+.IP \[bu] 2
+Cl\['e]ment Wehrung
+.IP \[bu] 2
+Jeff Geerling
+.IP \[bu] 2
+Germ\['a]n Casares
+.IP \[bu] 2
+fhuber
+.IP \[bu] 2
+wbulot
+.IP \[bu] 2
+Jeremy Daer
+.IP \[bu] 2
+Oleksiy Stashok
+.IP \[bu] 2
+PrathameshLakawade
+.IP \[bu] 2
+Nathanael Demacon <7271496+quantumsheep@users.noreply.github.com>
+.IP \[bu] 2
+ahxxm
+.IP \[bu] 2
+Flora Thiebaut
+.IP \[bu] 2
+kingston125
+.IP \[bu] 2
+Ser-Bul <30335009+Ser-Bul@users.noreply.github.com>
.SH Contact the rclone project
.SS Forum
.PP