diff --git a/MANUAL.html b/MANUAL.html
index d4b683fec..29c5bfac1 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -81,7 +81,7 @@
Rclone syncs your files to cloud storage

@@ -148,6 +148,7 @@
Arvan Cloud Object Storage (AOS)
Citrix ShareFile
Cloudflare R2
+Cloudinary
DigitalOcean Spaces
Digi Storage
Dreamhost
@@ -164,6 +165,7 @@
Hetzner Storage Box
HiDrive
HTTP
+iCloud Drive
ImageKit
Internet Archive
Jottacloud
@@ -191,6 +193,7 @@
OpenStack Swift
Oracle Cloud Storage Swift
Oracle Object Storage
+Outscale
ownCloud
pCloud
Petabox
@@ -208,6 +211,7 @@
Seafile
Seagate Lyve Cloud
SeaweedFS
+Selectel
SFTP
Sia
SMB / CIFS
@@ -508,6 +512,7 @@ go build
Chunker - transparently splits large files for other remotes
Citrix ShareFile
Compress
+Cloudinary
Combine
Crypt - to encrypt other remotes
DigitalOcean Spaces
@@ -525,6 +530,7 @@ go build
Hetzner Storage Box
HiDrive
HTTP
+iCloud Drive
Internet Archive
Jottacloud
Koofr
@@ -650,6 +656,7 @@ destpath/sourcepath/two.txt
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -776,6 +783,7 @@ destpath/sourcepath/two.txt
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -880,6 +888,7 @@ destpath/sourcepath/two.txt
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -1708,6 +1717,7 @@ rclone backend help <backendname>
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -2363,6 +2373,7 @@ if src is directory
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -2964,7 +2975,9 @@ rclone mount remote:path/to/files \\cloud\remote
When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
-# OS X
+#... or on some systems
+fusermount3 -u /path/to/local/mount
+# OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.
The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support the about feature at all, then 1 PiB is set as both the total and the free size.
@@ -3048,7 +3061,7 @@ sudo ln -s /opt/local/lib/libfuse.2.dylib
Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.
systemd
When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.
-Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount is present on this PATH.
+Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount or fusermount3 program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount/fusermount3 is present on this PATH.
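+A minimal Type=notify unit might look like the following (an illustrative sketch; the remote name, mountpoint and paths are assumptions to adapt):
+# /etc/systemd/system/rclone-mount.service
+[Unit]
+Description=rclone mount of remote:
+After=network-online.target
+[Service]
+Type=notify
+ExecStart=/usr/bin/rclone mount remote: /mnt/rclone --config /etc/rclone/rclone.conf --cache-dir /var/cache/rclone
+ExecStop=/bin/fusermount -u /mnt/rclone
+[Install]
+WantedBy=multi-user.target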
Rclone as Unix mount helper
The core Unix program /bin/mount normally takes the -t FSTYPE argument then runs the /sbin/mount.FSTYPE helper program passing it mount options as -o key=val,... or --opt=.... Automount (classic or systemd) behaves in a similar way.
rclone by default expects GNU-style flags --key val. To run it as a mount helper you should symlink the rclone binary to /sbin/mount.rclone and optionally /usr/bin/rclonefs, e.g. ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and translate command-line arguments appropriately.
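For example, once the symlink is in place a single /etc/fstab entry can mount a remote at boot (an illustrative sketch; the remote, mountpoint and option spelling - rclone flags with underscores - are assumptions to adapt):
remote:path /mnt/rclone rclone rw,noauto,nofail,_netdev,x-systemd.automount,config=/etc/rclone/rclone.conf,vfs_cache_mode=writes 0 0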
@@ -3197,6 +3210,22 @@ WantedBy=multi-user.target
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
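For example, to double the parallel uploads of modified files from the cache (a sketch; remote: and /mnt/rclone are assumed names):
rclone mount remote: /mnt/rclone --vfs-cache-mode writes --transfers 8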
+Symlinks
+By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+This scheme is compatible with that used by the local backend with the --local-links flag.
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+It hasn't been tested with the other rclone serve commands yet.
+A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:
+.
+├── dir
+│   └── file.txt
+└── linked-dir -> dir
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.
+Note that there is an outstanding issue with symlink support (issue #8245) where duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).
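+For example, creating a link through a mount started with --vfs-links stores a .rclonelink file on the remote (a sketch; remote: and /mnt/rclone are assumed names):
+ln -s file.txt /mnt/rclone/link-to-file.txt
+rclone lsf remote:
+# prints: link-to-file.txt.rclonelink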
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
@@ -3233,6 +3262,7 @@ WantedBy=multi-user.target
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for mount
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -3255,6 +3285,7 @@ WantedBy=multi-user.target
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -3330,6 +3361,7 @@ if src is directory
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -3482,7 +3514,9 @@ rclone nfsmount remote:path/to/files \\cloud\remote
When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
-# OS X
+#... or on some systems
+fusermount3 -u /path/to/local/mount
+# OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.
The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support the about feature at all, then 1 PiB is set as both the total and the free size.
@@ -3566,7 +3600,7 @@ sudo ln -s /opt/local/lib/libfuse.2.dylib
Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.
systemd
When running rclone nfsmount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone nfsmount service specified as a requirement will see all files and folders immediately in this mode.
-Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount is present on this PATH.
+Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount or fusermount3 program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount/fusermount3 is present on this PATH.
Rclone as Unix mount helper
The core Unix program /bin/mount normally takes the -t FSTYPE argument then runs the /sbin/mount.FSTYPE helper program passing it mount options as -o key=val,... or --opt=.... Automount (classic or systemd) behaves in a similar way.
rclone by default expects GNU-style flags --key val. To run it as a mount helper you should symlink the rclone binary to /sbin/mount.rclone and optionally /usr/bin/rclonefs, e.g. ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and translate command-line arguments appropriately.
@@ -3715,6 +3749,22 @@ WantedBy=multi-user.target
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+This scheme is compatible with that used by the local backend with the --local-links flag.
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+It hasn't been tested with the other rclone serve commands yet.
+A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:
+.
+├── dir
+│   └── file.txt
+└── linked-dir -> dir
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.
+Note that there is an outstanding issue with symlink support (issue #8245) where duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).
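+For example, to expose symlinks through an NFS-backed mount (a sketch; remote: and /mnt/rclone are assumed names):
+rclone nfsmount remote: /mnt/rclone --vfs-links --vfs-cache-mode full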
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
@@ -3752,6 +3802,7 @@ WantedBy=multi-user.target
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for nfsmount
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -3778,6 +3829,7 @@ WantedBy=multi-user.target
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -3912,17 +3964,17 @@ ffmpeg - | rclone rcat remote:path/to/file
Server options
Use --rc-addr to specify which IP address and port the server should listen on, eg --rc-addr 1.2.3.4:8000 or --rc-addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --rc-addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
-You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.
+You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name.
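+For example, to serve the remote control API on a unix socket rather than a TCP port (a sketch; the socket path is an assumption):
+rclone rcd --rc-addr unix:///run/rclone.sock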
--rc-addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.
--rc-server-read-timeout and --rc-server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--rc-max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
--rc-baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --rc-baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --rc-baseurl, so --rc-baseurl "rclone", --rc-baseurl "/rclone" and --rc-baseurl "/rclone/" are all treated identically.
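For example, to serve the API under a /rclone/ prefix ready to sit behind a reverse proxy (a sketch):
rclone rcd --rc-addr :5572 --rc-baseurl /rclone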
TLS (SSL)
By default this will serve over http. If you want you can serve over https. You will need to supply the --rc-cert and --rc-key flags. If you wish to do client side certificate validation then you will need to supply --rc-client-ca also.
---rc-cert
should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --krc-ey
should be the PEM encoded private key and --rc-client-ca
should be the PEM encoded client certificate authority certificate.
+--rc-cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --rc-key must be set to the path of a file with the PEM encoded private key. If setting --rc-client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.
--rc-min-tls-version is minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
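For example, to create a self-signed certificate pair and serve the API over https (an illustrative sketch; the file names and CN are assumptions):
openssl req -x509 -newkey rsa:4096 -nodes -keyout key.pem -out cert.pem -days 365 -subj "/CN=localhost"
rclone rcd --rc-addr :5572 --rc-cert cert.pem --rc-key key.pem --rc-min-tls-version tls1.2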
Socket activation
-Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --rc-addr`).
+Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --rc-addr).
This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
Socket activation can be tested ad-hoc with the systemd-socket-activate command
systemd-socket-activate -l 8000 -- rclone serve
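For persistent use, a matching pair of systemd unit files can hand the listening socket to rclone (a minimal sketch; the unit names and port are assumptions):
# rclone-rc.socket
[Socket]
ListenStream=5572
[Install]
WantedBy=sockets.target
# rclone-rc.service (same name as the socket unit)
[Service]
ExecStart=/usr/bin/rclone rcd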
@@ -4056,7 +4108,7 @@ htpasswd -B htpasswd anotherUser
RC Options
Flags to control the Remote Control API
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -4281,6 +4333,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+This scheme is compatible with that used by the local backend with the --local-links flag.
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+It hasn't been tested with the other rclone serve commands yet.
+A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:
+.
+├── dir
+│   └── file.txt
+└── linked-dir -> dir
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.
+Note that there is an outstanding issue with symlink support (issue #8245) where duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
@@ -4307,6 +4375,7 @@ htpasswd -B htpasswd anotherUser
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
--interface stringArray The interface to use for SSDP (repeat as necessary)
+ --link-perms FileMode Link permissions (default 666)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
@@ -4325,6 +4394,7 @@ htpasswd -B htpasswd anotherUser
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -4486,6 +4556,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+This scheme is compatible with that used by the local backend with the --local-links flag.
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+It hasn't been tested with the other rclone serve commands yet.
+A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:
+.
+├── dir
+│   └── file.txt
+└── linked-dir -> dir
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.
+Note that there is an outstanding issue with symlink support (issue #8245) where duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
@@ -4524,6 +4610,7 @@ htpasswd -B htpasswd anotherUser
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for docker
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -4549,6 +4636,7 @@ htpasswd -B htpasswd anotherUser
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -4712,6 +4800,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+This scheme is compatible with that used by the local backend with the --local-links flag.
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+It hasn't been tested with the other rclone serve commands yet.
+A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:
+.
+├── dir
+│   └── file.txt
+└── linked-dir -> dir
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.
+Note that there is an outstanding issue with symlink support (issue #8245) where duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
@@ -4769,6 +4873,7 @@ htpasswd -B htpasswd anotherUser
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for ftp
--key string TLS PEM Private key
+ --link-perms FileMode Link permissions (default 666)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -4789,6 +4894,7 @@ htpasswd -B htpasswd anotherUser
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -4837,17 +4943,17 @@ htpasswd -B htpasswd anotherUser
Server options
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
-You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.
+You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name.
--addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
TLS (SSL)
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
---cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
+--cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key must be set to the path of a file with the PEM encoded private key. If setting --client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.
--min-tls-version is minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
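For example, to serve a remote over https with an existing certificate pair (a sketch; remote: and the file names are assumptions):
rclone serve http remote: --addr :8443 --cert cert.pem --key key.pem --min-tls-version tls1.2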
Socket activation
-Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr).
This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
Socket activation can be tested ad-hoc with the systemd-socket-activate command
systemd-socket-activate -l 8000 -- rclone serve
@@ -5087,6 +5193,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+This scheme is compatible with that used by the local backend with the --local-links flag.
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+It hasn't been tested with the other rclone serve commands yet.
+A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:
+.
+├── dir
+│   └── file.txt
+└── linked-dir -> dir
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.
+Note that there is an outstanding issue with symlink support (issue #8245) where duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
@@ -5139,15 +5261,16 @@ htpasswd -B htpasswd anotherUser
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -5173,6 +5296,7 @@ htpasswd -B htpasswd anotherUser
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5222,7 +5346,7 @@ htpasswd -B htpasswd anotherUser
Modifying files through the NFS protocol requires VFS caching. Usually you will need to specify --vfs-cache-mode in order to be able to write to the mountpoint (full is recommended). If you don't specify VFS cache mode, the mount will be read-only.
--nfs-cache-type controls the type of the NFS handle cache. By default this is memory where new handles will be randomly allocated when needed. These are stored in memory. If the server is restarted the handle cache will be lost and connected NFS clients will get stale handle errors.
--nfs-cache-type disk uses an on disk NFS handle cache. Rclone hashes the path of the object and stores it in a file named after the hash. These hashes are stored on disk in the directory controlled by --cache-dir or the exact directory may be specified with --nfs-cache-dir. Using this means that the NFS server can be restarted at will without affecting the connected clients.
---nfs-cache-type symlink is similar to --nfs-cache-type disk in that it uses an on disk cache, but the cache entries are held as symlinks. Rclone will use the handle of the underlying file as the NFS handle which improves performance. This sort of cache can't be backed up and restored as the underlying handles will change. This is Linux only.
+--nfs-cache-type symlink is similar to --nfs-cache-type disk in that it uses an on disk cache, but the cache entries are held as symlinks. Rclone will use the handle of the underlying file as the NFS handle which improves performance. This sort of cache can't be backed up and restored as the underlying handles will change. This is Linux only. It requires running rclone as root or with CAP_DAC_READ_SEARCH. You can grant rclone this extra permission by running sudo setcap cap_dac_read_search+ep /path/to/rclone on the rclone binary.
--nfs-cache-handle-limit controls the maximum number of cached NFS handles stored by the caching handler. This should not be set too low or you may experience errors when trying to access files. The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems. This is only used by the memory type cache.
To serve NFS over the network use the following command:
rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
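The export can then be mounted from a client machine (a sketch reusing $PORT, with the server's hostname in $HOSTNAME):
mount -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ /path/to/mountpoint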
@@ -5343,6 +5467,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+This scheme is compatible with that used by the local backend with the --local-links flag.
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+It hasn't been tested with the other rclone serve commands yet.
+A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:
+.
+├── dir
+│   └── file.txt
+└── linked-dir -> dir
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.
+Note that there is an outstanding issue with symlink support (issue #8245) where duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
@@ -5367,6 +5507,7 @@ htpasswd -B htpasswd anotherUser
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for nfs
+ --link-perms FileMode Link permissions (default 666)
--nfs-cache-dir string The directory the NFS handle cache will use if set
--nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000)
--nfs-cache-type memory|disk|symlink Type of NFS handle cache to use (default memory)
@@ -5386,6 +5527,7 @@ htpasswd -B htpasswd anotherUser
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5470,17 +5612,17 @@ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
Server options
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
-You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.
+You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name.
--addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
TLS (SSL)
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
---cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
+--cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key must be set to the path of a file with the PEM encoded private key. If setting --client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.
--min-tls-version is minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
Socket activation
-Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr).
This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
Socket activation can be tested ad-hoc with the systemd-socket-activate command
systemd-socket-activate -l 8000 -- rclone serve
@@ -5503,11 +5645,11 @@ htpasswd -B htpasswd anotherUser
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
@@ -5602,17 +5744,17 @@ htpasswd -B htpasswd anotherUser
Server options
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
-You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.
+You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name.
--addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
TLS (SSL)
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
---cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
+--cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key must be set to the path of a file with the PEM encoded private key. If setting --client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.
--min-tls-version is minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
Socket activation
-Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr).
This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
Socket activation can be tested ad-hoc with the systemd-socket-activate command
systemd-socket-activate -l 8000 -- rclone serve
@@ -5729,6 +5871,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+By default the VFS does not support symlinks. However this may be enabled with either of the following flags:
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt
would be stored on cloud storage as link-to-file.txt.rclonelink
and the contents would be the path to the symlink destination.
+Note that --links
enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links
just enables it for the VFS layer.
+This scheme is compatible with that used by the local backend with the --local-links flag.
+The --vfs-links
flag has been designed for rclone mount
, rclone nfsmount
and rclone serve nfs
.
+It hasn't been tested with the other rclone serve
commands yet.
+A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+The VFS will correctly resolve linked-dir
but not linked-dir/file.txt
. This is not a problem for the tested commands but may be for other commands.
+Note that there is an outstanding issue with symlink support (issue #8245): duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).
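+To illustrate the storage scheme described above (the remote name is illustrative):
+ln -s file.txt link-to-file.txt
+rclone copy -l . remote:backup
+rclone lsf remote:backup
+would list link-to-file.txt.rclonelink, a text file whose contents are the link target path.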
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
@@ -5752,8 +5910,8 @@ htpasswd -B htpasswd anotherUser
--auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
@@ -5762,7 +5920,8 @@ htpasswd -B htpasswd anotherUser
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for s3
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -5788,6 +5947,7 @@ htpasswd -B htpasswd anotherUser
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5960,6 +6120,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers
has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+By default the VFS does not support symlinks. However this may be enabled with either of the following flags:
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt
would be stored on cloud storage as link-to-file.txt.rclonelink
and the contents would be the path to the symlink destination.
+Note that --links
enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links
just enables it for the VFS layer.
+This scheme is compatible with that used by the local backend with the --local-links flag.
+The --vfs-links
flag has been designed for rclone mount
, rclone nfsmount
and rclone serve nfs
.
+It hasn't been tested with the other rclone serve
commands yet.
+A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+The VFS will correctly resolve linked-dir
but not linked-dir/file.txt
. This is not a problem for the tested commands but may be for other commands.
+Note that there is an outstanding issue with symlink support (issue #8245): duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).
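+For example, to expose translated symlinks through a mount (the mount point is illustrative):
+rclone mount --vfs-links remote: /mnt/remote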
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
@@ -6017,6 +6193,7 @@ htpasswd -B htpasswd anotherUser
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for sftp
--key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
+ --link-perms FileMode Link permissions (default 666)
--no-auth Allow connections with no authentication if set
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
@@ -6037,6 +6214,7 @@ htpasswd -B htpasswd anotherUser
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -6097,17 +6275,17 @@ htpasswd -B htpasswd anotherUser
Server options
Use --addr
to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000
or --addr :8080
to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr
to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
-You can use a unix socket by setting the url to unix:///path/to/socket
or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.
+You can use a unix socket by setting the url to unix:///path/to/socket
or just by using an absolute path name.
--addr
may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.
--server-read-timeout
and --server-write-timeout
can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes
controls the maximum number of bytes the server will accept in the HTTP header.
--baseurl
controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone"
then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl
, so --baseurl "rclone"
, --baseurl "/rclone"
and --baseurl "/rclone/"
are all treated identically.
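For example (a sketch), to serve under a proxy-friendly prefix:
rclone serve webdav remote:path --baseurl /rclone
after which the server responds under URLs starting with "/rclone/".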
TLS (SSL)
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert
and --key
flags. If you wish to do client side certificate validation then you will need to supply --client-ca
also.
---cert
should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key
should be the PEM encoded private key and --client-ca
should be the PEM encoded client certificate authority certificate.
+--cert
must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key
must be set to the path of a file with the PEM encoded private key. If setting --client-ca
, it should be set to the path of a file with PEM encoded client certificate authority certificates.
--min-tls-version
is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
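For example (an illustrative sketch; the certificate and key paths are hypothetical):
rclone serve webdav remote:path --cert /etc/rclone/server.crt --key /etc/rclone/server.key --min-tls-version tls1.2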
Socket activation
-Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr
).
This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
Socket activation can be tested ad-hoc with the systemd-socket-activate
command
systemd-socket-activate -l 8000 -- rclone serve
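For a permanent setup, with .socket and .service unit files installed as described above (the unit name is illustrative), the listener can be enabled with:
systemctl enable --now rclone-serve.socket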
@@ -6347,6 +6525,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers
has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+By default the VFS does not support symlinks. However this may be enabled with either of the following flags:
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt
would be stored on cloud storage as link-to-file.txt.rclonelink
and the contents would be the path to the symlink destination.
+Note that --links
enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links
just enables it for the VFS layer.
+This scheme is compatible with that used by the local backend with the --local-links flag.
+The --vfs-links
flag has been designed for rclone mount
, rclone nfsmount
and rclone serve nfs
.
+It hasn't been tested with the other rclone serve
commands yet.
+A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+The VFS will correctly resolve linked-dir
but not linked-dir/file.txt
. This is not a problem for the tested commands but may be for other commands.
+Note that there is an outstanding issue with symlink support (issue #8245): duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).
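+For example, one of the tested commands with symlink translation enabled (a sketch):
+rclone serve nfs --vfs-links remote: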
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
@@ -6399,8 +6593,8 @@ htpasswd -B htpasswd anotherUser
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
@@ -6409,7 +6603,8 @@ htpasswd -B htpasswd anotherUser
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -6435,6 +6630,7 @@ htpasswd -B htpasswd anotherUser
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -6583,6 +6779,7 @@ htpasswd -B htpasswd anotherUser
--chargen Fill files with a ASCII chargen pattern
--files int Number of files to create (default 1000)
--files-per-directory int Average number of files per directory (default 10)
+ --flat If set create all files in the root directory
-h, --help help for makefiles
--max-depth int Maximum depth of directory hierarchy (default 10)
--max-file-size SizeSuffix Maximum size of files to create (default 100)
@@ -7257,6 +7454,11 @@ y/n/s/!/q> n
--leave-root
During rmdirs it will not remove root directory, even if it's empty.
+--links / -l
+Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).
+If you supply this flag then rclone will copy symbolic links from any supported backend, and store them as text files, with a .rclonelink
suffix in the destination.
+The text file will contain the target of the symbolic link.
+The --links
/ -l
flag enables this feature for all supported backends and the VFS. There are individual flags for just enabling it for the VFS --vfs-links
and the local backend --local-links
if required.
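+For example (a sketch):
+rclone sync -l /home/me/files remote:backup
+stores each local symlink as a .rclonelink text file on the remote.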
--log-file=FILE
Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v
flag. See the Logging section for more info.
If FILE exists then rclone will append to it.
@@ -7328,55 +7530,55 @@ y/n/s/!/q> n
ID
is the source ID
of the object if known.
Metadata
is the backend specific metadata as described in the backend docs.
-{
- "SrcFs": "gdrive:",
- "SrcFsType": "drive",
- "DstFs": "newdrive:user",
- "DstFsType": "onedrive",
- "Remote": "test.txt",
- "Size": 6,
- "MimeType": "text/plain; charset=utf-8",
- "ModTime": "2022-10-11T17:53:10.286745272+01:00",
- "IsDir": false,
- "ID": "xyz",
- "Metadata": {
- "btime": "2022-10-11T16:53:11Z",
- "content-type": "text/plain; charset=utf-8",
- "mtime": "2022-10-11T17:53:10.286745272+01:00",
- "owner": "user1@domain1.com",
- "permissions": "...",
- "description": "my nice file",
- "starred": "false"
- }
-}
+{
+ "SrcFs": "gdrive:",
+ "SrcFsType": "drive",
+ "DstFs": "newdrive:user",
+ "DstFsType": "onedrive",
+ "Remote": "test.txt",
+ "Size": 6,
+ "MimeType": "text/plain; charset=utf-8",
+ "ModTime": "2022-10-11T17:53:10.286745272+01:00",
+ "IsDir": false,
+ "ID": "xyz",
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain1.com",
+ "permissions": "...",
+ "description": "my nice file",
+ "starred": "false"
+ }
+}
The program should then modify the input as desired and send it to STDOUT. The returned Metadata
field will be used in its entirety for the destination object. Any other fields will be ignored. Note in this example we translate user names and permissions and add something to the description:
-{
- "Metadata": {
- "btime": "2022-10-11T16:53:11Z",
- "content-type": "text/plain; charset=utf-8",
- "mtime": "2022-10-11T17:53:10.286745272+01:00",
- "owner": "user1@domain2.com",
- "permissions": "...",
- "description": "my nice file [migrated from domain1]",
- "starred": "false"
- }
-}
+{
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain2.com",
+ "permissions": "...",
+ "description": "my nice file [migrated from domain1]",
+ "starred": "false"
+ }
+}
Metadata can be removed here too.
An example python program might look something like this to implement the above transformations.
-import sys, json
-
-i = json.load(sys.stdin)
-metadata = i["Metadata"]
-# Add tag to description
-if "description" in metadata:
- metadata["description"] += " [migrated from domain1]"
-else:
- metadata["description"] = "[migrated from domain1]"
-# Modify owner
-if "owner" in metadata:
- metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
-o = { "Metadata": metadata }
-json.dump(o, sys.stdout, indent="\t")
+import sys, json
+
+i = json.load(sys.stdin)
+metadata = i["Metadata"]
+# Add tag to description
+if "description" in metadata:
+ metadata["description"] += " [migrated from domain1]"
+else:
+ metadata["description"] = "[migrated from domain1]"
+# Modify owner
+if "owner" in metadata:
+ metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
+o = { "Metadata": metadata }
+json.dump(o, sys.stdout, indent="\t")
You can find this example (slightly expanded) in the rclone source code at bin/test_metadata_mapper.py.
If you want to see the input to the metadata mapper and the output returned from it in the log you can use -vv --dump mapper
.
See the metadata section for more info.
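As a hedged sketch of wiring the mapper into a transfer (using the example script shipped with rclone and the remotes from the JSON above; exact quoting of the mapper command may vary):
rclone copy gdrive: newdrive:user -M --metadata-mapper "python bin/test_metadata_mapper.py" -vv --dump mapper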
@@ -7827,9 +8029,9 @@ export RCLONE_CONFIG_PASS
When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q
) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.
List of exit codes
-0
- success
-1
- Syntax or usage error
-2
- Error not otherwise categorised
+0
- Success
+1
- Error not otherwise categorised
+2
- Syntax or usage error
3
- Directory not found
4
- File not found
5
- Temporary error (one that more retries might fix) (Retry errors)
@@ -7849,6 +8051,38 @@ export RCLONE_CONFIG_PASS
Verbosity is slightly different, the environment variable equivalent of --verbose
or -v
is RCLONE_VERBOSE=1
, or for -vv
, RCLONE_VERBOSE=2
.
The same parser is used for the options and the environment variables so they take exactly the same form.
The options set by environment variables can be seen with the -vv
flag, e.g. rclone version -vv
.
+Options that can appear multiple times (type stringArray
) are treated slightly differently, as environment variables can only be defined once. In order to allow a simple mechanism for adding one or many items, the input is treated as a CSV encoded string. For example:
+Environment Variable | Equivalent options
+RCLONE_EXCLUDE="*.jpg" | --exclude "*.jpg"
+RCLONE_EXCLUDE="*.jpg,*.png" | --exclude "*.jpg" --exclude "*.png"
+RCLONE_EXCLUDE='"*.jpg","*.png"' | --exclude "*.jpg" --exclude "*.png"
+RCLONE_EXCLUDE='"/directory with comma , in it /**"' | --exclude "/directory with comma , in it /**"
+If stringArray
options are defined as environment variables and options on the command line then all the values will be used.
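+For example, defining the variable and passing a flag as well (illustrative):
+export RCLONE_EXCLUDE="*.jpg"
+rclone lsf remote: --exclude "*.png"
+excludes both *.jpg and *.png.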
Config file
You can set defaults for values in the config file on an individual remote basis. The names of the config items are documented in the page for each backend.
To find the name of the environment variable you need to set, take RCLONE_CONFIG_
+ name of remote + _
+ name of config file option and make it all uppercase. Note one implication here is the remote's name must be convertible into a valid environment variable name, so it can only contain letters, digits, or the _
(underscore) character.
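For example, for a remote named mys3 (illustrative), the s3 region config item could be set with:
export RCLONE_CONFIG_MYS3_REGION=eu-west-2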
@@ -8274,8 +8508,8 @@ file2.avi
This flag can be repeated. See above for the order filter flags are processed in.
The --include-from
flag is useful where multiple include filter rules are applied to an rclone command.
--include-from
implies --exclude **
at the end of an rclone internal filter list. Therefore if you mix --include
and --include-from
flags with --exclude
, --exclude-from
, --filter
or --filter-from
, you must use include rules for all the files you want in the include statement. For more flexibility use the --filter-from
flag.
---exclude-from
has no effect when combined with --files-from
or --files-from-raw
flags.
---exclude-from
followed by -
reads filter rules from standard input.
+--include-from
has no effect when combined with --files-from
or --files-from-raw
flags.
+--include-from
followed by -
reads filter rules from standard input.
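+For example, feeding a single include rule from standard input (a sketch):
+echo "*.jpg" | rclone lsf remote: --include-from -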
--filter
- Add a file-filtering rule
Specifies path/file names to an rclone command, based on a single include or exclude rule, in +
or -
format.
This flag can be repeated. See above for the order filter flags are processed in.
@@ -8287,12 +8521,14 @@ file2.avi
Adds path/file names to an rclone command based on rules in a named file. The file contains a list of remarks and pattern rules. Include rules start with +
and exclude rules with -
. !
clears existing rules. Rules are processed in the order they are defined.
This flag can be repeated. See above for the order filter flags are processed in.
Arrange the order of filter rules with the most restrictive first and work down.
+Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. Use -vv --dump filters
to see how they appear in the final regexp.
E.g. for filter-file.txt
:
# a sample filter rule file
- secret*.jpg
+ *.jpg
+ *.png
+ file2.avi
+- /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
@@ -8316,6 +8552,7 @@ file2.avi
Adds path/files to an rclone command from a list in a named file. Rclone processes the path/file names in the order of the list, and no others.
Other filter flags (--include
, --include-from
, --exclude
, --exclude-from
, --filter
and --filter-from
) are ignored when --files-from
is used.
--files-from
expects a list of files as its input. Leading or trailing whitespace is stripped from the input lines. Lines starting with #
or ;
are ignored.
+--files-from
followed by -
reads the list of files from standard input.
Rclone commands with a --files-from
flag traverse the remote, treating the names in --files-from
as a set of filters.
If the --no-traverse
and --files-from
flags are used together an rclone command does not traverse the remote. Instead it addresses each path/file named in the file individually. For each path/file name, that requires typically 1 API call. This can be efficient for a short --files-from
list and a remote containing many files.
Rclone commands do not error if any names in the --files-from
file are missing from the source remote.
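For example, a minimal sketch that copies only the names listed in files-from.txt:
rclone copy --files-from files-from.txt /home/me/pics remote:pics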
@@ -8506,19 +8743,19 @@ dir1/dir2/dir3/.ignore
If you just want to run a remote control then see the rcd command.
Supported parameters
--rc
-Flag to start the http server listen on remote requests
+Flag to start the http server listening for remote requests.
--rc-addr=IP
-IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+IPaddress:Port or :Port to bind server to (default "localhost:5572").
--rc-cert=KEY
-SSL PEM key (concatenation of certificate and CA certificate)
+SSL PEM key (concatenation of certificate and CA certificate).
--rc-client-ca=PATH
-Client certificate authority to verify clients with
+Client certificate authority to verify clients with.
--rc-htpasswd=PATH
-htpasswd file - if not provided no authentication is done
+htpasswd file - if not provided no authentication is done.
--rc-key=PATH
-SSL PEM Private key
+TLS PEM private key file.
--rc-max-header-bytes=VALUE
-Maximum size of request header (default 4096)
+Maximum size of request header (default 4096).
--rc-min-tls-version=VALUE
The minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
--rc-user=VALUE
@@ -8526,11 +8763,11 @@ dir1/dir2/dir3/.ignore
--rc-pass=VALUE
Password for authentication.
--rc-realm=VALUE
-Realm for authentication (default "rclone")
+Realm for authentication (default "rclone").
--rc-server-read-timeout=DURATION
-Timeout for server reading data (default 1h0m0s)
+Timeout for server reading data (default 1h0m0s).
--rc-server-write-timeout=DURATION
-Timeout for server writing data (default 1h0m0s)
+Timeout for server writing data (default 1h0m0s).
--rc-serve
Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object
Default Off.
@@ -8579,13 +8816,17 @@ dir1/dir2/dir3/.ignore
User-specified template.
Accessing the remote control via the rclone rc command
Rclone itself implements the remote control protocol in its rclone rc
command.
-You can use it like this
+You can use it like this:
$ rclone rc rc/noop param1=one param2=two
{
"param1": "one",
"param2": "two"
}
-Run rclone rc
on its own to see the help for the installed remote control commands.
+If the remote is running on a different URL than the default http://localhost:5572/
, use the --url
option to specify it:
+$ rclone rc --url http://some.remote:1234/ rc/noop
+Or, if the remote is listening on a Unix socket, use the --unix-socket
option instead:
+$ rclone rc --unix-socket /tmp/rclone.sock rc/noop
+Run rclone rc
on its own, without any commands, to see the help for the installed remote control commands. Note that this also needs to connect to the remote server.
rclone rc
also supports a --json
flag which can be used to send more complicated input parameters.
$ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop
@@ -9931,6 +10172,7 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&
fs
- select the VFS in use (optional)
id
- a numeric ID as returned from vfs/queue
expiry
- a new expiry time as floating point seconds
+relative
- if set, expiry is to be treated as relative to the current expiry (optional, boolean)
This returns an empty result on success, or an error.
This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
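For example, to give a queued item 60 more seconds relative to its current expiry (the id is illustrative, as returned by vfs/queue):
rclone rc vfs/queue-set-expiry id=7 expiry=60 relative=true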
@@ -10164,6 +10406,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
+Cloudinary | MD5 | R | No | Yes | - | - |
Dropbox |
DBHASH ¹ |
R |
@@ -10172,7 +10423,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Enterprise File Fabric |
- |
R/W |
@@ -10181,7 +10432,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
- |
-
+
Files.com |
MD5, CRC32 |
DR/W |
@@ -10190,7 +10441,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
FTP |
- |
R/W ¹⁰ |
@@ -10199,7 +10450,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Gofile |
MD5 |
DR/W |
@@ -10208,7 +10459,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
Google Cloud Storage |
MD5 |
R/W |
@@ -10217,7 +10468,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
- |
-
+
Google Drive |
MD5, SHA1, SHA256 |
DR/W |
@@ -10226,7 +10477,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
DRWU |
-
+
Google Photos |
- |
- |
@@ -10235,7 +10486,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
HDFS |
- |
R/W |
@@ -10244,7 +10495,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
HiDrive |
HiDrive ¹² |
R/W |
@@ -10253,7 +10504,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
HTTP |
- |
R |
@@ -10262,6 +10513,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
+iCloud Drive | - | R | No | No | - | - |
Internet Archive |
MD5, SHA1, CRC32 |
@@ -10382,7 +10642,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
pCloud |
MD5, SHA1 ⁷ |
-R |
+R/W |
No |
No |
W |
@@ -11215,6 +11475,20 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
+Cloudinary | No | No | No | No | No | No | Yes | No | No | No | No |
Enterprise File Fabric |
Yes |
Yes |
@@ -11228,7 +11502,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
Yes |
-
+
Files.com |
Yes |
Yes |
@@ -11242,7 +11516,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
Yes |
-
+
FTP |
No |
No |
@@ -11256,7 +11530,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
Yes |
-
+
Gofile |
Yes |
Yes |
@@ -11270,21 +11544,21 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
Google Cloud Storage |
Yes |
Yes |
No |
No |
No |
-Yes |
+No |
Yes |
No |
No |
No |
No |
-
+
Google Drive |
Yes |
Yes |
@@ -11298,7 +11572,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
Google Photos |
No |
No |
@@ -11312,7 +11586,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
No |
-
+
HDFS |
Yes |
No |
@@ -11326,7 +11600,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
HiDrive |
Yes |
Yes |
@@ -11340,7 +11614,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
Yes |
-
+
HTTP |
No |
No |
@@ -11354,6 +11628,20 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
Yes |
+iCloud Drive | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
ImageKit |
Yes |
@@ -11505,7 +11793,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
No |
No |
-No |
+Yes |
Yes |
@@ -11868,6 +12156,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -11932,7 +12221,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
Flags helpful for increasing performance.
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -12033,7 +12322,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
RC
Flags to control the Remote Control API.
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -12063,7 +12352,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--rc-web-gui-update Check and update to latest version of web gui
Metrics
Flags to control the Metrics HTTP endpoint.
- --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
+ --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -12097,6 +12386,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
+ --azureblob-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -12114,6 +12404,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azureblob-tenant string ID of the service principal's tenant. Also called its directory ID
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
+ --azureblob-use-az Use Azure CLI tool az for authentication
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -12163,6 +12454,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--box-auth-url string Auth server URL
--box-box-config-file string Box App config.json location
--box-box-sub-type string (default "user")
+ --box-client-credentials Use client credentials OAuth flow
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
@@ -12201,6 +12493,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
+ --cloudinary-api-key string Cloudinary API Key
+ --cloudinary-api-secret string Cloudinary API Secret
+ --cloudinary-cloud-name string Cloudinary Environment Name
+ --cloudinary-description string Description of the remote
+ --cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
+ --cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--combine-description string Description of the remote
--combine-upstreams SpaceSepList Upstreams for combining
--compress-description string Description of the remote
@@ -12227,6 +12527,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--drive-auth-owner-only Only consider files owned by the authenticated user
--drive-auth-url string Auth server URL
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
+ --drive-client-credentials Use client credentials OAuth flow
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
@@ -12277,6 +12578,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
+ --dropbox-client-credentials Use client credentials OAuth flow
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
@@ -12323,6 +12625,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
+ --ftp-no-check-upload Don't check the upload is OK
--ftp-pass string FTP password (obscured)
--ftp-port int FTP port number (default 21)
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
@@ -12331,10 +12634,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username (default "$USER")
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
+ --gcs-access-token string Short-lived access token
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
+ --gcs-client-credentials Use client credentials OAuth flow
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
@@ -12363,11 +12668,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ --gphotos-client-credentials Use client credentials OAuth flow
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-description string Description of the remote
--gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
+ --gphotos-proxy string Use the gphotosdl proxy for downloading the full resolution images
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
@@ -12386,6 +12693,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
--hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi)
+ --hidrive-client-credentials Use client credentials OAuth flow
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-description string Description of the remote
@@ -12405,6 +12713,11 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
+ --iclouddrive-apple-id string Apple ID
+ --iclouddrive-client-id string Client id (default "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d")
+ --iclouddrive-description string Description of the remote
+ --iclouddrive-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --iclouddrive-password string Password (obscured)
--imagekit-description string Description of the remote
--imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
--imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
@@ -12422,6 +12735,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
+ --jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
--jottacloud-description string Description of the remote
@@ -12443,11 +12757,11 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--koofr-user string Your user name
--linkbox-description string Description of the remote
--linkbox-token string Token from https://www.linkbox.to/admin/account
- -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-description string Description of the remote
--local-encoding Encoding The encoding for the backend (default Slash,Dot)
+ --local-links Translate symlinks to/from regular files with a '.rclonelink' extension for the local backend
--local-no-check-updated Don't check to see if the files change during upload
--local-no-clone Disable reflink cloning for server-side copies
--local-no-preallocate Disable preallocation of disk space for transferred files
@@ -12459,6 +12773,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-auth-url string Auth server URL
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
+ --mailru-client-credentials Use client credentials OAuth flow
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
--mailru-description string Description of the remote
@@ -12489,6 +12804,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--onedrive-auth-url string Auth server URL
--onedrive-av-override Allows download of files the server thinks has a virus
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
+ --onedrive-client-credentials Use client credentials OAuth flow
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-delta If set rclone will use delta listing to implement recursive listings
@@ -12508,11 +12824,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--onedrive-region string Choose national cloud region for OneDrive (default "global")
--onedrive-root-folder-id string ID of the root folder
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead
+ --onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
- --oos-compartment string Object storage compartment OCID
+ --oos-compartment string Specify compartment OCID, if you need to list buckets
--oos-config-file string Path to OCI config file (default "~/.oci/config")
--oos-config-profile string Profile name inside the oci config file (default "Default")
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
@@ -12541,6 +12858,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
+ --pcloud-client-credentials Use client credentials OAuth flow
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-description string Description of the remote
@@ -12551,26 +12869,25 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
- --pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
- --pikpak-client-id string OAuth Client Id
- --pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
+ --pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
+ --pikpak-no-media-link Use original file links instead of media links
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
- --pikpak-token string OAuth Access Token as a JSON blob
- --pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
+ --pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
--pixeldrain-description string Description of the remote
--pixeldrain-root-folder-id string Root of the filesystem to use (default "me")
--premiumizeme-auth-url string Auth server URL
+ --premiumizeme-client-credentials Use client credentials OAuth flow
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
--premiumizeme-description string Description of the remote
@@ -12588,6 +12905,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--protondrive-replace-existing-draft Create a new revision when filename conflict is detected
--protondrive-username string The username of your proton account
--putio-auth-url string Auth server URL
+ --putio-client-credentials Use client credentials OAuth flow
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
--putio-description string Description of the remote
@@ -12621,6 +12939,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
--s3-description string Description of the remote
+ --s3-directory-bucket Set to use AWS Directory Buckets
--s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
@@ -12702,6 +13021,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH shell commands
--sftp-port int SSH port number (default 22)
+ --sftp-pubkey string SSH public certificate for public certificate based authentication
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
--sftp-set-env SpaceSepList Environment variables to pass to sftp and commands
@@ -12717,6 +13037,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sftp-user string SSH username (default "$USER")
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
+ --sharefile-client-credentials Use client credentials OAuth flow
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
--sharefile-description string Description of the remote
@@ -12806,6 +13127,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--uptobox-description string Description of the remote
--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
+ --webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-description string Description of the remote
@@ -12821,6 +13143,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-auth-url string Auth server URL
+ --yandex-client-credentials Use client credentials OAuth flow
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-description string Description of the remote
@@ -12830,13 +13153,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
+ --zoho-client-credentials Use client credentials OAuth flow
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-description string Description of the remote
--zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
- --zoho-token-url string Token server url
+ --zoho-token-url string Token server url
+ --zoho-upload-cutoff SizeSuffix Cutoff for switching to large file upload api (>= 10 MiB) (default 10Mi)
Docker Volume Plugin
Introduction
Docker 1.9 has added support for creating named volumes via command-line interface and mounting them in containers as a way to share data between them. Since Docker 1.10 you can create named volumes with Docker Compose by descriptions in docker-compose.yml files for use by container groups on a single host. As of Docker 1.12 volumes are supported by Docker Swarm included with Docker Engine and created from descriptions in swarm compose v3 files for use with swarm stacks across multiple cluster nodes.
@@ -12909,7 +13234,7 @@ docker volume inspect vol1
is equivalent to the combined syntax
-o remote=:backend:dir/subdir
but is arguably easier to parameterize in scripts. The path
part is optional.
-Mount and VFS options as well as backend parameters are named like their twin command-line flags without the --
CLI prefix. Optionally you can use underscores instead of dashes in option names. For example, --vfs-cache-mode full
becomes -o vfs-cache-mode=full
or -o vfs_cache_mode=full
. Boolean CLI flags without value will gain the true
value, e.g. --allow-other
becomes -o allow-other=true
or -o allow_other=true
.
+Mount and VFS options as well as backend parameters are named like their twin command-line flags without the --
CLI prefix. Optionally you can use underscores instead of dashes in option names. For example, --vfs-cache-mode full
becomes -o vfs-cache-mode=full
or -o vfs_cache_mode=full
. Boolean CLI flags without value will gain the true
value, e.g. --allow-other
becomes -o allow-other=true
or -o allow_other=true
.
Please note that you can provide parameters only for the backend immediately referenced by the backend type of mounted remote
. If this is a wrapping backend like alias, chunker or crypt, you cannot provide options for the referred to remote or backend. This limitation is imposed by the rclone connection string parser. The only workaround is to feed the plugin with rclone.conf
or configure plugin arguments (see below).
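Putting this together, a volume might be created like this (the remote and option values are illustrative):
docker volume create my_vol -d rclone -o remote=mys3:bucket/subdir -o vfs-cache-mode=full -o allow_other=true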
Special Volume Options
mount-type
determines the mount method and in general can be one of: mount
, cmount
, or mount2
. This can be aliased as mount_type
. It should be noted that the managed rclone docker plugin currently does not support the cmount
method and mount2
is rarely needed. This option defaults to the first found method, which is usually mount
so you generally won't need it.
@@ -13010,6 +13335,10 @@ sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone versi
PLUGID=123abc...
sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate
though this is rarely needed.
+If the plugin fails to work properly, and only as a last resort after you have tried diagnosing it with the above methods, you can try clearing the plugin's state. Note that all existing rclone docker volumes will probably have to be recreated. This can be necessary because a reinstall doesn't clean up existing state files (they are kept to allow easy restoration, as stated above).
+docker plugin disable rclone # disable the plugin to ensure no interference
+sudo rm /var/lib/docker-plugins/rclone/cache/docker-plugin.state # removing the plugin state
+docker plugin enable rclone # re-enable the plugin afterward
Caveats
Finally I'd like to mention a caveat with updating volume settings. Docker CLI does not have a dedicated command like docker volume update
. It may be tempting to invoke docker volume create
with updated options on an existing volume, but there is a gotcha. The command will do nothing; it won't even return an error. I hope that the docker maintainers will fix this some day. In the meantime, be aware that you must remove your volume before recreating it with new settings:
docker volume remove my_vol
@@ -13420,8 +13749,9 @@ rclone copy Path1 Path2 [--create-empty-src-dirs]
Lock file
When bisync is running, a lock file is created in the bisync working directory, typically at ~/.cache/rclone/bisync/PATH1..PATH2.lck
on Linux. If bisync should crash or hang, the lock file will remain in place and block any further runs of bisync for the same paths. Delete the lock file as part of debugging the situation. The lock file effectively blocks follow-on (e.g., scheduled by cron) runs when the prior invocation is taking a long time. The lock file contains the PID of the blocking process, which may help in debugging. Lock files can be set to automatically expire after a certain amount of time, using the --max-lock
flag.
Note that while concurrent bisync runs are allowed, be very cautious that there is no overlap in the trees being synched between concurrent runs, lest there be replicated files, deleted files and general mayhem.
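For example, to let a stale lock expire automatically after 15 minutes (the paths here are placeholders):
rclone bisync /path/to/local remote:path --max-lock 15m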
-Return codes
-rclone bisync returns the following codes to calling program: 0 on a successful run, 1 for a non-critical failing run (a rerun may be successful), 2 for a critically aborted run (requires a --resync to recover).
+Exit codes
+rclone bisync returns the following codes to the calling program: 0 on a successful run, 1 for a non-critical failing run (a rerun may be successful), 2 on a syntax or usage error, 7 for a critically aborted run (requires a --resync to recover).
+See also the section about exit codes in the main docs.
Graceful Shutdown
Bisync has a "Graceful Shutdown" mode which is activated by sending SIGINT
or pressing Ctrl+C
during a run. Once triggered, bisync will use best efforts to exit cleanly before the timer runs out. If bisync is in the middle of transferring files, it will attempt to cleanly empty its queue by finishing what it has started but not taking more. If it cannot do so within 30 seconds, it will cancel the in-progress transfers at that point and then give itself a maximum of 60 seconds to wrap up, save its state for next time, and exit. With the -vP
flags you will see constant status updates and a final confirmation of whether or not the graceful shutdown was successful.
At any point during the "Graceful Shutdown" sequence, a second SIGINT
or Ctrl+C
will trigger an immediate, un-graceful exit, which will leave things in a messier state. Usually a robust recovery will still be possible if using --recover
mode, otherwise you will need to do a --resync
.
@@ -14249,6 +14579,7 @@ e/n/d/r/c/s/q> q
Linode Object Storage
Magalu Object Storage
Minio
+Outscale
Petabox
Qiniu Cloud Object Storage (Kodo)
RackCorp Object Storage
@@ -14256,6 +14587,7 @@ e/n/d/r/c/s/q> q
Scaleway
Seagate Lyve Cloud
SeaweedFS
+Selectel
StackPath
Storj
Synology C2 Object Storage
@@ -14443,7 +14775,7 @@ Choose a number from below, or type in your own value
\ "STANDARD_IA"
5 / One Zone Infrequent Access storage class
\ "ONEZONE_IA"
- 6 / Glacier storage class
+ 6 / Glacier Flexible Retrieval storage class
\ "GLACIER"
7 / Glacier Deep Archive storage class
\ "DEEP_ARCHIVE"
@@ -14529,6 +14861,48 @@ y/e/d>
By default, rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly.
You can disable this with the --s3-no-head option - see there for more details.
Setting this flag increases the chance of undetected upload failures.
+
+Using server-side copy
+If you are copying objects between S3 buckets in the same region, you should use server-side copy. This is much faster than downloading and re-uploading the objects, as no data is transferred.
+For rclone to use server-side copy, you must use the same remote for the source and destination.
+rclone copy s3:source-bucket s3:destination-bucket
+When using server-side copy, the performance is limited by the rate at which rclone issues API requests to S3. See below for how to increase the number of API requests rclone makes.
+Increasing the rate of API requests
+You can increase the rate of API requests to S3 by increasing the parallelism using --transfers
and --checkers
options.
+Rclone uses very conservative defaults for these settings, as not all providers support high rates of requests. Depending on your provider, you can significantly increase the number of transfers and checkers.
+For example, with AWS S3, you can increase the number of checkers to values like 200. If you are doing a server-side copy, you can also increase the number of transfers to 200.
+rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
+You will need to experiment with these values to find the optimal settings for your setup.
+Data integrity
+Rclone does its best to verify every part of an upload or download to the S3 provider using various hashes.
+Every HTTP transaction to/from the provider has a X-Amz-Content-Sha256
or a Content-Md5
header to guard against corruption of the HTTP body. The HTTP Header is protected by the signature passed in the Authorization
header.
+All communication with the provider is done over HTTPS for encryption and additional error protection.
+Single part uploads
+
+Rclone uploads single part uploads with a Content-Md5
using the MD5 hash read from the source. The provider checks this is correct on receipt of the data.
+Rclone then does a HEAD request (disable with --s3-no-head
) to read the ETag
back which is the MD5 of the file and checks that with what it sent.
+
+Note that if the source does not have an MD5 then the single part uploads will not have hash protection. In this case it is recommended to use --s3-upload-cutoff 0
so all files are uploaded as multipart uploads.
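+For example, to force every file to be uploaded as multipart (the bucket name is a placeholder):
+rclone copy --s3-upload-cutoff 0 /path/to/files s3:my-bucket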
+Multipart uploads
+For files above --s3-upload-cutoff
rclone splits the file into multiple parts for upload.
+
+- Each part is protected with both an X-Amz-Content-Sha256 and a Content-Md5
+
+When rclone has finished the upload of all the parts it then completes the upload by sending:
+
+- The MD5 hash of each part
+- The number of parts
+- This info is all protected with an X-Amz-Content-Sha256
+
+The provider checks the MD5 for all the parts it has received against what rclone sends and if it is good it returns OK.
+Rclone then does a HEAD request (disable with --s3-no-head
) and checks the ETag is what it expects (in this case it should be the MD5 sum of all the MD5 sums of all the parts with the number of parts on the end).
+If the source has an MD5 sum then rclone will attach the X-Amz-Meta-Md5chksum with it, as the ETag for a multipart upload can't easily be checked against the file (the chunk size must be known in order to calculate it).
+Downloads
+Rclone checks the MD5 hash of the data downloaded against either the ETag or the X-Amz-Meta-Md5chksum
metadata (if present) which rclone uploads with multipart uploads.
+Further checking
+At each stage rclone and the provider are sending and checking hashes of everything. Rclone deliberately HEADs each object after upload to check it arrived safely for extra security. (You can disable this with --s3-no-head
).
+If you require further assurance that your data is intact you can use rclone check
to check the hashes locally vs the remote.
+And if you are feeling ultimately paranoid use rclone check --download
which will download the files and check them against the local copies. (Note that this doesn't use disk to do this - it streams them in memory).
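+For example (paths and bucket name are placeholders):
+rclone check /path/to/local s3:my-bucket
+rclone check --download /path/to/local s3:my-bucket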
Versions
When bucket versioning is enabled (this can be done with rclone with the rclone backend versioning
command) when rclone uploads a new version of a file it creates a new version of it Likewise when you delete a file, the old version will be marked hidden and still be available.
Old versions of files, where available, are visible using the --s3-versions
flag.
@@ -14611,7 +14985,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
-Multipart uploads
+Multipart uploads
rclone supports multipart uploads with S3 which means that it can upload files bigger than 5 GiB.
Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.
rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff
. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).
@@ -14715,7 +15089,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
As mentioned in the Modification times and hashes section, small files that are not uploaded as multipart, use a different tag, causing the upload to fail. A simple solution is to set the --s3-upload-cutoff 0
and force all the files to be uploaded as multipart.
Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
--s3-provider
Choose your S3 provider.
Properties:
@@ -14806,6 +15180,10 @@ $ rclone -q --s3-versions ls s3:cleanup-test
- Netease Object Storage (NOS)
+"Outscale"
+
+- OUTSCALE Object Storage (OOS)
+
"Petabox"
- Petabox Object Storage
@@ -14826,6 +15204,10 @@ $ rclone -q --s3-versions ls s3:cleanup-test
+"Selectel"
+
+- Selectel Object Storage
+
"StackPath"
- StackPath Object Storage
@@ -15180,7 +15562,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
- Config: acl
- Env Var: RCLONE_S3_ACL
-- Provider: !Storj,Synology,Cloudflare
+- Provider: !Storj,Selectel,Synology,Cloudflare
- Type: string
- Required: false
- Examples:
@@ -15328,7 +15710,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
"GLACIER"
-- Glacier storage class
+- Glacier Flexible Retrieval storage class
"DEEP_ARCHIVE"
@@ -15345,7 +15727,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
--s3-bucket-acl
Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
@@ -15909,6 +16291,32 @@ Windows: "%USERPROFILE%\.aws\credentials"
Type: Tristate
Default: unset
+--s3-directory-bucket
+Set to use AWS Directory Buckets
+If you are using an AWS Directory Bucket then set this flag.
+This will ensure no Content-Md5
headers are sent and ensure ETag
headers are not interpreted as MD5 sums. X-Amz-Meta-Md5chksum
will be set on all objects whether single or multipart uploaded.
+This also sets no_check_bucket = true
.
+Note that Directory Buckets do not support:
+
+- Versioning
+- Content-Encoding: gzip
+
+Rclone limitations with Directory Buckets:
+
+- rclone does not support creating Directory Buckets with
rclone mkdir
+- ... or removing them with
rclone rmdir
yet
+- Directory Buckets do not appear when doing
rclone lsf
at the top level.
+- Rclone can't remove auto created directories yet. In theory this should work with
directory_markers = true
but it doesn't.
+- Directories don't seem to appear in recursive (ListR) listings.
+
+Properties:
+
+- Config: directory_bucket
+- Env Var: RCLONE_S3_DIRECTORY_BUCKET
+- Provider: AWS
+- Type: bool
+- Default: false
+
--s3-sdk-log-mode
Set to debug the SDK
This can be set to a comma separated list of the following functions:
@@ -16177,6 +16585,15 @@ provider = AWS
Providers
AWS S3
This is the provider used as main example and described in the configuration section above.
+AWS Directory Buckets
+From rclone v1.69 Directory Buckets are supported.
+You will need to set the directory_bucket = true config parameter or use --s3-directory-bucket.
+Note that rclone cannot yet:
+
+- Create directory buckets
+- List directory buckets
+
+See the --s3-directory-bucket flag for more info.
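+A minimal config sketch (the remote name and region are placeholders):
+[s3express]
+type = s3
+provider = AWS
+region = us-east-1
+directory_bucket = true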
AWS Snowball Edge
AWS Snowball is a hardware appliance used for transferring bulk data back to AWS. Its main software interface is S3 object storage.
To use rclone with AWS Snowball Edge devices, configure as standard for an 'S3 Compatible Service'.
@@ -16302,6 +16719,7 @@ acl = private
Now run rclone lsf r2:
to see your buckets and rclone lsf r2:bucket
to look within a bucket.
For R2 tokens with the "Object Read & Write" permission, you may also need to add no_check_bucket = true
for object uploads to work correctly.
Note that Cloudflare decompresses files uploaded with Content-Encoding: gzip
by default which is a deviation from what AWS does. If this is causing a problem then upload the files with --header-upload "Cache-Control: no-transform"
+A consequence of this is that Content-Encoding: gzip
will never appear in the metadata on Cloudflare.
Dreamhost
Dreamhost DreamObjects is an object storage system based on CEPH.
To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:
@@ -16903,6 +17321,125 @@ location_constraint =
server_side_encryption =
So once set up, for example, to copy files into a bucket
rclone copy /path/to/files minio:bucket
+Outscale
+OUTSCALE Object Storage (OOS) is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, a brand of Dassault Systèmes. For more information about OOS, see the official documentation.
+Here is an example of an OOS configuration that you can paste into your rclone configuration file:
+[outscale]
+type = s3
+provider = Outscale
+env_auth = false
+access_key_id = ABCDEFGHIJ0123456789
+secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+region = eu-west-2
+endpoint = oos.eu-west-2.outscale.com
+acl = private
+You can also run rclone config
to go through the interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+Enter name for new remote.
+name> outscale
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+ X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others
+ \ (s3)
+[snip]
+Storage> outscale
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / OUTSCALE Object Storage (OOS)
+ \ (Outscale)
+[snip]
+provider> Outscale
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth>
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ABCDEFGHIJ0123456789
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+Option region.
+Region where your bucket will be created and your data stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Paris, France
+ \ (eu-west-2)
+ 2 / New Jersey, USA
+ \ (us-east-2)
+ 3 / California, USA
+ \ (us-west-1)
+ 4 / SecNumCloud, Paris, France
+ \ (cloudgouv-eu-west-1)
+ 5 / Tokyo, Japan
+ \ (ap-northeast-1)
+region> 1
+Option endpoint.
+Endpoint for S3 API.
+Required when using an S3 clone.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Outscale EU West 2 (Paris)
+ \ (oos.eu-west-2.outscale.com)
+ 2 / Outscale US East 2 (New Jersey)
+ \ (oos.us-east-2.outscale.com)
+ 3 / Outscale US West 1 (California)
+ \ (oos.us-west-1.outscale.com)
+ 4 / Outscale SecNumCloud (Paris)
+ \ (oos.cloudgouv-eu-west-1.outscale.com)
+ 5 / Outscale AP Northeast 1 (Japan)
+ \ (oos.ap-northeast-1.outscale.com)
+endpoint> 1
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn't copy the ACL from the source but rather writes a fresh one.
+If the acl is an empty string then no X-Amz-Acl: header is added and
+the default (private) will be used.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+[snip]
+acl> 1
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Configuration complete.
+Options:
+- type: s3
+- provider: Outscale
+- access_key_id: ABCDEFGHIJ0123456789
+- secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+- endpoint: oos.eu-west-2.outscale.com
+Keep this "outscale" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
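+Once configured, the remote can be used like any other, for example (the bucket name is a placeholder):
+rclone copy /path/to/files outscale:bucket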
Qiniu Cloud Object Storage (Kodo)
Qiniu Cloud Object Storage (Kodo) is built on Qiniu's independently developed core technology and, proven by extensive customer experience, holds a leading position in the market. Kodo can be widely applied to mass data management.
To configure access to Qiniu Kodo, follow the steps below:
@@ -17123,7 +17660,7 @@ acl = private
upload_cutoff = 5M
chunk_size = 5M
copy_cutoff = 5M
-C14 Cold Storage is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" storage_class
. So you can configure your remote with the storage_class = GLACIER
option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
+Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" storage_class. So you can configure your remote with the storage_class = GLACIER option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back; you will need to restore them to the "STANDARD" storage_class first before being able to read them (see the "restore" section above).
Seagate Lyve Cloud
Seagate Lyve Cloud is an S3 compatible object storage platform from Seagate intended for enterprise use.
Here is a config run through for a remote called remote
- you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.
@@ -17252,6 +17789,105 @@ secret_access_key = any
endpoint = localhost:8333
So once set up, for example to copy files into a bucket
rclone copy /path/to/files seaweedfs_s3:foo
+Selectel
+Selectel Cloud Storage is an S3 compatible storage system which features triple redundancy storage, automatic scaling, high availability and a comprehensive IAM system.
+Selectel have a section on their website for configuring rclone which shows how to make the right API keys.
+From rclone v1.69 Selectel is a supported operator - please choose the Selectel
provider type.
+Note that you should use "vHosted" access for the buckets (which is the recommended default), not "path style".
+You can use rclone config
to make a new provider like this
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> selectel
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ...
+ \ (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / Selectel Object Storage
+ \ (Selectel)
+[snip]
+provider> Selectel
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth> 1
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option region.
+Region where your data is stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / St. Petersburg
+ \ (ru-1)
+region> 1
+
+Option endpoint.
+Endpoint for Selectel Object Storage.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Saint Petersburg
+ \ (s3.ru-1.storage.selcloud.ru)
+endpoint> 1
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: Selectel
+- access_key_id: ACCESS_KEY
+- secret_access_key: SECRET_ACCESS_KEY
+- region: ru-1
+- endpoint: s3.ru-1.storage.selcloud.ru
+Keep this "selectel" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+And your config should end up looking like this:
+[selectel]
+type = s3
+provider = Selectel
+access_key_id = ACCESS_KEY
+secret_access_key = SECRET_ACCESS_KEY
+region = ru-1
+endpoint = s3.ru-1.storage.selcloud.ru
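+Once configured, the remote can be used like any other, for example (the bucket name is a placeholder):
+rclone lsd selectel:
+rclone copy /path/to/files selectel:bucket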
Wasabi
Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost.
Wasabi provides an S3 interface which can be configured for use with rclone like this.
@@ -18322,7 +18958,7 @@ cos s3
For Netease NOS configure as per the configurator rclone config
setting the provider Netease
. This will automatically set force_path_style = false
which is necessary for it to run properly.
Petabox
Here is an example of making a Petabox configuration. First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
@@ -19054,6 +19690,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
{
"daysFromHidingToDeleting": 1,
"daysFromUploadingToHiding": null,
+ "daysFromStartingToCancelingUnfinishedLargeFiles": null,
"fileNamePrefix": ""
}
]
@@ -19069,6 +19706,7 @@ rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHid
Options:
- "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off.
+- "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any unfinished large file versions after this many days
- "daysFromUploadingToHiding": This many days after uploading a file is hidden
cleanup
@@ -19141,7 +19779,7 @@ If not sure try Y. If Y failed, try N.
y) Yes
n) No
y/n> y
-If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXXXXXXXXXXXXXXXXXXXXX
Log in and authorize rclone for access
Waiting for code...
Got code
@@ -19384,6 +20022,16 @@ y/e/d> y
Type: string
Required: false
+--box-client-credentials
+Use client credentials OAuth flow.
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_BOX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--box-root-folder-id
Fill in for rclone to use a non root folder as its starting point.
Properties:
@@ -20205,9 +20853,173 @@ y/e/d> y
Type: string
Required: false
+Cloudinary
+This is a backend for the Cloudinary platform
+About Cloudinary
+Cloudinary is an image and video API platform. It is trusted by 1.5 million developers and 10,000 enterprise and hyper-growth companies as a critical part of their tech stack for delivering visually engaging experiences.
+Accounts & Pricing
+To use this backend, you need to create a free account on Cloudinary. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See the pricing details.
+Securing Your Credentials
+Please refer to the docs
+Configuration
+Here is an example of making a Cloudinary configuration.
+First, create a cloudinary.com account and choose a plan.
+You will need to log in and get the API Key
and API Secret
for your account from the developer section.
+Now run
+rclone config
+Follow the interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter the name for the new remote.
+name> cloudinary-media-library
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / cloudinary.com
+\ (cloudinary)
+[snip]
+Storage> cloudinary
+
+Option cloud_name.
+You can find your cloudinary.com cloud_name in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+Enter a value.
+cloud_name> ****************************
+
+Option api_key.
+You can find your cloudinary.com api key in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+Enter a value.
+api_key> ****************************
+
+Option api_secret.
+You can find your cloudinary.com api secret in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+This value must be a single character, one of the following: y, g.
+y/g> y
+Enter a value.
+api_secret> ****************************
+
+Option upload_prefix.
+[Upload prefix](https://cloudinary.com/documentation/cloudinary_sdks#configuration_parameters) to specify alternative data center
+Enter a value.
+upload_prefix>
+
+Option upload_preset.
+[Upload presets](https://cloudinary.com/documentation/upload_presets) can be defined for different upload profiles
+Enter a value.
+upload_preset>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: cloudinary
+- api_key: ****************************
+- api_secret: ****************************
+- cloud_name: ****************************
+- upload_prefix:
+- upload_preset:
+
+Keep this "cloudinary-media-library" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+List directories in the top level of your Media Library
+rclone lsd cloudinary-media-library:
+Make a new directory.
+rclone mkdir cloudinary-media-library:directory
+List the contents of a directory.
+rclone ls cloudinary-media-library:directory
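+Sync /home/local/images to the remote directory, deleting any excess files (the paths are placeholders):
+rclone sync --interactive /home/local/images cloudinary-media-library:directory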
+Modified time and hashes
+Cloudinary automatically stores MD5 hashes and timestamps for every successful Put; these are read-only.
+Standard options
+Here are the Standard options specific to cloudinary (Cloudinary).
+--cloudinary-cloud-name
+Cloudinary Environment Name
+Properties:
+
+- Config: cloud_name
+- Env Var: RCLONE_CLOUDINARY_CLOUD_NAME
+- Type: string
+- Required: true
+
+--cloudinary-api-key
+Cloudinary API Key
+Properties:
+
+- Config: api_key
+- Env Var: RCLONE_CLOUDINARY_API_KEY
+- Type: string
+- Required: true
+
+--cloudinary-api-secret
+Cloudinary API Secret
+Properties:
+
+- Config: api_secret
+- Env Var: RCLONE_CLOUDINARY_API_SECRET
+- Type: string
+- Required: true
+
+--cloudinary-upload-prefix
+Specify the API endpoint for environments outside the US
+Properties:
+
+- Config: upload_prefix
+- Env Var: RCLONE_CLOUDINARY_UPLOAD_PREFIX
+- Type: string
+- Required: false
+
+--cloudinary-upload-preset
+Upload Preset to select asset manipulation on upload
+Properties:
+
+- Config: upload_preset
+- Env Var: RCLONE_CLOUDINARY_UPLOAD_PRESET
+- Type: string
+- Required: false
+
+Advanced options
+Here are the Advanced options specific to cloudinary (Cloudinary).
+--cloudinary-encoding
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_CLOUDINARY_ENCODING
+- Type: Encoding
+- Default: Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
+
+--cloudinary-eventually-consistent-delay
+Wait N seconds for eventual consistency of the databases that support the backend operation
+Properties:
+
+- Config: eventually_consistent_delay
+- Env Var: RCLONE_CLOUDINARY_EVENTUALLY_CONSISTENT_DELAY
+- Type: Duration
+- Default: 0s
+
+--cloudinary-description
+Description of the remote.
+Properties:
+
+- Config: description
+- Env Var: RCLONE_CLOUDINARY_DESCRIPTION
+- Type: string
+- Required: false
+
Citrix ShareFile
Citrix ShareFile is a secure file sharing and transfer service aimed at business.
-Configuration
+Configuration
The initial setup for Citrix ShareFile involves getting a token from Citrix ShareFile which you can do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -20360,7 +21172,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to sharefile (Citrix Sharefile).
--sharefile-client-id
OAuth Client Id.
@@ -20415,7 +21227,7 @@ y/e/d> y
-Advanced options
+Advanced options
Here are the Advanced options specific to sharefile (Citrix Sharefile).
--sharefile-token
OAuth Access Token as a JSON blob.
@@ -20446,6 +21258,16 @@ y/e/d> y
Type: string
Required: false
+--sharefile-client-credentials
+Use client credentials OAuth flow.
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_SHAREFILE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--sharefile-upload-cutoff
Cutoff for switching to multipart upload.
Properties:
@@ -20508,7 +21330,7 @@ y/e/d> y
The encryption is a secret-key encryption (also called symmetric key encryption) algorithm, where a password (or pass phrase) is used to generate the real encryption key. The password can be supplied by the user, or you may choose to let rclone generate one. It will be stored in the configuration file, in a lightly obscured form. If you are in an environment where you are not able to keep your configuration secured, you should add configuration encryption as protection. As long as you have this configuration file, you will be able to decrypt your data. Without the configuration file, as long as you remember the password (or keep it in a safe place), you can re-create the configuration and gain access to the existing data. You may also configure a corresponding remote in a different installation to access the same data. See below for guidance on changing the password.
Encryption uses a cryptographic salt to permute the encryption key, so that the same string may be encrypted in different ways. When configuring the crypt remote it is optional to enter a salt, or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique string. Normally in cryptography the salt is stored together with the encrypted content, and does not have to be memorized by the user. This is not the case in rclone, because rclone does not store any additional information on the remotes. Use of a custom salt is effectively a second password that must be memorized.
File content encryption is performed using NaCl SecretBox, based on the XSalsa20 cipher and Poly1305 for integrity. Names (file and directory names) are also encrypted by default, but this has some implications and can therefore be turned off.
-Configuration
+Configuration
Here is an example of how to make a remote called secret
.
To use crypt
, first set up the underlying remote. Follow the rclone config
instructions for the specific backend.
Before configuring the crypt remote, check the underlying remote is working. In this example the underlying remote is called remote
. We will configure a path path
within this remote to contain the encrypted content. Anything inside remote:path
will be encrypted and anything outside will not.
@@ -20698,7 +21520,7 @@ $ rclone -q ls secret:
Crypt stores modification times using the underlying remote so support depends on that.
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
Use the rclone cryptcheck
command to check the integrity of an encrypted remote instead of rclone check
which can't check the checksums properly.
-Standard options
+Standard options
Here are the Standard options specific to crypt (Encrypt/Decrypt a remote).
--crypt-remote
Remote to encrypt/decrypt.
@@ -20778,7 +21600,7 @@ $ rclone -q ls secret:
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote).
--crypt-server-side-across-configs
Deprecated: use --server-side-across-configs instead.
@@ -20984,7 +21806,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
Warning
This remote is currently experimental. Things may break and data may be lost. Anything you do with this remote is at your own risk. Please understand the risks associated with using experimental code and don't use this remote in critical applications.
The Compress
remote adds compression to another remote. It is best used with remotes containing many large compressible files.
-Configuration
+Configuration
To use this remote, all you need to do is specify another remote and a compression mode to use:
Current remotes:
@@ -21039,7 +21861,7 @@ y/e/d> y
If you open a remote wrapped by compress, you will see that there are many files with an extension corresponding to the compression algorithm you chose. These files are standard files that can be opened by various archive programs, but they have some hidden metadata that allows them to be used by rclone. While you may download and decompress these files at will, do not manually delete or rename them. Files without correct metadata will not be recognized by rclone.
File names
The compressed files will be named *.###########.gz
where *
is the base file and the #
part is base64 encoded size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend.
-Standard options
+Standard options
Here are the Standard options specific to compress (Compress a remote).
--compress-remote
Remote to compress.
@@ -21066,7 +21888,7 @@ y/e/d> y
-Advanced options
+Advanced options
Here are the Advanced options specific to compress (Compress a remote).
--compress-level
GZIP compression level (-2 to 9).
@@ -21125,7 +21947,7 @@ y/e/d> y
You'd do this by specifying an upstreams
parameter in the config like this
upstreams = images=s3:imagesbucket files=drive:important/files
During the initial setup with rclone config
you will specify the upstream remotes as a space separated list. The upstream remotes can be either local paths or other remotes.
-Configuration
+Configuration
Here is an example of how to make a combine called remote
for the example above. First run:
rclone config
This will guide you through an interactive setup process:
@@ -21180,7 +22002,7 @@ type = combine
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
If you then add that config to your config file (find it with rclone config file
) then you can access all the shared drives in one place with the AllDrives:
remote.
See the Google Drive docs for full info.
-Standard options
+Standard options
Here are the Standard options specific to combine (Combine several remotes into one).
--combine-upstreams
Upstreams for combining
@@ -21196,7 +22018,7 @@ upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"Type: SpaceSepList
Default:
-Advanced options
+Advanced options
Here are the Advanced options specific to combine (Combine several remotes into one).
--combine-description
Description of the remote.
@@ -21213,7 +22035,7 @@ upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"Dropbox
Paths are specified as remote:path
Dropbox paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -21337,7 +22159,7 @@ y/e/d> y
This provides the maximum possible upload speed especially with lots of small files, however rclone can't check the file got uploaded properly using this mode.
If you are using this mode then using "rclone check" after the transfer completes is recommended. Or you could do an initial transfer with --dropbox-batch-mode async
then do a final transfer with --dropbox-batch-mode sync
(the default).
Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode.
-Standard options
+Standard options
Here are the Standard options specific to dropbox (Dropbox).
--dropbox-client-id
OAuth Client Id.
@@ -21359,7 +22181,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to dropbox (Dropbox).
--dropbox-token
OAuth Access Token as a JSON blob.
@@ -21390,6 +22212,16 @@ y/e/d> y
Type: string
Required: false
+--dropbox-client-credentials
+Use client credentials OAuth flow.
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_DROPBOX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--dropbox-chunk-size
Upload chunk size (< 150Mi).
Any files larger than this will be uploaded in chunks of this size.
@@ -21554,7 +22386,7 @@ y/e/d> y
Enterprise File Fabric
This backend supports Storage Made Easy's Enterprise File Fabric™ which provides a software solution to integrate and unify File and Object Storage accessible through a global file system.
-Configuration
+Configuration
The initial setup for the Enterprise File Fabric backend involves getting a token from the Enterprise File Fabric which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -21648,7 +22480,7 @@ y/e/d> y
120673757,My contacts/
120673761,S3 Storage/
The ID for "S3 Storage" would be 120673761
.
-Standard options
+Standard options
Here are the Standard options specific to filefabric (Enterprise File Fabric).
--filefabric-url
URL of the Enterprise File Fabric to connect to.
@@ -21697,7 +22529,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to filefabric (Enterprise File Fabric).
--filefabric-token
Session Token.
@@ -21752,7 +22584,7 @@ y/e/d> y
Files.com
Files.com is a cloud storage service that provides a secure and easy way to store and share files.
The initial setup for filescom involves authenticating with your Files.com account. You can do this by providing your site subdomain, username, and password. Alternatively, you can authenticate using an API Key from Files.com. rclone config
walks you through it.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -21821,7 +22653,7 @@ y/e/d> y
rclone ls remote:
Sync /home/local/directory
to the remote directory, deleting any excess files in the directory.
rclone sync --interactive /home/local/directory remote:dir
-Standard options
+Standard options
Here are the Standard options specific to filescom (Files.com).
--filescom-site
Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com).
@@ -21851,7 +22683,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to filescom (Files.com).
--filescom-api-key
The API key used to authenticate with Files.com.
@@ -21885,7 +22717,7 @@ y/e/d> y
FTP is the File Transfer Protocol. Rclone FTP support is provided using the github.com/jlaffaye/ftp package.
Limitations of Rclone's FTP backend
Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory.
-Configuration
+Configuration
To create an FTP configuration named remote
, run
rclone config
Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, see below.
@@ -22001,7 +22833,7 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
This backend's interactive configuration wizard provides a selection of sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd. Just hit a selection number when prompted.
-Standard options
+Standard options
Here are the Standard options specific to ftp (FTP).
--ftp-host
FTP host to connect to.
@@ -22061,7 +22893,7 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
Type: bool
Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to ftp (FTP).
--ftp-concurrency
Maximum number of FTP simultaneous connections, 0 for unlimited.
@@ -22190,12 +23022,9 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
--ftp-socks-proxy
Socks 5 proxy host.
- Supports the format user:pass@host:port, user@host:port, host:port.
-
- Example:
-
- myUser:myPass@localhost:9005
-
+Supports the format user:pass@host:port, user@host:port, host:port.
+Example:
+myUser:myPass@localhost:9005
Properties:
- Config: socks_proxy
@@ -22203,6 +23032,18 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
- Type: string
- Required: false
+--ftp-no-check-upload
+Don't check the upload is OK
+Normally rclone will try to check the upload exists after it has uploaded a file to make sure the size and modification time are as expected.
+This flag stops rclone performing these checks. This enables uploading to folders which are write-only.
+You will likely also need to use the --inplace flag if uploading to a write-only folder.
+Properties:
+
+- Config: no_check_upload
+- Env Var: RCLONE_FTP_NO_CHECK_UPLOAD
+- Type: bool
+- Default: false
+
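+For example, a sketch of uploading into a write-only drop folder (the remote and folder names are placeholders):
+rclone copy --ftp-no-check-upload --inplace /path/to/file ftp-remote:incoming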
--ftp-encoding
The encoding for the backend.
See the encoding section in the overview for more info.
@@ -22255,7 +23096,7 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
Gofile is a content storage and distribution platform. Its aim is to provide as much service as possible for free or at a very low price.
The initial setup for Gofile involves logging in to the web interface and going to the "My Profile" section. Copy the "Account API token" for use in the config file.
Note that if you wish to connect rclone to Gofile you will need a premium account.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -22394,7 +23235,7 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
The ID to use is the part before the ;
so you could set
root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0
To restrict rclone to the Files
directory.
-Standard options
+Standard options
Here are the Standard options specific to gofile (Gofile).
--gofile-access-token
API Access token
@@ -22406,7 +23247,7 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to gofile (Gofile).
--gofile-root-folder-id
ID of the root folder
@@ -22468,7 +23309,7 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
Use rclone dedupe
to fix duplicated files.
Google Cloud Storage
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
-Configuration
+Configuration
The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -22608,6 +23449,40 @@ y/e/d> y
You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.
To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User
permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.
To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials
with the actual contents of the file instead, or set the equivalent environment variable.
+Service Account Authentication with Access Tokens
+Another option for service account authentication is to use access tokens via gcloud impersonate-service-account. Access tokens improve security by avoiding the use of the JSON key file, which can be breached. They also bypass the OAuth login flow, which is simpler on remote VMs that lack a web browser.
+If you already have a working service account, skip to step 3.
+1. Create a service account using
+gcloud iam service-accounts create gcs-read-only
+You can re-use an existing service account as well (like the one created above)
+2. Attach a Viewer (read-only) or User (read-write) role to the service account
+ $ PROJECT_ID=my-project
+ $ gcloud --verbose iam service-accounts add-iam-policy-binding \
+ gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
+ --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
+ --role=roles/storage.objectViewer
+Use the Google Cloud console to identify a limited role. Some relevant pre-defined roles:
+
+- roles/storage.objectUser -- read-write access but no admin privileges
+- roles/storage.objectViewer -- read-only access to objects
+- roles/storage.admin -- create buckets & administrative roles
+
+3. Get a temporary access key for the service account
+$ gcloud auth application-default print-access-token \
+ --impersonate-service-account \
+ gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com
+
+ya29.c.c0ASRK0GbAFEewXD [truncated]
+4. Update the access_token setting
+Hit CTRL-C when you see "waiting for code". This will save the config without completing the OAuth flow.
+rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx
+5. Run rclone as usual
+rclone ls dev-gcs:${MY_BUCKET}/
+More Info on Service Accounts
+
Anonymous Access
For downloads of objects that permit public access you can configure rclone to use anonymous access by setting anonymous
to true
. With unauthorized access you can't write or create files but only read or list those buckets and objects that have public read access.
Application Default Credentials
@@ -22665,7 +23540,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
--gcs-client-id
OAuth Client Id.
@@ -23050,7 +23925,7 @@ y/e/d> y
-Advanced options
+Advanced options
Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
--gcs-token
OAuth Access Token as a JSON blob.
@@ -23081,6 +23956,26 @@ y/e/d> y
Type: string
Required: false
+--gcs-client-credentials
+Use client credentials OAuth flow.
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_GCS_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
+--gcs-access-token
+Short-lived access token.
+Leave blank normally. Needed only if you want to use a short-lived access token instead of interactive login.
+Properties:
+
+- Config: access_token
+- Env Var: RCLONE_GCS_ACCESS_TOKEN
+- Type: string
+- Required: false
+
--gcs-directory-markers
Upload an empty object with a trailing slash when a new directory is created
Empty folders are unsupported for bucket based remotes, this option creates an empty object ending with "/", to persist the folder.
@@ -23147,7 +24042,7 @@ y/e/d> y
Google Drive
Paths are specified as drive:path
Drive paths may be as deep as required, e.g. drive:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -23538,81 +24433,86 @@ trashed=false and 'c' in parents
JSON Text Format for Google Apps scripts |
+md | text/markdown | Markdown Text Format
odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation
ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet
ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet
odt | application/vnd.oasis.opendocument.text | Openoffice Document
pdf | application/pdf | Adobe PDF Format
pjpeg | image/pjpeg | Progressive JPEG Image
png | image/png | PNG Image Format
pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint
rtf | application/rtf | Rich Text Format
svg | image/svg+xml | Scalable Vector Graphics Format
tsv | text/tab-separated-values | Standard TSV format for spreadsheets
txt | text/plain | Plain Text
wmf | application/x-msmetafile | Windows Meta File
xls | application/vnd.ms-excel | Classic Excel file
xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet
zip | application/zip | A ZIP file of HTML, Images CSS
@@ -23651,7 +24551,7 @@ trashed=false and 'c' in parents
-Standard options
+Standard options
Here are the Standard options specific to drive (Google Drive).
--drive-client-id
Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.
@@ -23728,7 +24628,7 @@ trashed=false and 'c' in parents
Type: bool
Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to drive (Google Drive).
--drive-token
OAuth Access Token as a JSON blob.
@@ -23759,6 +24659,16 @@ trashed=false and 'c' in parents
Type: string
Required: false
+--drive-client-credentials
+Use client credentials OAuth flow.
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_DRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--drive-root-folder-id
ID of the root folder. Leave blank normally.
Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.
@@ -24526,6 +25436,22 @@ rclone backend copyid drive: ID1 path1 ID2 path2
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]
+rescue
+Rescue or delete any orphaned files
+rclone backend rescue remote: [options] [<arguments>+]
+This command rescues or deletes any orphaned files or directories.
+Sometimes files can get orphaned in Google Drive. This means that they are no longer in any folder in Google Drive.
+This command finds those files and either rescues them to a directory you specify or deletes them.
+Usage:
+This command can be used in 3 ways.
+First, list all orphaned files:
+rclone backend rescue drive:
+Second, rescue all orphaned files to the directory indicated:
+rclone backend rescue drive: "relative/path/to/rescue/directory"
+e.g. to rescue all orphans to a directory called "Orphans" in the top level:
+rclone backend rescue drive: Orphans
+Third, delete all orphaned files to the trash:
+rclone backend rescue drive: -o delete
Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiB/s but lots of small files can take a long time.
Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server-side copies with --disable copy
to download and upload the files if you prefer.
@@ -24568,7 +25494,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2
Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".
Choose an application type of "Desktop app" and click "Create". (the default name is fine)
It will show you a client ID and client secret. Make a note of these.
-(If you selected "External" at Step 5 continue to Step 9. If you chose "Internal" you don't need to publish and can skip straight to Step 10 but your destination drive must be part of the same Google Workspace.)
+(If you selected "External" at Step 5 continue to Step 10. If you chose "Internal" you don't need to publish and can skip straight to Step 11 but your destination drive must be part of the same Google Workspace.)
Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm. You will also want to add yourself as a test user.
Provide the noted client ID and client secret to rclone.
@@ -24578,7 +25504,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2
Google Photos
The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.
NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.
-Configuration
+Configuration
The initial setup for google cloud storage involves getting a token from Google Photos which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -24733,7 +25659,7 @@ y/e/d> y
This means that you can use the album
path pretty much like a normal filesystem and it is a good target for repeated syncing.
The shared-album
directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.
-Standard options
+Standard options
Here are the Standard options specific to google photos (Google Photos).
--gphotos-client-id
OAuth Client Id.
@@ -24765,7 +25691,7 @@ y/e/d> y
Type: bool
Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to google photos (Google Photos).
--gphotos-token
OAuth Access Token as a JSON blob.
@@ -24796,6 +25722,16 @@ y/e/d> y
Type: string
Required: false
+--gphotos-client-credentials
+Use client credentials OAuth flow.
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_GPHOTOS_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--gphotos-read-size
Set to read the size of media items.
Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.
@@ -24828,6 +25764,24 @@ y/e/d> y
Type: bool
Default: false
+--gphotos-proxy
+Use the gphotosdl proxy for downloading the full resolution images
+The Google API will deliver images and video which aren't full resolution, and/or have EXIF data missing.
+However if you use the gphotosdl proxy then you can download original, unchanged images.
+This runs a headless browser in the background.
+Download the software from gphotosdl
+First run with
+gphotosdl -login
+Then once you have logged into google photos close the browser window and run
+gphotosdl
+Then supply the parameter --gphotos-proxy "http://localhost:8282"
to make rclone use the proxy.
+Properties:
+
+- Config: proxy
+- Env Var: RCLONE_GPHOTOS_PROXY
+- Type: string
+- Required: false
+
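+Putting it together, a hypothetical full-resolution download might look like this (remote name and paths illustrative):
+gphotosdl
+rclone copy --gphotos-proxy "http://localhost:8282" gphotos:album/MyAlbum /backup/MyAlbum
+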
--gphotos-encoding
The encoding for the backend.
See the encoding section in the overview for more info.
@@ -24915,8 +25869,10 @@ y/e/d> y
Downloading Images
When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
The current Google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort.
+NB you can use the --gphotos-proxy flag to use a headless browser to download images in full resolution.
Downloading Videos
When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.
+NB you can use the --gphotos-proxy flag to use a headless browser to download videos in their original quality.
Duplicates
If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg
would then appear as file {123456}.jpg
and file {ABCDEF}.jpg
(the actual IDs are a lot longer alas!).
If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload
then uploaded the same image to album/my_album
the filename of the image in album/my_album
will be what it was uploaded with initially, not what you uploaded it with to album
. In practice this shouldn't cause too many problems.
@@ -25016,7 +25972,7 @@ rclone backend drop Hasher:
rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1
stickyimport
is similar to import
but works much faster because it does not need to stat existing files and skips initial tree walk. Instead of binding cache entries to file fingerprints it creates sticky entries bound to the file name alone ignoring size, modification time etc. Such hash entries can be replaced only by purge
, delete
, backend drop
or by full re-read/re-write of the files.
Configuration reference
-Standard options
+Standard options
Here are the Standard options specific to hasher (Better checksums for other remotes).
--hasher-remote
Remote to cache checksums for (e.g. myRemote:path).
@@ -25045,7 +26001,7 @@ rclone backend drop Hasher:
Type: Duration
Default: off
-Advanced options
+Advanced options
Here are the Advanced options specific to hasher (Better checksums for other remotes).
--hasher-auto-size
Auto-update checksum for files smaller than this size (disabled by default).
@@ -25123,7 +26079,7 @@ rclone backend drop Hasher:
HDFS
HDFS is a distributed file-system, part of the Apache Hadoop framework.
Paths are specified as remote:
or remote:path/to/dir
.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -25231,7 +26187,7 @@ username = root
Invalid UTF-8 bytes will also be replaced.
-Standard options
+Standard options
Here are the Standard options specific to hdfs (Hadoop distributed file system).
--hdfs-namenode
Hadoop name nodes and ports.
@@ -25259,7 +26215,7 @@ username = root
-Advanced options
+Advanced options
Here are the Advanced options specific to hdfs (Hadoop distributed file system).
--hdfs-service-principal-name
Kerberos service principal name for the namenode.
@@ -25316,7 +26272,7 @@ username = root
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for hidrive involves getting a token from HiDrive which you need to do in your browser. rclone config
walks you through it.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -25414,7 +26370,7 @@ rclone lsd remote:/users/test/path
By default, rclone will know the number of directory members contained in a directory. For example, rclone lsd
uses this information.
The acquisition of this information will result in additional time costs for HiDrive's API. When dealing with large directory structures, it may be desirable to circumvent this time cost, especially when this information is not explicitly needed. For this, the disable_fetching_member_count
option can be used.
See the below section about configuration options for more details.
-Standard options
+Standard options
Here are the Standard options specific to hidrive (HiDrive).
--hidrive-client-id
OAuth Client Id.
@@ -25456,7 +26412,7 @@ rclone lsd remote:/users/test/path
-Advanced options
+Advanced options
Here are the Advanced options specific to hidrive (HiDrive).
--hidrive-token
OAuth Access Token as a JSON blob.
@@ -25487,6 +26443,16 @@ rclone lsd remote:/users/test/path
Type: string
Required: false
+--hidrive-client-credentials
+Use client credentials OAuth flow.
+This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_HIDRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--hidrive-scope-role
User-level that rclone should use when requesting access from HiDrive.
Properties:
@@ -25626,7 +26592,7 @@ rclone lsd remote:/users/test/path
The remote:
represents the configured url, and any path following it will be resolved relative to this url, according to the URL standard. This means with remote url https://beta.rclone.org/branch
and path fix
, the resolved URL will be https://beta.rclone.org/branch/fix
, while with path /fix
the resolved URL will be https://beta.rclone.org/fix
as the absolute path is resolved from the root of the domain.
If the path following the remote:
ends with /
it will be assumed to point to a directory. If the path does not end with /
, then a HEAD request is sent and the response is used to decide if it is treated as a file or a directory (run with -vv
to see details). When --http-no-head is specified, a path without ending /
is always assumed to be a file. If rclone incorrectly assumes the path is a file, the solution is to specify the path with ending /
. When you know the path is a directory, ending it with /
is always better as it avoids the initial HEAD request.
To just download a single file it is easier to use copyurl.
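For example, a sketch downloading a single file (URL and destination illustrative):
rclone copyurl https://beta.rclone.org/version.txt ./version.txt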
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -25690,7 +26656,7 @@ e/n/d/r/c/s/q> q
rclone lsd --http-url https://beta.rclone.org :http:
or:
rclone lsd :http,url='https://beta.rclone.org':
-Standard options
+Standard options
Here are the Standard options specific to http (HTTP).
--http-url
URL of HTTP host to connect to.
@@ -25711,7 +26677,7 @@ e/n/d/r/c/s/q> q
Type: bool
Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to http (HTTP).
Set HTTP headers for all transactions.
@@ -25788,9 +26754,9 @@ rclone rc backend/command command=set fs=remote: -o url=https://example.com
This is a backend for the ImageKit.io storage service.
About ImageKit
ImageKit.io provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web.
-Accounts & Pricing
+Accounts & Pricing
To use this backend, you need to create an account on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See the pricing details.
-Configuration
+Configuration
Here is an example of making an imagekit configuration.
First, create an ImageKit.io account and choose a plan.
You will need to log in and get the publicKey
and privateKey
for your account from the developer section.
@@ -25853,11 +26819,11 @@ y/e/d> y
rclone mkdir imagekit-media-library:directory
List the contents of a directory.
rclone ls imagekit-media-library:directory
-Modified time and hashes
+Modified time and hashes
ImageKit does not support modification times or hashes yet.
Checksums
No checksums are supported.
-Standard options
+Standard options
Here are the Standard options specific to imagekit (ImageKit.io).
--imagekit-endpoint
You can find your ImageKit.io URL endpoint in your dashboard
@@ -25886,7 +26852,7 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to imagekit (ImageKit.io).
--imagekit-only-signed
If you have configured Restrict unsigned image URLs
in your dashboard settings, set this to true.
@@ -26035,6 +27001,134 @@ y/e/d> y
See the metadata docs for more info.
+iCloud Drive
+Configuration
+The initial setup for an iCloud Drive backend involves getting a trust token/session. This can be done by simply using the regular iCloud password, and accepting the code prompt on another iCloud connected device.
+IMPORTANT: At the moment an app specific password won't be accepted. Only use your regular password and 2FA.
+rclone config
walks you through the token creation. The trust token is valid for 30 days, after which you will have to reauthenticate with rclone reconnect
or rclone config
.
+Here is an example of how to make a remote called iclouddrive
. First run:
+ rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> iclouddrive
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / iCloud Drive
+ \ (iclouddrive)
+[snip]
+Storage> iclouddrive
+Option apple_id.
+Apple ID.
+Enter a value.
+apple_id> APPLEID
+Option password.
+Password.
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Option config_2fa.
+Two-factor authentication: please enter your 2FA code
+Enter a value.
+config_2fa> 2FACODE
+Remote config
+--------------------
+[iclouddrive]
+- type: iclouddrive
+- apple_id: APPLEID
+- password: *** ENCRYPTED ***
+- cookies: ****************************
+- trust_token: ****************************
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Advanced Data Protection
+ADP is currently unsupported and needs to be disabled.
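+Once configured you can then use rclone like this (paths illustrative):
+List directories in the top level of your iCloud Drive
+rclone lsd iclouddrive:
+Copy a local directory to an iCloud Drive directory called backup
+rclone copy /home/source iclouddrive:backup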
+Standard options
+Here are the Standard options specific to iclouddrive (iCloud Drive).
+--iclouddrive-apple-id
+Apple ID.
+Properties:
+
+- Config: apple_id
+- Env Var: RCLONE_ICLOUDDRIVE_APPLE_ID
+- Type: string
+- Required: true
+
+--iclouddrive-password
+Password.
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+
+- Config: password
+- Env Var: RCLONE_ICLOUDDRIVE_PASSWORD
+- Type: string
+- Required: true
+
+--iclouddrive-trust-token
+Trust token (internal use)
+Properties:
+
+- Config: trust_token
+- Env Var: RCLONE_ICLOUDDRIVE_TRUST_TOKEN
+- Type: string
+- Required: false
+
+--iclouddrive-cookies
+cookies (internal use only)
+Properties:
+
+- Config: cookies
+- Env Var: RCLONE_ICLOUDDRIVE_COOKIES
+- Type: string
+- Required: false
+
+Advanced options
+Here are the Advanced options specific to iclouddrive (iCloud Drive).
+--iclouddrive-client-id
+Client id
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_ICLOUDDRIVE_CLIENT_ID
+- Type: string
+- Default: "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d"
+
+--iclouddrive-encoding
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_ICLOUDDRIVE_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+
+--iclouddrive-description
+Description of the remote.
+Properties:
+
+- Config: description
+- Env Var: RCLONE_ICLOUDDRIVE_DESCRIPTION
+- Type: string
+- Required: false
+
Internet Archive
The Internet Archive backend utilizes Items on archive.org
Refer to IAS3 API documentation for the API this backend uses.
@@ -26062,7 +27156,7 @@ y/e/d> y
These auto-created files can be excluded from the sync using metadata filtering.
rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata"
This excludes from the sync any files which have the source=metadata
or format=Metadata
flags which are added to Internet Archive auto-created files.
-Configuration
+Configuration
Here is an example of making an internetarchive configuration. Most of this applies to the other providers as well; any differences are described below.
First run
rclone config
@@ -26131,7 +27225,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-Standard options
+Standard options
Here are the Standard options specific to internetarchive (Internet Archive).
--internetarchive-access-key-id
IAS3 Access Key.
@@ -26153,7 +27247,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to internetarchive (Internet Archive).
--internetarchive-endpoint
IAS3 Endpoint.
@@ -26359,7 +27453,7 @@ Response: {"error":"invalid_grant","error_description&q
Onlime has sold access to Jottacloud proper, while providing localized support to Danish customers, but has recently set up its own hosting, transferring its customers from Jottacloud servers to its own.
This, of course, necessitates using its servers for authentication, but otherwise functionality and architecture seem equivalent to Jottacloud.
To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication in the setup. The rest of the setup is identical to the default setup.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
with the default setup. First run:
rclone config
This will guide you through an interactive setup process:
@@ -26523,7 +27617,7 @@ y/e/d> y
Versioning can be disabled by --jottacloud-no-versions
option. This is achieved by deleting the remote file prior to uploading a new version. If the upload fails, no version of the file will be available in the remote.
To view your current quota you can use the rclone about remote:
command which will display your usage limit (unless it is unlimited) and the current usage.
-Standard options
+Standard options
Here are the Standard options specific to jottacloud (Jottacloud).
--jottacloud-client-id
OAuth Client Id.
@@ -26545,7 +27639,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to jottacloud (Jottacloud).
--jottacloud-token
OAuth Access Token as a JSON blob.
@@ -26576,6 +27670,16 @@ y/e/d> y
Type: string
Required: false
+--jottacloud-client-credentials
+Use client credentials OAuth flow.
+This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_JOTTACLOUD_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--jottacloud-md5-memory-limit
Files bigger than this will be cached on disk to calculate the MD5 if required.
Properties:
@@ -26702,7 +27806,7 @@ y/e/d> y
Koofr
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for Koofr involves creating an application password for rclone. You can do that by opening the Koofr web application, giving the password a nice name like rclone
and clicking on generate.
Here is an example of how to make a remote called koofr
. First run:
rclone config
@@ -26789,7 +27893,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Standard options
+Standard options
Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
--koofr-provider
Choose your storage provider.
@@ -26845,7 +27949,7 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
--koofr-mountid
Mount ID of the mount to use.
@@ -27016,7 +28120,7 @@ d) Delete this remote
y/e/d> y
Linkbox
Linkbox is a private cloud drive.
-Configuration
+Configuration
Here is an example of making a remote for Linkbox.
First run:
rclone config
@@ -27052,7 +28156,7 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
-Standard options
+Standard options
Here are the Standard options specific to linkbox (Linkbox).
--linkbox-token
Token from https://www.linkbox.to/admin/account
@@ -27063,7 +28167,7 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to linkbox (Linkbox).
--linkbox-description
Description of the remote.
@@ -27089,7 +28193,7 @@ y/e/d> y
Storage keeps a hash for all files and performs transparent deduplication; the hash algorithm is a modified SHA1.
If a particular file is already present in storage, one can quickly submit the file hash instead of a long file upload (this optimization is supported by rclone).
-Configuration
+Configuration
Here is an example of making a mailru configuration.
First create a Mail.ru Cloud account and choose a tariff.
You will need to log in and create an app password for rclone. Rclone will not work with your normal username and password - it will give an error like oauth2: server response missing access_token
.
@@ -27230,7 +28334,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to mailru (Mail.ru Cloud).
--mailru-client-id
OAuth Client Id.
@@ -27293,7 +28397,7 @@ y/e/d> y
-Advanced options
+Advanced options
Here are the Advanced options specific to mailru (Mail.ru Cloud).
--mailru-token
OAuth Access Token as a JSON blob.
@@ -27324,6 +28428,16 @@ y/e/d> y
Type: string
Required: false
+--mailru-client-credentials
+Use client credentials OAuth flow.
+This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_MAILRU_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--mailru-speedup-file-patterns
Comma separated list of file name patterns eligible for speedup (put by hash).
Patterns are case insensitive and can contain '*' or '?' meta characters.
@@ -27469,7 +28583,7 @@ y/e/d> y
This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -27570,7 +28684,7 @@ me@example.com:/$
Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 min, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though.
Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum.
So, if rclone was working nicely and suddenly you are unable to log in even though you are sure the user and password are correct, it is likely the remote has been blocked for a while.
-Standard options
+Standard options
Here are the Standard options specific to mega (Mega).
--mega-user
User name.
@@ -27591,7 +28705,7 @@ me@example.com:/$
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to mega (Mega).
--mega-debug
Output more debug from Mega.
@@ -27650,7 +28764,7 @@ me@example.com:/$
Memory
The memory backend is an in-RAM backend. It does not persist its data - use the local backend for that.
The memory backend behaves like a bucket-based remote (e.g. like s3). Because it has no parameters you can just use it with the :memory:
remote name.
-Configuration
+Configuration
You can configure it as a remote like this with rclone config
too if you want to:
No remotes found, make a new one?
n) New remote
@@ -27686,7 +28800,7 @@ rclone serve sftp :memory:
The memory backend supports MD5 hashes and modification times accurate to 1 ns.
Restricted filename characters
The memory backend replaces the default restricted characters set.
-Advanced options
+Advanced options
Here are the Advanced options specific to memory (In memory object storage system.).
--memory-description
Description of the remote.
@@ -27701,7 +28815,7 @@ rclone serve sftp :memory:
Paths are specified as remote:
You may put subdirectories in too, e.g. remote:/path/to/dir
. If you have a CP code you can use that as the folder after the domain such as <domain>/<cpcode>/<internal directories within cpcode>.
For example, this is commonly configured with or without a CP code:
- With a CP code: [your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/
- Without a CP code: [your-domain-prefix]-nsu.akamaihd.net
See all buckets with rclone lsd remote:
The initial setup for Netstorage involves getting an account and secret. Use rclone config
to walk you through the setup process.
-Configuration
+Configuration
Here's an example of how to make a remote called ns1
.
- To begin the interactive configuration process, enter this command:
@@ -27809,7 +28923,7 @@ y/e/d> y
Purge
NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method.
Note: Read the NetStorage Usage API for considerations when using "quick-delete". In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible.
-Standard options
+Standard options
Here are the Standard options specific to netstorage (Akamai NetStorage).
--netstorage-host
Domain+path of NetStorage host to connect to.
@@ -27841,7 +28955,7 @@ y/e/d> y
- Type: string
- Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to netstorage (Akamai NetStorage).
--netstorage-protocol
Select between HTTP or HTTPS protocol.
@@ -27890,7 +29004,7 @@ y/e/d> y
The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable. rclone backend symlink <src> <path>
Microsoft Azure Blob Storage
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:container/path/to/dir
.
-Configuration
+Configuration
Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -28028,6 +29142,7 @@ y/e/d> y
Env Auth: 2. Managed Service Identity Credentials
When using Managed Service Identity if the VM(SS) on which this program is running has a system-assigned identity, it will be used by default. If the resource has no system-assigned but exactly one user-assigned identity, the user-assigned identity will be used by default.
If the resource has multiple user-assigned identities you will need to unset env_auth
and set use_msi
instead. See the use_msi
section.
+If you are operating in disconnected clouds, or private clouds such as Azure Stack you may want to set disable_instance_discovery = true
. This determines whether rclone requests Microsoft Entra instance metadata from https://login.microsoft.com/
before authenticating. Setting this to true
will skip this request, making you responsible for ensuring the configured authority is valid and trustworthy.
Credentials created with the az
tool can be picked up using env_auth
.
For example if you were to login with a service principal like this:
@@ -28084,10 +29199,14 @@ container/
If use_msi
is set then managed service identity credentials are used. This authentication only works when running in an Azure service. env_auth
needs to be unset to use this.
However if you have multiple user identities to choose from these must be explicitly specified using exactly one of the msi_object_id
, msi_client_id
, or msi_mi_res_id
parameters.
If none of msi_object_id
, msi_client_id
, or msi_mi_res_id
is set, this is equivalent to using env_auth
.
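A minimal config sketch for picking a specific user-assigned identity, with env_auth left unset (all values illustrative):
[azmsi]
type = azureblob
account = youraccount
use_msi = true
msi_client_id = 00000000-0000-0000-0000-000000000000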
+Azure CLI tool az
+Set to use the Azure CLI tool az
as the sole means of authentication.
+Setting this can be useful if you wish to use the az
CLI on a host with a System Managed Identity that you do not want to use.
+Don't set env_auth
at the same time.
Anonymous
If you want to access resources with public anonymous access then set account
only. You can do this without making an rclone config:
rclone lsf :azureblob,account=ACCOUNT:CONTAINER
-Standard options
+Standard options
Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).
--azureblob-account
Azure Storage Account Name.
@@ -28183,7 +29302,7 @@ container/
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage).
--azureblob-client-send-certificate-chain
Send the certificate chain when using certificate auth.
@@ -28233,6 +29352,18 @@ container/
- Type: string
- Required: false
+--azureblob-disable-instance-discovery
+Skip requesting Microsoft Entra instance metadata
+This should be set true only by applications authenticating in disconnected clouds, or private clouds such as Azure Stack.
+It determines whether rclone requests Microsoft Entra instance metadata from https://login.microsoft.com/
before authenticating.
+Setting this to true will skip this request, making you responsible for ensuring the configured authority is valid and trustworthy.
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREBLOB_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
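+A hypothetical config sketch for a private cloud such as Azure Stack (account and endpoint illustrative):
+[azstack]
+type = azureblob
+account = youraccount
+endpoint = https://youraccount.blob.azurestack.example
+disable_instance_discovery = true
+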
--azureblob-use-msi
Use a managed service identity to authenticate (only works in Azure).
When true, use a managed service identity to authenticate to Azure Storage instead of a SAS token or account key.
@@ -28284,6 +29415,18 @@ container/
- Type: bool
- Default: false
+--azureblob-use-az
+Use Azure CLI tool az for authentication
+Set to use the Azure CLI tool az as the sole means of authentication.
+Setting this can be useful if you wish to use the az CLI on a host with a System Managed Identity that you do not want to use.
+Don't set env_auth at the same time.
+Properties:
+
+- Config: use_az
+- Env Var: RCLONE_AZUREBLOB_USE_AZ
+- Type: bool
+- Default: false
+
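+For example, after logging in with az login, something like this should list a container using only the CLI credentials (account and container names illustrative):
+rclone lsd :azureblob,account=youraccount,use_az=true:container
+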
--azureblob-endpoint
Endpoint for the service.
Leave blank normally.
@@ -28505,7 +29648,7 @@ container/
Also, if you want to access a storage emulator instance running on a different machine, you can override the endpoint
parameter in the advanced settings, setting it to http(s)://<host>:<port>/devstoreaccount1
(e.g. http://10.254.2.5:10000/devstoreaccount1
).
Microsoft Azure Files Storage
Paths are specified as remote:
You may put subdirectories in too, e.g. remote:path/to/dir
.
-Configuration
+Configuration
Here is an example of making a Microsoft Azure Files Storage configuration. For a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -28750,7 +29893,7 @@ y/e/d>
If use_msi
is set then managed service identity credentials are used. This authentication only works when running in an Azure service. env_auth
needs to be unset to use this.
However if you have multiple user identities to choose from these must be explicitly specified using exactly one of the msi_object_id
, msi_client_id
, or msi_mi_res_id
parameters.
If none of msi_object_id
, msi_client_id
, or msi_mi_res_id
is set, this is equivalent to using env_auth
.
-Standard options
+Standard options
Here are the Standard options specific to azurefiles (Microsoft Azure Files).
--azurefiles-account
Azure Storage Account Name.
@@ -28865,7 +30008,7 @@ y/e/d>
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to azurefiles (Microsoft Azure Files).
--azurefiles-client-send-certificate-chain
Send the certificate chain when using certificate auth.
@@ -29040,7 +30183,7 @@ y/e/d>
Microsoft OneDrive
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -29152,6 +30295,17 @@ y/e/d> y
- In the rclone config, set
token_url
to https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token
.
Note: If you have a special region, you may need a different host in steps 4 and 5. Here are some hints.
+Using OAuth Client Credential flow
+OAuth Client Credential flow will allow rclone to use permissions directly associated with the Azure AD Enterprise application, rather than adopting the context of an Azure AD user account.
+This flow can be enabled by following the steps below:
+
+- Create the Enterprise App registration in the Azure AD portal and obtain a Client ID and Client Secret as described above.
+- Ensure that the application has the appropriate permissions and they are assigned as Application Permissions
+- Configure the remote, ensuring that Client ID and Client Secret are entered correctly.
+- In the Advanced Config section, enter
true
for client_credentials
and in the tenant
section enter the tenant ID.
+
+When it comes to choosing the type of connection, note that not all types work with the client credentials flow. In particular the "onedrive" option does not work. You can use the "sharepoint" option or, if that does not find the correct drive ID, type it in manually with the "driveid" option.
+NOTE Assigning permissions directly to the application means that anyone with the Client ID and Client Secret can access your OneDrive files. Take care to safeguard these credentials.
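+A sketch of the resulting config file section, assuming the drive ID was discovered during setup (all values illustrative):
+[business]
+type = onedrive
+client_id = YOUR_CLIENT_ID
+client_secret = YOUR_CLIENT_SECRET
+client_credentials = true
+tenant = YOUR_TENANT_ID
+drive_id = YOUR_DRIVE_ID
+drive_type = business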
Modification times and hashes
OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
OneDrive Personal, OneDrive for Business and Sharepoint Server support QuickXorHash.
@@ -29265,7 +30419,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Deleting files
Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.
-Standard options
+Standard options
Here are the Standard options specific to onedrive (Microsoft OneDrive).
--onedrive-client-id
OAuth Client Id.
@@ -29315,7 +30469,17 @@ y/e/d> y
-Advanced options
+--onedrive-tenant
+ID of the service principal's tenant. Also called its directory ID.
+Set this if using the Client Credential flow.
+Properties:
+
+- Config: tenant
+- Env Var: RCLONE_ONEDRIVE_TENANT
+- Type: string
+- Required: false
+
+Advanced options
Here are the Advanced options specific to onedrive (Microsoft OneDrive).
--onedrive-token
OAuth Access Token as a JSON blob.
@@ -29346,6 +30510,16 @@ y/e/d> y
Type: string
Required: false
+--onedrive-client-credentials
+Use client credentials OAuth flow.
+This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_ONEDRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--onedrive-chunk-size
Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big." Note that the chunks will be buffered into memory.
@@ -29660,75 +30834,75 @@ rclone rc vfs/refresh recursive=true
Permissions are also supported, if --onedrive-metadata-permissions
is set. The accepted values for --onedrive-metadata-permissions
are "read
", "write
", "read,write
", and "off
" (the default). "write
" supports adding new permissions, updating the "role" of existing permissions, and removing permissions. Updating and removing require the Permission ID to be known, so it is recommended to use "read,write
" instead of "write
" if you wish to update/remove permissions.
Permissions are read/written in JSON format using the same schema as the OneDrive API, which differs slightly between OneDrive Personal and Business.
Example for OneDrive Personal:
-[
- {
- "id": "1234567890ABC!123",
- "grantedTo": {
- "user": {
- "id": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- },
- "invitation": {
- "email": "ryan@contoso.com"
- },
- "link": {
- "webUrl": "https://1drv.ms/t/s!1234567890ABC"
- },
- "roles": [
- "read"
- ],
- "shareId": "s!1234567890ABC"
- }
-]
+[
+ {
+ "id": "1234567890ABC!123",
+ "grantedTo": {
+ "user": {
+ "id": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ },
+ "invitation": {
+ "email": "ryan@contoso.com"
+ },
+ "link": {
+ "webUrl": "https://1drv.ms/t/s!1234567890ABC"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "s!1234567890ABC"
+ }
+]
Example for OneDrive Business:
-[
- {
- "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
- "grantedToIdentities": [
- {
- "user": {
- "displayName": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- }
- ],
- "link": {
- "type": "view",
- "scope": "users",
- "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
- },
- "roles": [
- "read"
- ],
- "shareId": "u!LKj1lkdlals90j1nlkascl"
- },
- {
- "id": "5D33DD65C6932946",
- "grantedTo": {
- "user": {
- "displayName": "John Doe",
- "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
- },
- "application": {},
- "device": {}
- },
- "roles": [
- "owner"
- ],
- "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
- }
-]
+[
+ {
+ "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
+ "grantedToIdentities": [
+ {
+ "user": {
+ "displayName": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ }
+ ],
+ "link": {
+ "type": "view",
+ "scope": "users",
+ "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "u!LKj1lkdlals90j1nlkascl"
+ },
+ {
+ "id": "5D33DD65C6932946",
+ "grantedTo": {
+ "user": {
+ "displayName": "John Doe",
+ "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
+ },
+ "application": {},
+ "device": {}
+ },
+ "roles": [
+ "owner"
+ ],
+ "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
+ }
+]
To write permissions, pass in a "permissions" metadata key using this same format. The --metadata-mapper
tool can be very helpful for this.
When adding permissions, an email address can be provided in the User.ID
or DisplayName
properties of grantedTo
or grantedToIdentities
. Alternatively, an ObjectID can be provided in User.ID
. At least one valid recipient must be provided in order to add a permission for a user. Creating a Public Link is also supported, if Link.Scope
is set to "anonymous"
.
Example request to add a "read" permission with --metadata-mapper
:
-{
- "Metadata": {
- "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
- }
-}
+{
+ "Metadata": {
+ "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
+ }
+}
Note that adding a permission can fail if a conflicting permission already exists for the file/folder.
To update an existing permission, include both the Permission ID and the new roles
to be assigned. roles
is the only property that can be changed.
To remove permissions, pass in a blob containing only the permissions you wish to keep (which can be empty, to remove all.) Note that the owner
role will be ignored, as it cannot be removed.
@@ -29973,7 +31147,7 @@ ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader:
OpenDrive
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -30118,7 +31292,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to opendrive (OpenDrive).
--opendrive-username
Username.
@@ -30139,7 +31313,7 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to opendrive (OpenDrive).
--opendrive-encoding
The encoding for the backend.
@@ -30184,7 +31358,7 @@ y/e/d> y
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
Sample command to transfer local artifacts to remote:bucket in oracle object storage:
rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv
-Configuration
+Configuration
Here is an example of making an oracle object storage configuration. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -30367,7 +31541,7 @@ provider = no_auth
If the modification time needs to be updated, rclone will attempt to perform a server side copy to update the modification time if the object can be copied in a single part. In case the object is larger than 5 GiB, the object will be uploaded rather than copied.
Note that reading this from the object takes an additional HEAD
request as the metadata isn't returned in object listings.
The MD5 hash algorithm is supported.
-Multipart uploads
+Multipart uploads
rclone supports multipart uploads with OOS which means that it can upload files bigger than 5 GiB.
Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.
rclone switches from single part uploads to multipart uploads at the point specified by --oos-upload-cutoff
. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).
@@ -30375,7 +31549,7 @@ provider = no_auth
Multipart uploads will use --transfers
* --oos-upload-concurrency
* --oos-chunk-size
extra memory. Single part uploads do not use extra memory.
Single part transfers can be faster than multipart transfers or slower depending on your latency from oos - the more latency, the more likely single part transfers will be faster.
Increasing --oos-upload-concurrency
will increase throughput (8 would be a sensible value) and increasing --oos-chunk-size
also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
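For example, with --transfers 4, --oos-upload-concurrency 8 and --oos-chunk-size 16M, multipart uploads could buffer up to 4 × 8 × 16 MiB = 512 MiB in the worst case.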
-Standard options
+Standard options
Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
--oos-provider
Choose your Auth Provider
@@ -30428,14 +31602,15 @@ provider = no_auth
Required: true
--oos-compartment
-Object storage compartment OCID
+Specify compartment OCID, if you need to list buckets.
+Listing objects works without a compartment OCID.
Properties:
- Config: compartment
- Env Var: RCLONE_OOS_COMPARTMENT
- Provider: !no_auth
- Type: string
-- Required: true
+- Required: false
--oos-region
Object storage Region
@@ -30490,7 +31665,7 @@ provider = no_auth
-Advanced options
+Advanced options
Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
--oos-storage-tier
The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
@@ -30809,7 +31984,7 @@ if not.
QingStor
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
-Configuration
+Configuration
Here is an example of making a QingStor configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -30880,7 +32055,7 @@ y/e/d> y
rclone sync --interactive /home/local/directory remote:bucket
--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
-Multipart uploads
+Multipart uploads
rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5 GiB. Note that files uploaded with multipart upload don't have an MD5SUM.
Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup remote:bucket
for just one bucket, or rclone cleanup remote:
for all buckets. QingStor never removes incomplete multipart uploads itself, so it may be necessary to run this from time to time.
Buckets and Zone
@@ -30905,7 +32080,7 @@ y/e/d> y
Restricted filename characters
The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to qingstor (QingCloud Object Storage).
--qingstor-env-auth
Get QingStor credentials from runtime.
@@ -30986,7 +32161,7 @@ y/e/d> y
-Advanced options
+Advanced options
Here are the Advanced options specific to qingstor (QingCloud Object Storage).
--qingstor-connection-retries
Number of connection retries.
@@ -31059,7 +32234,7 @@ y/e/d> y
Paths may be as deep as required, e.g., remote:directory/subdirectory
.
The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at https://<account>/profile/api-keys
or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.
See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -31152,7 +32327,7 @@ y/e/d> y
For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers
chunks at the same time (shared among all multipart uploads). Chunks are buffered in memory. The minimal chunk size is 10_000_000 bytes by default and can be changed in the advanced configuration, so increasing --transfers
will increase the memory use. The chunk size has a maximum size limit, which is set to 100_000_000 bytes by default and can be changed in the advanced configuration. The size of the uploaded chunk will dynamically change depending on the upload speed. The total memory use equals the number of transfers multiplied by the minimal chunk size. In case there's free memory allocated for the upload (which equals the difference of maximal_summary_chunk_size
and minimal_chunk_size
* transfers
), the chunk size may increase in case of high upload speed. It can likewise decrease in case of upload speed problems. If no free memory is available, all chunks will equal minimal_chunk_size
.
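For example, with the default minimal chunk size of 10_000_000 bytes and --transfers 4, the baseline memory use is 4 × 10_000_000 = 40_000_000 bytes (roughly 38 MiB).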
Deleting files
Files you delete with rclone will end up in Trash and be stored there for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account.
-Standard options
+Standard options
Here are the Standard options specific to quatrix (Quatrix by Maytech).
--quatrix-api-key
API key for accessing Quatrix account
@@ -31172,7 +32347,7 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to quatrix (Quatrix by Maytech).
--quatrix-encoding
The encoding for the backend.
@@ -31249,7 +32424,7 @@ y/e/d> y
rclone interacts with the Sia network by talking to the Sia daemon via its HTTP API, which is usually available on port 9980. By default you will run the daemon locally on the same computer so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980
making external access impossible).
However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:
- Ensure you have the Sia daemon installed directly or in a docker container, because Sia-UI does not support this mode natively.
- Run it on an externally accessible port, for example provide --api-addr :9980
and --disable-api-security
arguments on the daemon command line.
- Enforce an API password for the siad
daemon via the environment variable SIA_API_PASSWORD
or a text file named apipassword
in the daemon directory.
- Set the rclone backend option api_password
taking it from the locations above.
Notes:
1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via the command line siac wallet unlock
. Alternatively you can make siad
unlock your wallet automatically upon startup by running it with the environment variable SIA_WALLET_PASSWORD
.
2. If siad
cannot find the SIA_API_PASSWORD
variable or the apipassword
file in the SIA_DIR
directory, it will generate a random password and store it in the text file named apipassword
under the YOUR_HOME/.sia/
directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword
on Windows. Remember this when you configure the password in rclone.
3. The only way to use siad
without an API password is to run it on localhost with the command line argument --authorize-api=false
, but this is insecure and strongly discouraged.
-Configuration
+Configuration
Here is an example of how to make a sia
remote called mySia
. First, run:
rclone config
This will guide you through an interactive setup process:
@@ -31309,7 +32484,7 @@ y/e/d> y
Upload a local directory to the Sia directory called backup
rclone copy /home/source mySia:backup
-Standard options
+Standard options
Here are the Standard options specific to sia (Sia Decentralized Cloud).
--sia-api-url
Sia daemon API URL, like http://sia.daemon.host:9980.
@@ -31332,7 +32507,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to sia (Sia Decentralized Cloud).
--sia-user-agent
Siad User Agent
@@ -31382,7 +32557,7 @@ y/e/d> y
IBM Bluemix Cloud ObjectStorage Swift
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:container/path/to/dir
.
-Configuration
+Configuration
Here is an example of making a swift configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -31548,7 +32723,7 @@ rclone lsd myremote:
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
--swift-env-auth
Get swift credentials from environment variables in standard OpenStack form.
@@ -31786,7 +32961,7 @@ rclone lsd myremote:
-Advanced options
+Advanced options
Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
--swift-leave-parts-on-error
If true avoid calling abort upload on a failure.
@@ -31910,7 +33085,7 @@ rclone lsd myremote:
pCloud
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -31932,6 +33107,10 @@ Pcloud App Client Id - leave blank normally.
client_id>
Pcloud App Client Secret - leave blank normally.
client_secret>
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
Remote config
Use web browser to automatically authenticate rclone with remote?
* Say Y if the machine running rclone has a web browser you can use
@@ -31956,6 +33135,7 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
See the remote setup docs for how to set it up on a machine with no Internet browser available.
+Note that if you are using remote config with rclone authorize while your pCloud account is in the EU region, you will need to set the hostname in 'Edit advanced config', otherwise you might get a token error.
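+For example, a sketch using rclone config update, assuming an EU-hosted account (the EU API endpoint is eapi.pcloud.com):
+rclone config update remote hostname eapi.pcloud.com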
Note that rclone runs a webserver on your local machine to collect the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/
and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone
like this,
List directories in top level of your pCloud
@@ -31996,7 +33176,7 @@ y/e/d> y
However you can set this to restrict rclone to a specific folder hierarchy.
In order to do this you will have to find the Folder ID
of the directory you wish rclone to display. This will be the folder
field of the URL when you open the relevant folder in the pCloud web interface.
So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid
in the browser, then you use 5xxxxxxxx8
as the root_folder_id
in the config.
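For example, a sketch applying the folder ID from the URL above:
rclone config update remote root_folder_id 5xxxxxxxx8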
-Standard options
+Standard options
Here are the Standard options specific to pcloud (Pcloud).
--pcloud-client-id
OAuth Client Id.
@@ -32018,7 +33198,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to pcloud (Pcloud).
--pcloud-token
OAuth Access Token as a JSON blob.
@@ -32049,6 +33229,16 @@ y/e/d> y
Type: string
Required: false
+--pcloud-client-credentials
+Use client credentials OAuth flow.
+This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PCLOUD_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--pcloud-encoding
The encoding for the backend.
See the encoding section in the overview for more info.
@@ -32121,7 +33311,7 @@ y/e/d> y
PikPak
PikPak is a private cloud drive.
Paths are specified as remote:path
, and may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
Here is an example of making a remote for PikPak.
First run:
rclone config
@@ -32177,7 +33367,7 @@ y/e/d> y
Modification times and hashes
PikPak keeps modification times on objects, and updates them when uploading objects, but it does not support changing only the modification time
The MD5 hash algorithm is supported.
-Standard options
+Standard options
Here are the Standard options specific to pikpak (PikPak).
--pikpak-user
Pikpak username.
@@ -32198,56 +33388,26 @@ y/e/d> y
Type: string
Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to pikpak (PikPak).
---pikpak-client-id
-OAuth Client Id.
-Leave blank normally.
+--pikpak-device-id
+Device ID used for authorization.
Properties:
-- Config: client_id
-- Env Var: RCLONE_PIKPAK_CLIENT_ID
+- Config: device_id
+- Env Var: RCLONE_PIKPAK_DEVICE_ID
- Type: string
- Required: false
---pikpak-client-secret
-OAuth Client Secret.
-Leave blank normally.
+--pikpak-user-agent
+HTTP user agent for pikpak.
+Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on command line.
Properties:
-- Config: client_secret
-- Env Var: RCLONE_PIKPAK_CLIENT_SECRET
+- Config: user_agent
+- Env Var: RCLONE_PIKPAK_USER_AGENT
- Type: string
-- Required: false
-
---pikpak-token
-OAuth Access Token as a JSON blob.
-Properties:
-
-- Config: token
-- Env Var: RCLONE_PIKPAK_TOKEN
-- Type: string
-- Required: false
-
---pikpak-auth-url
-Auth server URL.
-Leave blank to use the provider defaults.
-Properties:
-
-- Config: auth_url
-- Env Var: RCLONE_PIKPAK_AUTH_URL
-- Type: string
-- Required: false
-
---pikpak-token-url
-Token server url.
-Leave blank to use the provider defaults.
-Properties:
-
-- Config: token_url
-- Env Var: RCLONE_PIKPAK_TOKEN_URL
-- Type: string
-- Required: false
+- Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
--pikpak-root-folder-id
ID of the root folder. Leave blank normally.
@@ -32279,6 +33439,16 @@ y/e/d> y
Type: bool
Default: false
+
+--pikpak-no-media-link
+Use original file links instead of media links.
+This avoids issues caused by invalid media links, but may reduce download speeds.
+Properties:
+
+- Config: no_media_link
+- Env Var: RCLONE_PIKPAK_NO_MEDIA_LINK
+- Type: bool
+- Default: false
+
--pikpak-hash-memory-limit
Files bigger than this will be cached on disk to calculate hash if required.
Properties:
@@ -32439,7 +33609,7 @@ e/n/d/r/c/s/q> q
rclone lsf Pixeldrain: --dirs-only -Fpi
This will print directories in your Pixeldrain
home directory and their public IDs.
Enter this directory ID in the rclone config and you will be able to access the directory.
-Standard options
+Standard options
Here are the Standard options specific to pixeldrain (Pixeldrain Filesystem).
--pixeldrain-api-key
API key for your pixeldrain account. Found on https://pixeldrain.com/user/api_keys.
@@ -32460,7 +33630,7 @@ e/n/d/r/c/s/q> q
Type: string
Default: "me"
-Advanced options
+Advanced options
Here are the Advanced options specific to pixeldrain (Pixeldrain Filesystem).
--pixeldrain-api-url
The API endpoint to connect to. In the vast majority of cases it's fine to leave this at default. It is only intended to be changed for testing purposes.
@@ -32528,7 +33698,7 @@ e/n/d/r/c/s/q> q
premiumize.me
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -32605,7 +33775,7 @@ y/e/d>
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to premiumizeme (premiumize.me).
--premiumizeme-client-id
OAuth Client Id.
@@ -32637,7 +33807,7 @@ y/e/d>
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to premiumizeme (premiumize.me).
--premiumizeme-token
OAuth Access Token as a JSON blob.
@@ -32668,6 +33838,16 @@ y/e/d>
Type: string
Required: false
+--premiumizeme-client-credentials
+Use client credentials OAuth flow.
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PREMIUMIZEME_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
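+As a sketch, a config section using this flow might look like the following (the client ID and secret are placeholders you would obtain when registering your own OAuth app):
+[remote]
+type = premiumizeme
+client_id = YOUR_CLIENT_ID
+client_secret = YOUR_CLIENT_SECRET
+client_credentials = true
+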
--premiumizeme-encoding
The encoding for the backend.
See the encoding section in the overview for more info.
@@ -32760,7 +33940,7 @@ y/e/d> y
Please set your mailbox password in the advanced config section.
Caching
The cache is currently built for the case where rclone is the only instance performing operations on the mount point. The event system, which is the Proton API system that provides visibility of what has changed on the drive, is yet to be implemented, so updates from other clients won't be reflected in the cache. Thus, if concurrent clients access the same mount point, stale data may be cached.
-Standard options
+Standard options
Here are the Standard options specific to protondrive (Proton Drive).
--protondrive-username
The username of your proton account
@@ -32792,7 +33972,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to protondrive (Proton Drive).
--protondrive-mailbox-password
The mailbox password of your two-password proton account.
@@ -32912,7 +34092,7 @@ y/e/d> y
put.io
Paths are specified as remote:path
put.io paths may be as deep as required, e.g. remote:directory/subdirectory.
-Configuration
+Configuration
The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
@@ -32996,7 +34176,7 @@ e/n/d/r/c/s/q> q
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to putio (Put.io).
--putio-client-id
OAuth Client Id.
@@ -33018,7 +34198,7 @@ e/n/d/r/c/s/q> q
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to putio (Put.io).
--putio-token
OAuth Access Token as a JSON blob.
@@ -33049,6 +34229,16 @@ e/n/d/r/c/s/q> q
Type: string
Required: false
+--putio-client-credentials
+Use client credentials OAuth flow.
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PUTIO_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--putio-encoding
The encoding for the backend.
See the encoding section in the overview for more info.
@@ -33140,7 +34330,7 @@ y/e/d> y
Please set your mailbox password in the advanced config section.
Caching
The cache is currently built for the case where rclone is the only instance performing operations on the mount point. The event system, which is the Proton API system that provides visibility of what has changed on the drive, is yet to be implemented, so updates from other clients won't be reflected in the cache. Thus, if concurrent clients access the same mount point, stale data may be cached.
-Standard options
+Standard options
Here are the Standard options specific to protondrive (Proton Drive).
--protondrive-username
The username of your proton account
@@ -33172,7 +34362,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to protondrive (Proton Drive).
--protondrive-mailbox-password
The mailbox password of your two-password proton account.
@@ -33291,7 +34481,7 @@ y/e/d> y
The Proton-API-Bridge attempts to bridge the gap so that rclone can be built on top of it quickly. This codebase handles the intricate tasks before and after calling Proton APIs, particularly the complex encryption scheme, allowing developers to implement features for other software on top of this codebase. There are likely quite a few errors in this library, as there isn't official documentation available.
Seafile
This is a backend for the Seafile storage service:
- It works with both the free community edition and the professional edition.
- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users.
- Using a Library API Token is not supported.
-Configuration
+Configuration
There are two distinct modes in which you can set up your remote:
- You point your remote to the root of the server, meaning you don't specify a library during the configuration. Paths are specified as remote:library. You may put subdirectories in too, e.g. remote:library/path/to/dir.
- You point your remote to a specific library during the configuration. Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode.)
Configuration in root mode
Here is an example of making a seafile configuration for a user with no two-factor authentication. First run
@@ -33492,7 +34682,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
It has been actively developed using the seafile docker image of these versions:
- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition
- 9.0.10 community edition
Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.
Each new version of rclone is automatically tested against the latest docker image of the seafile community server.
-Standard options
+Standard options
Here are the Standard options specific to seafile (seafile).
--seafile-url
URL of seafile host to connect to.
@@ -33568,7 +34758,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to seafile (seafile).
--seafile-create-library
Should rclone create a library if it doesn't exist.
@@ -33609,7 +34799,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory. For example, rclone lsd remote: would list the home directory of the user configured in the rclone remote config (i.e. /home/sftpuser). However, rclone lsd remote:/ would list the root directory of the remote machine (i.e. /).
Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net and Hetzner, on the other hand, require users to omit the leading /.
Note that by default rclone will try to execute shell commands on the server, see shell access considerations.
-Configuration
+Configuration
Here is an example of making an SFTP configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -33688,7 +34878,7 @@ y/e/d> y
If you set the ask_password option, rclone will prompt for a password when needed and no password has been configured.
Certificate-signed keys
With traditional key-based authentication, you configure your private key only, and the public key built into it will be used during the authentication process.
-If you have a certificate you may use it to sign your public key, creating a separate SSH user certificate that should be used instead of the plain public key extracted from the private key. Then you must provide the path to the user certificate public key file in pubkey_file.
+If you have a certificate you may use it to sign your public key, creating a separate SSH user certificate that should be used instead of the plain public key extracted from the private key. Then you must provide the path to the user certificate public key file in pubkey_file or the content of the file in pubkey.
Note: This is not the traditional public key paired with your private key, typically saved as /home/$USER/.ssh/id_rsa.pub. Setting this path in pubkey_file will not work.
Example:
[remote]
@@ -33753,7 +34943,7 @@ known_hosts_file = ~/.ssh/known_hosts
About command
The about command returns the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote.
SFTP usually supports the about command, but it depends on the server. If the server implements the vendor-specific VFS statistics extension, which is normally the case with OpenSSH instances, it will be used. If not, but the same login has access to a Unix shell where the df command is available (e.g. in the remote's PATH), then this will be used instead. If the server shell is PowerShell, probably with a Windows OpenSSH server, rclone will use a built-in shell command (see shell access). If none of the above is applicable, about will fail.
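For example, you can check what the about command reports for your server with:
rclone about remote:
If none of the mechanisms above are available, the command will return an error.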
-Standard options
+Standard options
Here are the Standard options specific to sftp (SSH/SFTP).
--sftp-host
SSH host to connect to.
@@ -33829,6 +35019,15 @@ known_hosts_file = ~/.ssh/known_hosts
Type: string
Required: false
+--sftp-pubkey
+SSH public certificate for public certificate based authentication. Set this if you have a signed certificate you want to use for authentication. If specified, this will override pubkey_file.
+Properties:
+
+- Config: pubkey
+- Env Var: RCLONE_SFTP_PUBKEY
+- Type: string
+- Required: false
+
--sftp-pubkey-file
Optional path to public key file.
Set this if you have a signed certificate you want to use for authentication.
@@ -33908,7 +35107,7 @@ known_hosts_file = ~/.ssh/known_hosts
Type: SpaceSepList
Default:
-Advanced options
+Advanced options
Here are the Advanced options specific to sftp (SSH/SFTP).
--sftp-known-hosts-file
Optional path to known_hosts file.
@@ -34247,13 +35446,13 @@ server_command = sudo /usr/libexec/openssh/sftp-server
See Hetzner's documentation for details
SMB
SMB is a communication protocol to share files over network.
-This relies on go-smb2 library for communication with SMB protocol.
+This relies on go-smb2 library for communication with SMB protocol.
Paths are specified as remote:sharename (or remote: for the lsd command). You may put subdirectories in too, e.g. remote:item/path/to/dir.
Notes
The first path segment must be the name of the share, which you entered when you started to share on Windows. On smbd, it's the section title in the smb.conf file (usually in /etc/samba/). You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:).
You can't access the shared printers from rclone, obviously.
You can't use Anonymous access for logging in. You have to use the guest user with an empty password instead. The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods. Alternatively, the local backend on Windows can access SMB servers using UNC paths, e.g. \\server\share. This doesn't apply to non-Windows OSes, such as Linux and macOS.
-Configuration
+Configuration
Here is an example of making a SMB configuration.
First run
rclone config
@@ -34328,7 +35527,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> d
-Standard options
+Standard options
Here are the Standard options specific to smb (SMB / CIFS).
--smb-host
SMB server hostname to connect to.
@@ -34389,7 +35588,7 @@ y/e/d> d
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to smb (SMB / CIFS).
--smb-idle-timeout
Max time before closing idle connections.
@@ -34500,7 +35699,7 @@ y/e/d> d
S3 backend: secret encryption key is shared with the gateway
-Configuration
+Configuration
To make a new Storj configuration you need one of the following:
- Access Grant that someone else shared with you.
- API Key of a Storj project you are a member of.
Here is an example of how to make a remote called remote. First run:
rclone config
@@ -34597,7 +35796,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-Standard options
+Standard options
Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).
--storj-provider
Choose an authentication method.
@@ -34676,7 +35875,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to storj (Storj Decentralized Cloud Storage).
--storj-description
Description of the remote.
@@ -34749,7 +35948,7 @@ y/e/d> y
To fix these, please raise your system limits. You can do this by issuing ulimit -n 65536 just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc, or change the system-wide configuration, usually /etc/sysctl.conf and/or /etc/security/limits.conf, but please refer to your operating system manual.
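For example, a one-off invocation with a raised limit might look like this (the remote and mountpoint are illustrative):
ulimit -n 65536
rclone mount storj:bucket /mnt/storj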
SugarSync
SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.
-Configuration
+Configuration
The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
@@ -34822,7 +36021,7 @@ y/e/d> y
Deleting files
Deleted files will be moved to the "Deleted items" folder by default.
However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.
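For example, to delete a file immediately instead of moving it to "Deleted items":
rclone delete remote:path/to/file.txt --sugarsync-hard-delete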
-Standard options
+Standard options
Here are the Standard options specific to sugarsync (Sugarsync).
--sugarsync-app-id
Sugarsync App ID.
@@ -34863,7 +36062,7 @@ y/e/d> y
Type: bool
Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to sugarsync (Sugarsync).
--sugarsync-refresh-token
Sugarsync refresh token.
@@ -34951,7 +36150,7 @@ y/e/d> y
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for Uloz.to involves filling in the user credentials. rclone config walks you through it.
-Configuration
+Configuration
Here is an example of how to make a remote called remote. First run:
rclone config
This will guide you through an interactive setup process:
@@ -35044,7 +36243,7 @@ y/e/d> y
In order to do this you will have to find the Folder slug of the folder you wish to use as root. This will be the last segment of the URL when you open the relevant folder in the Uloz.to web interface.
For example, for exploring a folder with URL https://uloz.to/fm/my-files/foobar, foobar should be used as the root slug.
root_folder_slug can be used alongside a specific path in the remote path. For example, if your remote's root_folder_slug corresponds to /foo/bar, remote:baz/qux will refer to ABSOLUTE_ULOZTO_ROOT/foo/bar/baz/qux.
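As a sketch, the config for the URL example above would contain:
[remote]
type = ulozto
root_folder_slug = foobar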
-Standard options
+Standard options
Here are the Standard options specific to ulozto (Uloz.to).
--ulozto-app-token
The application token identifying the app. An app API key can be either found in the API doc https://uloz.to/upload-resumable-api-beta or obtained from customer service.
@@ -35074,7 +36273,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to ulozto (Uloz.to).
--ulozto-root-folder-slug
If set, rclone will use this folder as the root folder for all operations. For example, if the slug identifies 'foo/bar/', 'ulozto:baz' is equivalent to 'ulozto:foo/bar/baz' without any root slug set.
@@ -35123,7 +36322,7 @@ y/e/d> y
This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long term storage.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
-Configuration
+Configuration
To configure an Uptobox backend you'll need your personal API token. You'll find it in your account settings.
Here is an example of how to make a remote called remote with the default setup. First run:
rclone config
@@ -35203,7 +36402,7 @@ y/e/d>
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Standard options
+Standard options
Here are the Standard options specific to uptobox (Uptobox).
--uptobox-access-token
Your access token.
@@ -35215,7 +36414,7 @@ y/e/d>
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to uptobox (Uptobox).
--uptobox-private
Set to make uploaded files private
@@ -35259,7 +36458,7 @@ y/e/d>
Subfolders can be used in upstream remotes. Assume a union remote named backup with the remotes mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.
There is no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.
-Configuration
+Configuration
Here is an example of how to make a union called remote for local folders. First run:
rclone config
This will guide you through an interactive setup process:
@@ -35492,7 +36691,7 @@ upstreams = /local:writeback remote:dir
When files are written, they will be written to both remote:dir and /local.
As many remotes as desired can be added to upstreams but there should only be one :writeback tag.
Rclone does not manage the :writeback remote in any way other than writing files back to it. So if you need to expire old files or manage the size then you will have to do this yourself.
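For example, a union with two read/write upstreams and a local writeback cache might be configured like this (the remote names and local path are illustrative):
[backup]
type = union
upstreams = remote1:data remote2:data /var/cache/rclone-writeback:writeback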
-Standard options
+Standard options
Here are the Standard options specific to union (Union merges the contents of several upstream fs).
--union-upstreams
List of space separated upstreams.
@@ -35541,7 +36740,7 @@ upstreams = /local:writeback remote:dir
Type: int
Default: 120
-Advanced options
+Advanced options
Here are the Advanced options specific to union (Union merges the contents of several upstream fs).
--union-min-free-space
Minimum viable free space for lfs/eplfs policies.
@@ -35568,7 +36767,7 @@ upstreams = /local:writeback remote:dir
WebDAV
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
-Configuration
+Configuration
To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.
Here is an example of how to make a remote called remote. First run:
rclone config
@@ -35645,7 +36844,7 @@ y/e/d> y
Modification times and hashes
Plain WebDAV does not support modified times. However when used with Fastmail Files, Owncloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes, however when used with Fastmail Files, Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
-Standard options
+Standard options
Here are the Standard options specific to webdav (WebDAV).
--webdav-url
URL of http host to connect to.
@@ -35726,7 +36925,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to webdav (WebDAV).
--webdav-bearer-token-command
Command to run to get a bearer token.
@@ -35808,6 +37007,18 @@ y/e/d> y
Type: string
Required: false
+--webdav-auth-redirect
+Preserve authentication on redirect.
+If the server redirects rclone to a new domain when it is trying to read a file then normally rclone will drop the Authorization: header from the request.
+This is standard security practice to avoid sending your credentials to an unknown webserver.
+However, preserving the header is desirable in some circumstances. If you are getting an error like "401 Unauthorized" when rclone is attempting to read files from the webdav server then you can try this option.
+Properties:
+
+- Config: auth_redirect
+- Env Var: RCLONE_WEBDAV_AUTH_REDIRECT
+- Type: bool
+- Default: false
+
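+For example, if downloads fail with "401 Unauthorized" after a redirect, you could retry with:
+rclone copy remote:path/file.txt /tmp --webdav-auth-redirect
+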
--webdav-description
Description of the remote.
Properties:
@@ -35895,7 +37106,7 @@ vendor = other
bearer_token_command = oidc-token XDC
Yandex Disk
Yandex Disk is a cloud storage solution created by Yandex.
-Configuration
+Configuration
Here is an example of making a yandex configuration. First run
rclone config
This will guide you through an interactive setup process:
@@ -35960,7 +37171,7 @@ y/e/d> y
Restricted filename characters
The default restricted characters set are replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to yandex (Yandex Disk).
--yandex-client-id
OAuth Client Id.
@@ -35982,7 +37193,7 @@ y/e/d> y
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to yandex (Yandex Disk).
--yandex-token
OAuth Access Token as a JSON blob.
@@ -36013,6 +37224,16 @@ y/e/d> y
Type: string
Required: false
+--yandex-client-credentials
+Use client credentials OAuth flow.
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_YANDEX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--yandex-hard-delete
Delete files permanently rather than putting them into the trash.
Properties:
@@ -36056,7 +37277,7 @@ y/e/d> y
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
Zoho Workdrive
Zoho WorkDrive is a cloud storage solution created by Zoho.
-Configuration
+Configuration
Here is an example of making a zoho configuration. First run
rclone config
This will guide you through an interactive setup process:
@@ -36136,7 +37357,7 @@ y/e/d>
To view your current quota you can use the rclone about remote: command which will display your current usage.
Restricted filename characters
Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.
-Standard options
+Standard options
Here are the Standard options specific to zoho (Zoho).
--zoho-client-id
OAuth Client Id.
@@ -36195,7 +37416,7 @@ y/e/d>
-Advanced options
+Advanced options
Here are the Advanced options specific to zoho (Zoho).
--zoho-token
OAuth Access Token as a JSON blob.
@@ -36226,6 +37447,25 @@ y/e/d>
Type: string
Required: false
+--zoho-client-credentials
+Use client credentials OAuth flow.
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_ZOHO_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
+--zoho-upload-cutoff
+Cutoff for switching to large file upload API (>= 10 MiB).
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_ZOHO_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 10Mi
+
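+For example, to only use the large file upload API for files over 50 MiB:
+rclone copy /path/to/bigfile.iso remote:backup --zoho-upload-cutoff 50Mi
+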
--zoho-encoding
The encoding for the backend.
See the encoding section in the overview for more info.
@@ -36257,7 +37497,7 @@ y/e/d>
Local paths are specified as normal filesystem paths, e.g. /path/to/wherever, so
rclone sync --interactive /home/source /tmp/destination
Will sync /home/source to /tmp/destination.
-Configuration
+Configuration
For consistency's sake one can also configure a remote of type local in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not to.
Modification times
Rclone reads and writes the modification times using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
@@ -36568,9 +37808,9 @@ nounc = true
6 two/three
6 b/two
6 b/one
---links, -l
+--local-links, --links, -l
Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).
-If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a '.rclonelink' suffix in the remote storage.
+If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a .rclonelink suffix in the remote storage.
The text file will contain the target of the symbolic link (see example).
This flag applies to all commands.
For example, supposing you have a directory structure like this
@@ -36580,7 +37820,7 @@ nounc = true
└── file2 -> /home/user/file3
Copying the entire directory with '-l'
$ rclone copy -l /tmp/a/ remote:/tmp/a/
-The remote files are created with a '.rclonelink' suffix
+The remote files are created with a .rclonelink suffix
$ rclone ls remote:/tmp/a
5 file1.rclonelink
14 file2.rclonelink
@@ -36610,6 +37850,7 @@ $ tree /tmp/b
$ tree /tmp/c
/tmp/c
└── file1 -> ./file4
+Note that --local-links just enables this feature for the local backend. --links and -l enable the feature for all supported backends and the VFS.
Note that this flag is incompatible with --copy-links / -L.
Restricting filesystems with --one-file-system
Normally rclone will recurse through filesystems as mounted.
@@ -36633,7 +37874,7 @@ $ tree /tmp/c
0 file2
NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.
-Advanced options
+Advanced options
Here are the Advanced options specific to local (Local Disk).
--local-nounc
Disable UNC (long path names) conversion on Windows.
@@ -36660,8 +37901,8 @@ $ tree /tmp/c
Type: bool
Default: false
---links / -l
-Translate symlinks to/from regular files with a '.rclonelink' extension.
+--local-links
+Translate symlinks to/from regular files with a '.rclonelink' extension for the local backend.
Properties:
- Config: links
@@ -36949,6 +38190,289 @@ $ tree /tmp/c
- "error": return an error based on option value
Changelog
+v1.69.0 - 2025-01-12
+See commits
+
+- New backends
+
+- Cloudinary
+- iCloud Drive
+- Outscale
+- Selectel
+
+- Security fixes
+
+- serve sftp: Resolve CVE-2024-45337 - Misuse of ServerConfig.PublicKeyCallback may cause authorization bypass (dependabot)
+
+- Rclone was not vulnerable to this.
+- See https://github.com/advisories/GHSA-v778-237x-gjrc
+
+- build: Update golang.org/x/net to v0.33.0 to fix CVE-2024-45338 - Non-linear parsing of case-insensitive content (Nick Craig-Wood)
+
+- Rclone was not vulnerable to this.
+- See https://github.com/advisories/GHSA-w32m-9786-jp63
+
+
+- New Features
+
+- accounting: Write the current bwlimit to the log on SIGUSR2 (Nick Craig-Wood)
+- bisync: Change exit code from 2 to 7 for critically aborted run (albertony)
+- build
+
+- Update all dependencies (Nick Craig-Wood)
+- Replace Windows-specific NewLazyDLL with NewLazySystemDLL (albertony)
+
+- cmd: Change exit code from 1 to 2 for syntax and usage errors (albertony)
+- docker serve: make sure all mount and VFS options are parsed (Nick Craig-Wood)
+- doc fixes (albertony, Alexandre Hamez, Anthony Metzidis, buengese, Dan McArdle, David Seifert, Francesco Frassinelli, Michael R. Davis, Nick Craig-Wood, Pawel Palucha, Randy Bush, remygrandin, Sam Harrison, shenpengfeng, tgfisher, Thomas ten Cate, ToM, Tony Metzidis, vintagefuture, Yxxx)
+- fs: Make --links flag global and add new --local-links and --vfs-links flags (Nick Craig-Wood)
+- http servers: Disable automatic authentication skipping for unix sockets in http servers (Moises Lima)
+
+- This was making it impossible to use unix sockets with a proxy
+- This might now cause rclone to need authentication where it didn't before
+
+- oauthutil: add support for OAuth client credential flow (Martin Hassack, Nick Craig-Wood)
+- operations: make log messages consistent for mkdir/rmdir at INFO level (Nick Craig-Wood)
+- rc: Add relative to vfs/queue-set-expiry (Nick Craig-Wood)
+- serve dlna: Sort the directory entries by directories first then alphabetically by name (Nick Craig-Wood)
+- serve nfs
+
+- Introduce symlink support (Nick Craig-Wood)
+- Implement --nfs-cache-type symlink (Nick Craig-Wood)
+
+- size: Make output compatible with -P (Nick Craig-Wood)
+- test makefiles: Add --flat flag for making directories with many entries (Nick Craig-Wood)
+
+- Bug Fixes
+
+- accounting
+
+- Fix global error accounting (Benjamin Legrand)
+- Fix debug printing when debug wasn't set (Nick Craig-Wood)
+- Fix race stopping/starting the stats counter (Nick Craig-Wood)
+
+- rc/job: Use mutex for adding listeners thread safety (hayden.pan)
+- serve docker: Fix incorrect GID assignment (TAKEI Yuya)
+- serve nfs: Fix missing inode numbers which was messing up ls -laR (Nick Craig-Wood)
+- serve s3: Fix Last-Modified timestamp (Nick Craig-Wood)
+- serve sftp: Fix loading of authorized keys file with comment on last line (albertony)
+
+- Mount
+
+- Introduce symlink support (Filipe Azevedo, Nick Craig-Wood)
+- Better snap mount error message (divinity76)
+- mount2: Fix missing . and .. entries (Filipe Azevedo)
+
+- VFS
+
+- With --vfs-used-is-size the value is calculated and then thrown away (Ilias Ozgur Can Leonard)
+- Add symlink support to VFS (Filipe Azevedo, Nick Craig-Wood)
+
+- This can be enabled with the specific --vfs-links flag or the global --links flag
+
+- Fix open files disappearing from directory listings (Nick Craig-Wood)
+- Add remote name to vfs cache log messages (Nick Craig-Wood)
+
+- Cache
+
+- Fix parent not getting pinned when remote is a file (nielash)
+
+- Azure Blob
+
+- Add --azureblob-disable-instance-discovery (Nick Craig-Wood)
+- Add --azureblob-use-az to force the use of the Azure CLI for auth (Nick Craig-Wood)
+- Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+
+- Azurefiles
+
+- Fix missing x-ms-file-request-intent header (Nick Craig-Wood)
+
+- B2
+
+- Add daysFromStartingToCancelingUnfinishedLargeFiles to backend lifecycle command (Louis Laureys)
+
+- Box
+
+- Fix server-side copying a file over existing dst (nielash)
+- Fix panic when decoding corrupted PEM from JWT file (Nick Craig-Wood)
+
+- Drive
+
+- Add support for markdown format (Noam Ross)
+- Implement rclone backend rescue to rescue orphaned files (Nick Craig-Wood)
+
+- Dropbox
+
+- Fix server side copying over existing object (Nick Craig-Wood)
+- Fix return status when full to be fatal error (Nick Craig-Wood)
+
+- FTP
+
+- Implement --ftp-no-check-upload to allow upload to write-only dirs (Nick Craig-Wood)
+- Fix ls commands returning empty on "Microsoft FTP Service" servers (Francesco Frassinelli)
+
+- Gofile
+
+- Fix server side copying over existing object (Nick Craig-Wood)
+
+- Google Cloud Storage
+
+- Add access token auth with --gcs-access-token (Leandro Piccilli)
+- Update docs on service account access tokens (Anthony Metzidis)
+
+- Googlephotos
+
+- Implement --gphotos-proxy to allow download of full resolution media (Nick Craig-Wood)
+- Fix nil pointer crash on upload (Nick Craig-Wood)
+
+- HTTP
+
+- Fix incorrect URLs with initial slash (Oleg Kunitsyn)
+
+- Onedrive
+
+- Add support for OAuth client credential flow (Martin Hassack, Nick Craig-Wood)
+- Fix time precision for OneDrive personal (Nick Craig-Wood)
+- Fix server side copying over existing object (Nick Craig-Wood)
+
+- Opendrive
+
+- Add rclone about support to backend (quiescens)
+
+- Oracle Object Storage
+
+- Make specifying compartmentid optional (Manoj Ghosh)
+- Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+
+- Pikpak
+
+- Add option to use original file links (wiserain)
+
+- Protondrive
+
+- Improve performance of Proton Drive backend (Lawrence Murray)
+
+- Putio
+
+- Fix server side copying over existing object (Nick Craig-Wood)
+
+- S3
+
+- Add initial --s3-directory-bucket to support AWS Directory Buckets (Nick Craig-Wood)
+- Add Wasabi eu-south-1 region (Diego Monti)
+- Fix download of compressed files from Cloudflare R2 (Nick Craig-Wood)
+- Rename glacier storage class to flexible retrieval (Henry Lee)
+- Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+
+- SFTP
+
+- Allow inline ssh public certificate for sftp (Dimitar Ivanov)
+- Fix nil check when using auth proxy (Nick Craig-Wood)
+
+- Smb
+
+- Add initial support for Kerberos authentication (more work needed). (Francesco Frassinelli)
+- Fix panic if stat fails (Nick Craig-Wood)
+
+- Sugarsync
+
+- Fix server side copying over existing object (Nick Craig-Wood)
+
+- WebDAV
+
+- Nextcloud: implement backoff and retry for 423 LOCKED errors (Nick Craig-Wood)
+- Add --webdav-auth-redirect to fix 401 unauthorized on redirect (Nick Craig-Wood)
+
+- Yandex
+
+- Fix server side copying over existing object (Nick Craig-Wood)
+
+- Zoho
+
+- Use download server to accelerate downloads (buengese)
+- Switch to large file upload API for larger files, fix missing URL encoding of filenames for the upload API (buengese)
+- Print clear error message when missing oauth scope (buengese)
+- Try to handle rate limits a bit better (buengese)
+- Add support for private spaces (buengese)
+- Make upload cutoff configurable (buengese)
+
+
+v1.68.2 - 2024-11-15
+See commits
+
+- Security fixes
+
+- local backend: CVE-2024-52522: fix permission and ownership on symlinks with --links and --metadata (Nick Craig-Wood)
+
+- Only affects users using --metadata and --links and copying files to the local backend
+- See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+
+- build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
+
+- This is an issue in a dependency which is used for JWT certificates
+- See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+
+
+- Bug Fixes
+
+- accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)
+- bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)
+- dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
+- serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)
+- doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
+
+- Local
+
+- Fix permission and ownership on symlinks with --links and --metadata (Nick Craig-Wood)
+- Fix --copy-links on macOS when cloning (nielash)
+
+- Onedrive
+
+- Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
+
+- Pikpak
+
+- Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+- Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)
+
+- S3
+
+- Fix crash when using --s3-download-url after migration to SDKv2 (Nick Craig-Wood)
+- Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)
+- Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+
+
+v1.68.1 - 2024-09-24
+See commits
+
+- Bug Fixes
+
+- build: Fix docker release build (ttionya)
+- doc fixes (Nick Craig-Wood, Pawel Palucha)
+- fs
+
+- Fix --dump filters not always appearing (Nick Craig-Wood)
+- Fix setting stringArray config values from environment variables (Nick Craig-Wood)
+
+- rc: Fix default value of --metrics-addr (Nick Craig-Wood)
+- serve docker: Add missing vfs-read-chunk-streams option in docker volume driver (Divyam)
+
+- Onedrive
+
+- Fix spurious "Couldn't decode error response: EOF" DEBUG (Nick Craig-Wood)
+
+- Pikpak
+
+- Fix login issue where token retrieval fails (wiserain)
+
+- S3
+
+- Fix rclone ignoring static credentials when env_auth=true (Nick Craig-Wood)
+
+
v1.68.0 - 2024-09-08
See commits
@@ -37090,6 +38614,7 @@ $ tree /tmp/c
- Implement SetModTime (Georg Welzel)
- Implement OpenWriterAt feature to enable multipart uploads (Georg Welzel)
+- Fix failing large file uploads (Georg Welzel)
- Pikpak
Forum
diff --git a/MANUAL.md b/MANUAL.md
index 11e520ba8..8d2f405fe 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Sep 08, 2024
+% Jan 12, 2025
# Rclone syncs your files to cloud storage
@@ -113,6 +113,7 @@ WebDAV or S3, that work out of the box.)
- Arvan Cloud Object Storage (AOS)
- Citrix ShareFile
- Cloudflare R2
+- Cloudinary
- DigitalOcean Spaces
- Digi Storage
- Dreamhost
@@ -129,6 +130,7 @@ WebDAV or S3, that work out of the box.)
- Hetzner Storage Box
- HiDrive
- HTTP
+- iCloud Drive
- ImageKit
- Internet Archive
- Jottacloud
@@ -156,6 +158,7 @@ WebDAV or S3, that work out of the box.)
- OpenStack Swift
- Oracle Cloud Storage Swift
- Oracle Object Storage
+- Outscale
- ownCloud
- pCloud
- Petabox
@@ -173,6 +176,7 @@ WebDAV or S3, that work out of the box.)
- Seafile
- Seagate Lyve Cloud
- SeaweedFS
+- Selectel
- SFTP
- Sia
- SMB / CIFS
@@ -874,6 +878,7 @@ See the following for detailed instructions for
* [Chunker](https://rclone.org/chunker/) - transparently splits large files for other remotes
* [Citrix ShareFile](https://rclone.org/sharefile/)
* [Compress](https://rclone.org/compress/)
+ * [Cloudinary](https://rclone.org/cloudinary/)
* [Combine](https://rclone.org/combine/)
* [Crypt](https://rclone.org/crypt/) - to encrypt other remotes
* [DigitalOcean Spaces](https://rclone.org/s3/#digitalocean-spaces)
@@ -891,6 +896,7 @@ See the following for detailed instructions for
* [Hetzner Storage Box](https://rclone.org/sftp/#hetzner-storage-box)
* [HiDrive](https://rclone.org/hidrive/)
* [HTTP](https://rclone.org/http/)
+ * [iCloud Drive](https://rclone.org/iclouddrive/)
* [Internet Archive](https://rclone.org/internetarchive/)
* [Jottacloud](https://rclone.org/jottacloud/)
* [Koofr](https://rclone.org/koofr/)
@@ -1117,6 +1123,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -1325,6 +1332,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -1495,6 +1503,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -2938,6 +2947,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -4181,6 +4191,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -5259,7 +5270,9 @@ When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
- # OS X
+ #... or on some systems
+ fusermount3 -u /path/to/local/mount
+ # OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@@ -5603,9 +5616,9 @@ Note that systemd runs mount units without any environment variables including
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
and you should provide `--config` and `--cache-dir` explicitly as absolute
paths via rclone arguments.
-Since mounting requires the `fusermount` program, rclone will use the fallback
-PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
-is present on this PATH.
+Since mounting requires the `fusermount` or `fusermount3` program,
+rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
+Please ensure that `fusermount`/`fusermount3` is present on this PATH.
## Rclone as Unix mount helper
@@ -5982,6 +5995,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example given this directory tree
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+[issue #8245](https://github.com/rclone/rclone/issues/8245) with duplicate
+files being created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
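+As an example, a mount exposing the symlinks stored on the remote could be
+started like this (the remote and mountpoint names are illustrative):
+
+```
+rclone mount --vfs-links remote: /path/to/mountpoint
+```
+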
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -6086,6 +6143,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for mount
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -6108,6 +6166,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -6225,6 +6284,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -6472,7 +6532,9 @@ When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
- # OS X
+ #... or on some systems
+ fusermount3 -u /path/to/local/mount
+ # OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@@ -6816,9 +6878,9 @@ Note that systemd runs mount units without any environment variables including
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
and you should provide `--config` and `--cache-dir` explicitly as absolute
paths via rclone arguments.
-Since mounting requires the `fusermount` program, rclone will use the fallback
-PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
-is present on this PATH.
+Since mounting requires the `fusermount` or `fusermount3` program,
+rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
+Please ensure that `fusermount`/`fusermount3` is present on this PATH.
## Rclone as Unix mount helper
@@ -7195,6 +7257,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example given this directory tree
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+[issue #8245](https://github.com/rclone/rclone/issues/8245) with duplicate
+files being created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -7300,6 +7406,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for nfsmount
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -7326,6 +7433,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -7599,8 +7707,7 @@ If you set `--rc-addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to `unix:///path/to/socket`
-or just by using an absolute path name. Note that unix sockets bypass the
-authentication - this is expected to be done with file system permissions.
+or just by using an absolute path name.
`--rc-addr` may be repeated to listen on multiple IPs/ports/sockets.
Socket activation, described further below, can also be used to accomplish the same.
@@ -7627,19 +7734,21 @@ https. You will need to supply the `--rc-cert` and `--rc-key` flags.
If you wish to do client side certificate validation then you will need to
supply `--rc-client-ca` also.
-`--rc-cert` should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. `--krc-ey` should be the PEM encoded
-private key and `--rc-client-ca` should be the PEM encoded client
-certificate authority certificate.
+`--rc-cert` must be set to the path of a file containing
+either a PEM encoded certificate, or a concatenation of that with the CA
+certificate. `--rc-key` must be set to the path of a file
+with the PEM encoded private key. If setting `--rc-client-ca`,
+it should be set to the path of a file with PEM encoded client certificate
+authority certificates.
`--rc-min-tls-version` is minimum TLS version that is acceptable. Valid
- values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
- "tls1.0").
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
## Socket activation
Instead of the listening addresses specified above, rclone will listen to all
-FDs passed by the service manager, if any (and ignore any arguments passed by --rc-addr`).
+FDs passed by the service manager, if any (and ignore any arguments passed
+by `--rc-addr`).
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
@@ -7734,7 +7843,7 @@ Flags to control the Remote Control API
```
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -8276,6 +8385,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example given this directory tree
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+[issue #8245](https://github.com/rclone/rclone/issues/8245) with duplicate
+files being created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -8370,6 +8523,7 @@ rclone serve dlna remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
--interface stringArray The interface to use for SSDP (repeat as necessary)
+ --link-perms FileMode Link permissions (default 666)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
@@ -8388,6 +8542,7 @@ rclone serve dlna remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -8785,6 +8940,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when a symlink is moved into a directory where a
+file of the same name already exists (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -8891,6 +9090,7 @@ rclone serve docker [flags]
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for docker
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -8916,6 +9116,7 @@ rclone serve docker [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -9296,6 +9497,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when a symlink is moved into a directory where a
+file of the same name already exists (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -9472,6 +9717,7 @@ rclone serve ftp remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for ftp
--key string TLS PEM Private key
+ --link-perms FileMode Link permissions (default 666)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -9492,6 +9738,7 @@ rclone serve ftp remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -9568,8 +9815,7 @@ If you set `--addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to `unix:///path/to/socket`
-or just by using an absolute path name. Note that unix sockets bypass the
-authentication - this is expected to be done with file system permissions.
+or just by using an absolute path name.
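+
+For example (a sketch; the socket path is arbitrary):
+
+```
+rclone serve http remote:path --addr unix:///tmp/rclone.sock
+```
+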
`--addr` may be repeated to listen on multiple IPs/ports/sockets.
Socket activation, described further below, can also be used to accomplish the same.
@@ -9596,19 +9842,21 @@ https. You will need to supply the `--cert` and `--key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
-`--cert` should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. `--key` should be the PEM encoded
-private key and `--client-ca` should be the PEM encoded client
-certificate authority certificate.
+`--cert` must be set to the path of a file containing
+either a PEM encoded certificate, or a concatenation of that with the CA
+certificate. `--key` must be set to the path of a file
+with the PEM encoded private key. If setting `--client-ca`,
+it should be set to the path of a file with PEM encoded client certificate
+authority certificates.
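+
+For example, a minimal sketch of serving over HTTPS (the file paths
+are placeholders):
+
+```
+rclone serve http remote:path --addr :8443 \
+    --cert /etc/rclone/server.pem --key /etc/rclone/server-key.pem
+```
+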
`--min-tls-version` is minimum TLS version that is acceptable. Valid
- values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
- "tls1.0").
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
## Socket activation
Instead of the listening addresses specified above, rclone will listen to all
-FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+FDs passed by the service manager, if any (and ignore any arguments passed
+by `--addr`).
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
@@ -9987,6 +10235,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when a symlink is moved into a directory where a
+file of the same name already exists (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -10158,15 +10450,16 @@ rclone serve http remote:path [flags]
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -10192,6 +10485,7 @@ rclone serve http remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -10287,7 +10581,9 @@ that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance. This sort of cache can't be backed
up and restored as the underlying handles will change. This is Linux
-only.
+only. It requires running rclone as root or with `CAP_DAC_READ_SEARCH`.
+You can grant rclone this extra permission by running
+`sudo setcap cap_dac_read_search+ep /path/to/rclone` on the rclone binary.
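+
+For example, a sketch of enabling the symlink handle cache (the cache
+directory is illustrative):
+
+```
+rclone serve nfs remote:path --nfs-cache-type symlink \
+    --nfs-cache-dir /var/cache/rclone-nfs
+```
+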
`--nfs-cache-handle-limit` controls the maximum number of cached NFS
handles stored by the caching handler. This should not be set too low
@@ -10616,6 +10912,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when a symlink is moved into a directory where a
+file of the same name already exists (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -10708,6 +11048,7 @@ rclone serve nfs remote:path [flags]
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for nfs
+ --link-perms FileMode Link permissions (default 666)
--nfs-cache-dir string The directory the NFS handle cache will use if set
--nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000)
--nfs-cache-type memory|disk|symlink Type of NFS handle cache to use (default memory)
@@ -10727,6 +11068,7 @@ rclone serve nfs remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -10873,8 +11215,7 @@ If you set `--addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to `unix:///path/to/socket`
-or just by using an absolute path name. Note that unix sockets bypass the
-authentication - this is expected to be done with file system permissions.
+or just by using an absolute path name.
`--addr` may be repeated to listen on multiple IPs/ports/sockets.
Socket activation, described further below, can also be used to accomplish the same.
@@ -10901,19 +11242,21 @@ https. You will need to supply the `--cert` and `--key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
-`--cert` should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. `--key` should be the PEM encoded
-private key and `--client-ca` should be the PEM encoded client
-certificate authority certificate.
+`--cert` must be set to the path of a file containing
+either a PEM encoded certificate, or a concatenation of that with the CA
+certificate. `--key` must be set to the path of a file
+with the PEM encoded private key. If setting `--client-ca`,
+it should be set to the path of a file with PEM encoded client certificate
+authority certificates.
`--min-tls-version` is minimum TLS version that is acceptable. Valid
- values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
- "tls1.0").
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
## Socket activation
Instead of the listening addresses specified above, rclone will listen to all
-FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+FDs passed by the service manager, if any (and ignore any arguments passed
+by `--addr`).
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
@@ -10965,11 +11308,11 @@ rclone serve restic remote:path [flags]
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
@@ -11168,8 +11511,7 @@ If you set `--addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to `unix:///path/to/socket`
-or just by using an absolute path name. Note that unix sockets bypass the
-authentication - this is expected to be done with file system permissions.
+or just by using an absolute path name.
`--addr` may be repeated to listen on multiple IPs/ports/sockets.
Socket activation, described further below, can also be used to accomplish the same.
@@ -11196,19 +11538,21 @@ https. You will need to supply the `--cert` and `--key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
-`--cert` should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. `--key` should be the PEM encoded
-private key and `--client-ca` should be the PEM encoded client
-certificate authority certificate.
+`--cert` must be set to the path of a file containing
+either a PEM encoded certificate, or a concatenation of that with the CA
+certificate. `--key` must be set to the path of a file
+with the PEM encoded private key. If setting `--client-ca`,
+it should be set to the path of a file with PEM encoded client certificate
+authority certificates.
`--min-tls-version` is minimum TLS version that is acceptable. Valid
- values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
- "tls1.0").
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
## Socket activation
Instead of the listening addresses specified above, rclone will listen to all
-FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+FDs passed by the service manager, if any (and ignore any arguments passed
+by `--addr`).
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
@@ -11524,6 +11868,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when a symlink is moved into a directory where a
+file of the same name already exists (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -11615,8 +12003,8 @@ rclone serve s3 remote:path [flags]
--auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
@@ -11625,7 +12013,8 @@ rclone serve s3 remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for s3
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -11651,6 +12040,7 @@ rclone serve s3 remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -12072,6 +12462,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when a symlink is moved into a directory where a
+file of the same name already exists (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -12248,6 +12682,7 @@ rclone serve sftp remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for sftp
--key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
+ --link-perms FileMode Link permissions (default 666)
--no-auth Allow connections with no authentication if set
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
@@ -12268,6 +12703,7 @@ rclone serve sftp remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -12387,8 +12823,7 @@ If you set `--addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to `unix:///path/to/socket`
-or just by using an absolute path name. Note that unix sockets bypass the
-authentication - this is expected to be done with file system permissions.
+or just by using an absolute path name.
`--addr` may be repeated to listen on multiple IPs/ports/sockets.
Socket activation, described further below, can also be used to accomplish the same.
@@ -12415,19 +12850,21 @@ https. You will need to supply the `--cert` and `--key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
-`--cert` should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. `--key` should be the PEM encoded
-private key and `--client-ca` should be the PEM encoded client
-certificate authority certificate.
+`--cert` must be set to the path of a file containing
+either a PEM encoded certificate, or a concatenation of that with the CA
+certificate. `--key` must be set to the path of a file
+with the PEM encoded private key. If setting `--client-ca`,
+it should be set to the path of a file with PEM encoded client certificate
+authority certificates.
`--min-tls-version` is minimum TLS version that is acceptable. Valid
- values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
- "tls1.0").
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
## Socket activation
Instead of the listening addresses specified above, rclone will listen to all
-FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+FDs passed by the service manager, if any (and ignore any arguments passed
+by `--addr`).
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
@@ -12806,6 +13243,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when a symlink is moved into a directory where a
+file of the same name already exists (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -12977,8 +13458,8 @@ rclone serve webdav remote:path [flags]
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
@@ -12987,7 +13468,8 @@ rclone serve webdav remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -13013,6 +13495,7 @@ rclone serve webdav remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -13271,6 +13754,7 @@ rclone test makefiles [flags]
--chargen Fill files with a ASCII chargen pattern
--files int Number of files to create (default 1000)
--files-per-directory int Average number of files per directory (default 10)
+ --flat If set create all files in the root directory
-h, --help help for makefiles
--max-depth int Maximum depth of directory hierarchy (default 10)
--max-file-size SizeSuffix Maximum size of files to create (default 100)
@@ -14768,6 +15252,22 @@ The options mean
During rmdirs it will not remove root directory, even if it's empty.
+### --links / -l
+
+Normally rclone will ignore symlinks or junction points (which behave
+like symlinks under Windows).
+
+If you supply this flag then rclone will copy symbolic links from any
+supported backend and store them as text files with a `.rclonelink`
+suffix in the destination.
+
+The text file will contain the target of the symbolic link.
+
+The `--links` / `-l` flag enables this feature for all supported
+backends and the VFS. There are individual flags to enable it just for
+the VFS (`--vfs-links`) or just for the local backend (`--local-links`)
+if required.
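+
+As a quick sketch of the round trip (the remote name and paths are
+illustrative):
+
+```
+$ ln -s /etc/hosts hosts-link
+$ rclone copy -l . remote:backup       # uploads hosts-link.rclonelink
+$ rclone copy -l remote:backup restore # downloads it back as a symlink
+```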
+
### --log-file=FILE ###
Log all of rclone's output to FILE. This is not active by default.
@@ -16211,9 +16711,9 @@ messages may not be valid after the retry. If rclone has done a retry
it will log a high priority message if the retry was successful.
### List of exit codes ###
- * `0` - success
- * `1` - Syntax or usage error
- * `2` - Error not otherwise categorised
+ * `0` - Success
+ * `1` - Error not otherwise categorised
+ * `2` - Syntax or usage error
* `3` - Directory not found
* `4` - File not found
* `5` - Temporary error (one that more retries might fix) (Retry errors)
@@ -16254,6 +16754,22 @@ so they take exactly the same form.
The options set by environment variables can be seen with the `-vv` flag, e.g. `rclone version -vv`.
+Options that can appear multiple times (type `stringArray`) are
+treated slightly differently, as environment variables can only be
+defined once. In order to allow a simple mechanism for adding one or
+many items, the input is treated as a [CSV encoded](https://godoc.org/encoding/csv)
+string. For example:
+
+| Environment Variable | Equivalent options |
+|----------------------|--------------------|
+| `RCLONE_EXCLUDE="*.jpg"` | `--exclude "*.jpg"` |
+| `RCLONE_EXCLUDE="*.jpg,*.png"` | `--exclude "*.jpg"` `--exclude "*.png"` |
+| `RCLONE_EXCLUDE='"*.jpg","*.png"'` | `--exclude "*.jpg"` `--exclude "*.png"` |
+| `RCLONE_EXCLUDE='"/directory with comma , in it /**"'` | `--exclude "/directory with comma , in it /**"` |
+
+If `stringArray` options are defined as environment variables **and**
+options on the command line then all the values will be used.
+
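+For example, a sketch combining both sources (the remote path is
+illustrative):
+
+```
+$ export RCLONE_EXCLUDE="*.jpg,*.png"
+$ rclone lsf --exclude "*.gif" remote:path
+# equivalent to --exclude "*.jpg" --exclude "*.png" --exclude "*.gif"
+```
+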
### Config file ###
You can set defaults for values in the config file on an individual
@@ -16910,10 +17426,10 @@ flags with `--exclude`, `--exclude-from`, `--filter` or `--filter-from`,
you must use include rules for all the files you want in the include
statement. For more flexibility use the `--filter-from` flag.
-`--exclude-from` has no effect when combined with `--files-from` or
+`--include-from` has no effect when combined with `--files-from` or
`--files-from-raw` flags.
-`--exclude-from` followed by `-` reads filter rules from standard input.
+`--include-from` followed by `-` reads filter rules from standard input.
### `--filter` - Add a file-filtering rule
@@ -16950,6 +17466,8 @@ processed in.
Arrange the order of filter rules with the most restrictive first and
work down.
+Lines starting with # or ; are ignored, and can be used to write
+comments. Inline comments are not supported. _Use `-vv --dump filters`
+to see how they appear in the final regexp._
+
E.g. for `filter-file.txt`:
# a sample filter rule file
@@ -16957,6 +17475,7 @@ E.g. for `filter-file.txt`:
+ *.jpg
+ *.png
+ file2.avi
+ - /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
@@ -17004,6 +17523,8 @@ Other filter flags (`--include`, `--include-from`, `--exclude`,
trailing whitespace is stripped from the input lines. Lines starting
with `#` or `;` are ignored.
+`--files-from` followed by `-` reads the list of files from standard input.
+
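+For example, a sketch piping a list from GNU find (the paths are
+illustrative):
+
+```
+find /home/user/docs -name '*.pdf' -printf '%P\n' | \
+    rclone copy --files-from - /home/user/docs remote:backup
+```
+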
Rclone commands with a `--files-from` flag traverse the remote,
treating the names in `--files-from` as a set of filters.
@@ -17366,29 +17887,31 @@ If you just want to run a remote control then see the [rcd](https://rclone.org/c
### --rc
-Flag to start the http server listen on remote requests
+Flag to start the HTTP server to listen for remote requests.
### --rc-addr=IP
-IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+IPaddress:Port or :Port to bind the server to (default "localhost:5572").
### --rc-cert=KEY
-SSL PEM key (concatenation of certificate and CA certificate)
+
+TLS PEM certificate file (concatenation of certificate and CA certificate).
### --rc-client-ca=PATH
-Client certificate authority to verify clients with
+
+Client certificate authority to verify clients with.
### --rc-htpasswd=PATH
-htpasswd file - if not provided no authentication is done
+htpasswd file - if not provided no authentication is done.
### --rc-key=PATH
-SSL PEM Private key
+TLS PEM private key file.
### --rc-max-header-bytes=VALUE
-Maximum size of request header (default 4096)
+Maximum size of request header (default 4096).
### --rc-min-tls-version=VALUE
@@ -17405,15 +17928,15 @@ Password for authentication.
### --rc-realm=VALUE
-Realm for authentication (default "rclone")
+Realm for authentication (default "rclone").
### --rc-server-read-timeout=DURATION
-Timeout for server reading data (default 1h0m0s)
+Timeout for server reading data (default 1h0m0s).
### --rc-server-write-timeout=DURATION
-Timeout for server writing data (default 1h0m0s)
+Timeout for server writing data (default 1h0m0s).
### --rc-serve
@@ -17526,7 +18049,7 @@ User-specified template.
Rclone itself implements the remote control protocol in its `rclone
rc` command.
-You can use it like this
+You can use it like this:
```
$ rclone rc rc/noop param1=one param2=two
@@ -17536,8 +18059,23 @@ $ rclone rc rc/noop param1=one param2=two
}
```
-Run `rclone rc` on its own to see the help for the installed remote
-control commands.
+If the remote is running on a different URL than the default
+`http://localhost:5572/`, use the `--url` option to specify it:
+
+```
+$ rclone rc --url http://some.remote:1234/ rc/noop
+```
+
+Or, if the remote is listening on a Unix socket, use the `--unix-socket` option
+instead:
+
+```
+$ rclone rc --unix-socket /tmp/rclone.sock rc/noop
+```
+
+Run `rclone rc` on its own, without any commands, to see the help for the
+installed remote control commands. Note that this also needs to connect to the
+remote server.
## JSON input
@@ -19435,6 +19973,7 @@ This takes the following parameters
- `fs` - select the VFS in use (optional)
- `id` - a numeric ID as returned from `vfs/queue`
- `expiry` - a new expiry time as floating point seconds
+- `relative` - if set, expiry is to be treated as relative to the current expiry (optional, boolean)
This returns an empty result on success, or an error.
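+
+For example, to extend an item's expiry by 60 seconds relative to its
+current value (a sketch, assuming this documents the
+`vfs/queue-set-expiry` call and an `id` returned from `vfs/queue`):
+
+```
+rclone rc vfs/queue-set-expiry fs=remote: id=123 expiry=60 relative=true
+```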
@@ -19743,6 +20282,7 @@ Here is an overview of the major features of each cloud storage system.
| Backblaze B2 | SHA1 | R/W | No | No | R/W | - |
| Box | SHA1 | R/W | Yes | No | - | - |
| Citrix ShareFile | MD5 | R/W | Yes | No | - | - |
+| Cloudinary | MD5 | R | No | Yes | - | - |
| Dropbox | DBHASH ¹ | R | Yes | No | - | - |
| Enterprise File Fabric | - | R/W | Yes | No | R/W | - |
| Files.com | MD5, CRC32 | DR/W | Yes | No | R | - |
@@ -19754,6 +20294,7 @@ Here is an overview of the major features of each cloud storage system.
| HDFS | - | R/W | No | No | - | - |
| HiDrive | HiDrive ¹² | R/W | No | No | - | - |
| HTTP | - | R | No | No | R | - |
+| iCloud Drive | - | R | No | No | - | - |
| Internet Archive | MD5, SHA1, CRC32 | R/W ¹¹ | No | No | - | RWU |
| Jottacloud | MD5 | R/W | Yes | No | R | RW |
| Koofr | MD5 | - | Yes | No | - | - |
@@ -19767,7 +20308,7 @@ Here is an overview of the major features of each cloud storage system.
| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
| Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
-| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - |
+| pCloud | MD5, SHA1 ⁷ | R/W | No | No | W | - |
| PikPak | MD5 | R | No | No | R | - |
| Pixeldrain | SHA256 | R/W | No | No | R | RW |
| premiumize.me | - | - | Yes | No | R | - |
@@ -20222,16 +20763,18 @@ upon backend-specific capabilities.
| Box | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes |
| Citrix ShareFile | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
+| Cloudinary | No | No | No | No | No | No | Yes | No | No | No | No |
| Dropbox | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
| Enterprise File Fabric | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes |
| Files.com | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | No | Yes |
| FTP | No | No | Yes | Yes | No | No | Yes | No | No | No | Yes |
| Gofile | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
-| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No | No | No | No |
+| Google Cloud Storage | Yes | Yes | No | No | No | No | Yes | No | No | No | No |
| Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
| Google Photos | No | No | No | No | No | No | No | No | No | No | No |
| HDFS | Yes | No | Yes | Yes | No | No | Yes | No | No | Yes | Yes |
| HiDrive | Yes | Yes | Yes | Yes | No | No | Yes | No | No | No | Yes |
| HTTP | No | No | No | No | No | No | No | No | No | No | Yes |
+| iCloud Drive | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
| ImageKit | Yes | Yes | Yes | No | No | No | No | No | No | No | Yes |
| Internet Archive | No | Yes | No | No | Yes | Yes | No | No | Yes | Yes | No |
| Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
@@ -20242,7 +20785,7 @@ upon backend-specific capabilities.
| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | Yes | No | No | No |
| Microsoft Azure Files Storage | No | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes |
| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | Yes ⁵ | No | No | Yes | Yes | Yes |
-| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
+| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
| OpenStack Swift | Yes ¹ | Yes | No | No | No | Yes | Yes | No | No | Yes | No |
| Oracle Object Storage | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | No |
| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
@@ -20386,6 +20929,7 @@ Flags for anything which can copy a file.
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -20474,7 +21018,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
```
@@ -20623,7 +21167,7 @@ Flags to control the Remote Control API.
```
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -20659,7 +21203,7 @@ Flags to control the Remote Control API.
Flags to control the Metrics HTTP endpoint..
```
- --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
+ --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -20699,6 +21243,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
+ --azureblob-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -20716,6 +21261,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-tenant string ID of the service principal's tenant. Also called its directory ID
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
+ --azureblob-use-az Use Azure CLI tool az for authentication
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -20765,6 +21311,7 @@ Backend-only flags (these can be set in the config file also).
--box-auth-url string Auth server URL
--box-box-config-file string Box App config.json location
--box-box-sub-type string (default "user")
+ --box-client-credentials Use client credentials OAuth flow
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
@@ -20803,6 +21350,14 @@ Backend-only flags (these can be set in the config file also).
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
+ --cloudinary-api-key string Cloudinary API Key
+ --cloudinary-api-secret string Cloudinary API Secret
+ --cloudinary-cloud-name string Cloudinary Environment Name
+ --cloudinary-description string Description of the remote
+ --cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
+ --cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--combine-description string Description of the remote
--combine-upstreams SpaceSepList Upstreams for combining
--compress-description string Description of the remote
@@ -20829,6 +21384,7 @@ Backend-only flags (these can be set in the config file also).
--drive-auth-owner-only Only consider files owned by the authenticated user
--drive-auth-url string Auth server URL
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
+ --drive-client-credentials Use client credentials OAuth flow
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
@@ -20879,6 +21435,7 @@ Backend-only flags (these can be set in the config file also).
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
+ --dropbox-client-credentials Use client credentials OAuth flow
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
@@ -20925,6 +21482,7 @@ Backend-only flags (these can be set in the config file also).
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
+ --ftp-no-check-upload Don't check the upload is OK
--ftp-pass string FTP password (obscured)
--ftp-port int FTP port number (default 21)
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
@@ -20933,10 +21491,12 @@ Backend-only flags (these can be set in the config file also).
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username (default "$USER")
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
+ --gcs-access-token string Short-lived access token
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
+ --gcs-client-credentials Use client credentials OAuth flow
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
@@ -20965,11 +21525,13 @@ Backend-only flags (these can be set in the config file also).
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ --gphotos-client-credentials Use client credentials OAuth flow
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-description string Description of the remote
--gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
+ --gphotos-proxy string Use the gphotosdl proxy for downloading the full resolution images
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
@@ -20988,6 +21550,7 @@ Backend-only flags (these can be set in the config file also).
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
--hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi)
+ --hidrive-client-credentials Use client credentials OAuth flow
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-description string Description of the remote
@@ -21007,6 +21570,11 @@ Backend-only flags (these can be set in the config file also).
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
+ --iclouddrive-apple-id string Apple ID
+ --iclouddrive-client-id string Client id (default "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d")
+ --iclouddrive-description string Description of the remote
+ --iclouddrive-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --iclouddrive-password string Password (obscured)
--imagekit-description string Description of the remote
--imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
--imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
@@ -21024,6 +21592,7 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
+ --jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
--jottacloud-description string Description of the remote
@@ -21045,11 +21614,11 @@ Backend-only flags (these can be set in the config file also).
--koofr-user string Your user name
--linkbox-description string Description of the remote
--linkbox-token string Token from https://www.linkbox.to/admin/account
- -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-description string Description of the remote
--local-encoding Encoding The encoding for the backend (default Slash,Dot)
+ --local-links Translate symlinks to/from regular files with a '.rclonelink' extension for the local backend
--local-no-check-updated Don't check to see if the files change during upload
--local-no-clone Disable reflink cloning for server-side copies
--local-no-preallocate Disable preallocation of disk space for transferred files
@@ -21061,6 +21630,7 @@ Backend-only flags (these can be set in the config file also).
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-auth-url string Auth server URL
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
+ --mailru-client-credentials Use client credentials OAuth flow
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
--mailru-description string Description of the remote
@@ -21091,6 +21661,7 @@ Backend-only flags (these can be set in the config file also).
--onedrive-auth-url string Auth server URL
--onedrive-av-override Allows download of files the server thinks has a virus
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
+ --onedrive-client-credentials Use client credentials OAuth flow
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-delta If set rclone will use delta listing to implement recursive listings
@@ -21110,11 +21681,12 @@ Backend-only flags (these can be set in the config file also).
--onedrive-region string Choose national cloud region for OneDrive (default "global")
--onedrive-root-folder-id string ID of the root folder
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead
+ --onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
- --oos-compartment string Object storage compartment OCID
+ --oos-compartment string Specify compartment OCID, if you need to list buckets
--oos-config-file string Path to OCI config file (default "~/.oci/config")
--oos-config-profile string Profile name inside the oci config file (default "Default")
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
@@ -21143,6 +21715,7 @@ Backend-only flags (these can be set in the config file also).
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
+ --pcloud-client-credentials Use client credentials OAuth flow
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-description string Description of the remote
@@ -21153,26 +21726,25 @@ Backend-only flags (these can be set in the config file also).
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
- --pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
- --pikpak-client-id string OAuth Client Id
- --pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
+ --pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
+ --pikpak-no-media-link Use original file links instead of media links
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
- --pikpak-token string OAuth Access Token as a JSON blob
- --pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
+ --pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
--pixeldrain-description string Description of the remote
--pixeldrain-root-folder-id string Root of the filesystem to use (default "me")
--premiumizeme-auth-url string Auth server URL
+ --premiumizeme-client-credentials Use client credentials OAuth flow
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
--premiumizeme-description string Description of the remote
@@ -21190,6 +21762,7 @@ Backend-only flags (these can be set in the config file also).
--protondrive-replace-existing-draft Create a new revision when filename conflict is detected
--protondrive-username string The username of your proton account
--putio-auth-url string Auth server URL
+ --putio-client-credentials Use client credentials OAuth flow
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
--putio-description string Description of the remote
@@ -21223,6 +21796,7 @@ Backend-only flags (these can be set in the config file also).
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
--s3-description string Description of the remote
+ --s3-directory-bucket Set to use AWS Directory Buckets
--s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
@@ -21304,6 +21878,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH shell commands
--sftp-port int SSH port number (default 22)
+ --sftp-pubkey string SSH public certificate for public certificate based authentication
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
--sftp-set-env SpaceSepList Environment variables to pass to sftp and commands
@@ -21319,6 +21894,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-user string SSH username (default "$USER")
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
+ --sharefile-client-credentials Use client credentials OAuth flow
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
--sharefile-description string Description of the remote
@@ -21408,6 +21984,7 @@ Backend-only flags (these can be set in the config file also).
--uptobox-description string Description of the remote
--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
+ --webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-description string Description of the remote
@@ -21423,6 +22000,7 @@ Backend-only flags (these can be set in the config file also).
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-auth-url string Auth server URL
+ --yandex-client-credentials Use client credentials OAuth flow
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-description string Description of the remote
@@ -21432,6 +22010,7 @@ Backend-only flags (these can be set in the config file also).
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
+ --zoho-client-credentials Use client credentials OAuth flow
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-description string Description of the remote
@@ -21439,6 +22018,7 @@ Backend-only flags (these can be set in the config file also).
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
+ --zoho-upload-cutoff SizeSuffix Cutoff for switching to large file upload api (>= 10 MiB) (default 10Mi)
```
# Docker Volume Plugin
@@ -21646,7 +22226,7 @@ but is arguably easier to parameterize in scripts.
The `path` part is optional.
[Mount and VFS options](https://rclone.org/commands/rclone_serve_docker/#options)
-as well as [backend parameters](https://rclone.org/flags/#backend-flags) are named
+as well as [backend parameters](https://rclone.org/flags/#backend) are named
like their twin command-line flags without the `--` CLI prefix.
Optionally you can use underscores instead of dashes in option names.
For example, `--vfs-cache-mode full` becomes
@@ -21974,6 +22554,13 @@ sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docke
```
though this is rarely needed.
+If the plugin fails to work properly, and only as a last resort after you have tried diagnosing with the above methods, you can try clearing the state of the plugin. **Note that all existing rclone docker volumes will probably have to be recreated.** This might be needed because a reinstall doesn't clean up existing state files, to allow for easy restoration, as stated above.
+```
+docker plugin disable rclone # disable the plugin to ensure no interference
+sudo rm /var/lib/docker-plugins/rclone/cache/docker-plugin.state # removing the plugin state
+docker plugin enable rclone # re-enable the plugin afterward
+```
+
## Caveats
Finally I'd like to mention a _caveat with updating volume settings_.
@@ -22960,12 +23547,15 @@ that while concurrent bisync runs are allowed, _be very cautious_
that there is no overlap in the trees being synched between concurrent runs,
lest there be replicated files, deleted files and general mayhem.
-### Return codes
+### Exit codes
`rclone bisync` returns the following codes to calling program:
- `0` on a successful run,
- `1` for a non-critical failing run (a rerun may be successful),
-- `2` for a critically aborted run (requires a `--resync` to recover).
+- `2` on syntax or usage error,
+- `7` for a critically aborted run (requires a `--resync` to recover).
+
+See also the section about [exit codes](https://rclone.org/docs/#exit-code) in main docs.
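+
+As an illustration, here is a minimal wrapper sketch (local path and
+remote name are placeholders) that reacts to these exit codes:
+
+```
+#!/usr/bin/env bash
+rclone bisync /local/path remote:path
+case $? in
+  0) echo "bisync succeeded" ;;
+  1) echo "non-critical failure - a rerun may succeed"
+     rclone bisync /local/path remote:path ;;
+  7) echo "critical abort - recovering with --resync"
+     rclone bisync /local/path remote:path --resync ;;
+  *) echo "other failure - see the exit codes section in the main docs" ;;
+esac
+```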
### Graceful Shutdown
@@ -24367,6 +24957,7 @@ The S3 backend can be used with a number of different providers:
- Linode Object Storage
- Magalu Object Storage
- Minio
+- Outscale
- Petabox
- Qiniu Cloud Object Storage (Kodo)
- RackCorp Object Storage
@@ -24374,6 +24965,7 @@ The S3 backend can be used with a number of different providers:
- Scaleway
- Seagate Lyve Cloud
- SeaweedFS
+- Selectel
- StackPath
- Storj
- Synology C2 Object Storage
@@ -24582,7 +25174,7 @@ Choose a number from below, or type in your own value
\ "STANDARD_IA"
5 / One Zone Infrequent Access storage class
\ "ONEZONE_IA"
- 6 / Glacier storage class
+ 6 / Glacier Flexible Retrieval storage class
\ "GLACIER"
7 / Glacier Deep Archive storage class
\ "DEEP_ARCHIVE"
@@ -24741,6 +25333,115 @@ there for more details.
Setting this flag increases the chance for undetected upload failures.
+### Increasing performance
+
+#### Using server-side copy
+
+If you are copying objects between S3 buckets in the same region, you should
+use server-side copy.
+This is much faster than downloading and re-uploading the objects, as no data is transferred.
+
+For rclone to use server-side copy, you must use the same remote for the source and destination.
+
+ rclone copy s3:source-bucket s3:destination-bucket
+
+When using server-side copy, the performance is limited by the rate at which rclone issues
+API requests to S3.
+See below for how to increase the number of API requests rclone makes.
+
+#### Increasing the rate of API requests
+
+You can increase the rate of API requests to S3 by increasing the
+parallelism using the `--transfers` and `--checkers` options.
+
+Rclone uses very conservative defaults for these settings, as not all providers support high rates of requests.
+Depending on your provider, you can significantly increase the number of transfers and checkers.
+
+For example, with AWS S3 you can increase the number of checkers to values like 200.
+If you are doing a server-side copy, you can also increase the number of transfers to 200.
+
+ rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
+
+You will need to experiment with these values to find the optimal settings for your setup.
+
+
+### Data integrity
+
+Rclone does its best to verify every part of an upload or download to
+the s3 provider using various hashes.
+
+Every HTTP transaction to/from the provider has a
+`X-Amz-Content-Sha256` or a `Content-Md5` header to guard against
+corruption of the HTTP body. The HTTP Header is protected by the
+signature passed in the `Authorization` header.
+
+All communication with the provider is done over HTTPS for encryption
+and additional error protection.
+
+#### Single part uploads
+
+- Rclone uploads single part uploads with a `Content-Md5` using the
+ MD5 hash read from the source. The provider checks this is correct
+ on receipt of the data.
+
+- Rclone then does a HEAD request (disable with `--s3-no-head`) to
+  read back the `ETag`, which is the MD5 of the file, and checks it
+  against what it sent.
+
+Note that if the source does not have an MD5 then the single part
+uploads will not have hash protection. In this case it is recommended
+to use `--s3-upload-cutoff 0` so all files are uploaded as multipart
+uploads.
+
+#### Multipart uploads
+
+For files above `--s3-upload-cutoff` rclone splits the file into
+multiple parts for upload.
+
+- Each part is protected with both an `X-Amz-Content-Sha256` and a
+ `Content-Md5`
+
+When rclone has finished the upload of all the parts it then completes
+the upload by sending:
+
+- The MD5 hash of each part
+- The number of parts
+- This info is all protected with a `X-Amz-Content-Sha256`
+
+The provider checks the MD5 for all the parts it has received against
+what rclone sends and if it is good it returns OK.
+
+Rclone then does a HEAD request (disable with `--s3-no-head`) and
+checks the ETag is what it expects (in this case it should be the MD5
+sum of all the MD5 sums of all the parts with the number of parts on
+the end).
+
+If the source has an MD5 sum then rclone will attach it as the
+`X-Amz-Meta-Md5chksum` metadata, since the `ETag` of a multipart
+upload can't easily be checked against the file as the chunk size
+must be known in order to calculate it.
+
+#### Downloads
+
+Rclone checks the MD5 hash of the data downloaded against either the
+ETag or the `X-Amz-Meta-Md5chksum` metadata (if present) which rclone
+uploads with multipart uploads.
+
+#### Further checking
+
+At each stage rclone and the provider are sending and checking hashes of
+**everything**. Rclone deliberately HEADs each object after upload to
+check it arrived safely for extra security. (You can disable this with
+`--s3-no-head`).
+
+If you require further assurance that your data is intact you can use
+`rclone check` to check the hashes locally vs the remote.
+
+And if you are feeling ultimately paranoid use `rclone check --download`
+which will download the files and check them against the local copies.
+(Note that this doesn't use disk to do this - it streams them in
+memory).
+
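+As an illustration (bucket and paths are placeholders), the two checks
+described above look like this:
+
+    # compare hashes of the local files against the remote
+    rclone check /path/to/files s3:bucket/path
+
+    # download the remote files and compare the actual bytes
+    rclone check --download /path/to/files s3:bucket/path
+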
### Versions
When bucket versioning is enabled (this can be done with rclone with
@@ -25018,7 +25719,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
### Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-provider
@@ -25071,6 +25772,8 @@ Properties:
- Minio Object Storage
- "Netease"
- Netease Object Storage (NOS)
+ - "Outscale"
+ - OUTSCALE Object Storage (OOS)
- "Petabox"
- Petabox Object Storage
- "RackCorp"
@@ -25081,6 +25784,8 @@ Properties:
- Scaleway Object Storage
- "SeaweedFS"
- SeaweedFS S3
+ - "Selectel"
+ - Selectel Object Storage
- "StackPath"
- StackPath Object Storage
- "Storj"
@@ -25332,7 +26037,7 @@ Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
-- Provider: !Storj,Synology,Cloudflare
+- Provider: !Storj,Selectel,Synology,Cloudflare
- Type: string
- Required: false
- Examples:
@@ -25436,7 +26141,7 @@ Properties:
- "ONEZONE_IA"
- One Zone Infrequent Access storage class
- "GLACIER"
- - Glacier storage class
+ - Glacier Flexible Retrieval storage class
- "DEEP_ARCHIVE"
- Glacier Deep Archive storage class
- "INTELLIGENT_TIERING"
@@ -25446,7 +26151,7 @@ Properties:
### Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-bucket-acl
@@ -26280,6 +26985,41 @@ Properties:
- Type: Tristate
- Default: unset
+#### --s3-directory-bucket
+
+Set to use AWS Directory Buckets
+
+If you are using an AWS Directory Bucket then set this flag.
+
+This will ensure no `Content-Md5` headers are sent and ensure `ETag`
+headers are not interpreted as MD5 sums. `X-Amz-Meta-Md5chksum` will
+be set on all objects whether single or multipart uploaded.
+
+This also sets `no_check_bucket = true`.
+
+Note that Directory Buckets do not support:
+
+- Versioning
+- `Content-Encoding: gzip`
+
+Rclone limitations with Directory Buckets:
+
+- rclone does not support creating Directory Buckets with `rclone mkdir`
+- ... or removing them with `rclone rmdir` yet
+- Directory Buckets do not appear when doing `rclone lsf` at the top level.
+- Rclone can't remove auto created directories yet. In theory this should
+ work with `directory_markers = true` but it doesn't.
+- Directories don't seem to appear in recursive (ListR) listings.
+
+
+Properties:
+
+- Config: directory_bucket
+- Env Var: RCLONE_S3_DIRECTORY_BUCKET
+- Provider: AWS
+- Type: bool
+- Default: false
+
#### --s3-sdk-log-mode
Set to debug the SDK
@@ -26602,6 +27342,21 @@ You can also do this entirely on the command line
This is the provider used as main example and described in the [configuration](#configuration) section above.
+### AWS Directory Buckets
+
+From rclone v1.69 [Directory Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html)
+are supported.
+
+You will need to set the `directory_bucket = true` config parameter
+or use `--s3-directory-bucket`.
+
+Note that rclone cannot yet:
+
+- Create directory buckets
+- List directory buckets
+
+See [the --s3-directory-bucket flag](#s3-directory-bucket) for more info.
+
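+As a sketch, a config for such a remote might look like this (remote
+name and region are illustrative):
+
+```
+[dirbuckets]
+type = s3
+provider = AWS
+env_auth = true
+region = us-east-1
+directory_bucket = true
+```
+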
### AWS Snowball Edge
[AWS Snowball](https://aws.amazon.com/snowball/) is a hardware
@@ -26794,6 +27549,9 @@ Note that Cloudflare decompresses files uploaded with
does. If this is causing a problem then upload the files with
`--header-upload "Cache-Control: no-transform"`
+A consequence of this is that `Content-Encoding: gzip` will never
+appear in the metadata on Cloudflare.
+
### Dreamhost
Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
@@ -27518,6 +28276,168 @@ So once set up, for example, to copy files into a bucket
rclone copy /path/to/files minio:bucket
```
+### Outscale
+
+[OUTSCALE Object Storage (OOS)](https://en.outscale.com/storage/outscale-object-storage/) is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, a brand of Dassault Systèmes. For more information about OOS, see the [official documentation](https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html).
+
+Here is an example of an OOS configuration that you can paste into your rclone configuration file:
+
+```
+[outscale]
+type = s3
+provider = Outscale
+env_auth = false
+access_key_id = ABCDEFGHIJ0123456789
+secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+region = eu-west-2
+endpoint = oos.eu-west-2.outscale.com
+acl = private
+```
+
+You can also run `rclone config` to go through the interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+```
+
+```
+Enter name for new remote.
+name> outscale
+```
+
+```
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+ X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others
+ \ (s3)
+[snip]
+Storage> outscale
+```
+
+```
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / OUTSCALE Object Storage (OOS)
+ \ (Outscale)
+[snip]
+provider> Outscale
+```
+
+```
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth>
+```
+
+```
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ABCDEFGHIJ0123456789
+```
+
+```
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+```
+
+```
+Option region.
+Region where your bucket will be created and your data stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Paris, France
+ \ (eu-west-2)
+ 2 / New Jersey, USA
+ \ (us-east-2)
+ 3 / California, USA
+ \ (us-west-1)
+ 4 / SecNumCloud, Paris, France
+ \ (cloudgouv-eu-west-1)
+ 5 / Tokyo, Japan
+ \ (ap-northeast-1)
+region> 1
+```
+
+```
+Option endpoint.
+Endpoint for S3 API.
+Required when using an S3 clone.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Outscale EU West 2 (Paris)
+ \ (oos.eu-west-2.outscale.com)
+ 2 / Outscale US East 2 (New Jersey)
+ \ (oos.us-east-2.outscale.com)
+ 3 / Outscale US West 1 (California)
+ \ (oos.us-west-1.outscale.com)
+ 4 / Outscale SecNumCloud (Paris)
+ \ (oos.cloudgouv-eu-west-1.outscale.com)
+ 5 / Outscale AP Northeast 1 (Japan)
+ \ (oos.ap-northeast-1.outscale.com)
+endpoint> 1
+```
+
+```
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn't copy the ACL from the source but rather writes a fresh one.
+If the acl is an empty string then no X-Amz-Acl: header is added and
+the default (private) will be used.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+[snip]
+acl> 1
+```
+
+```
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+```
+
+```
+Configuration complete.
+Options:
+- type: s3
+- provider: Outscale
+- access_key_id: ABCDEFGHIJ0123456789
+- secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+- endpoint: oos.eu-west-2.outscale.com
+Keep this "outscale" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
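+So once set up, for example, to copy files into a bucket (bucket name
+illustrative):
+
+```
+rclone copy /path/to/files outscale:my-bucket
+```
+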
### Qiniu Cloud Object Storage (Kodo) {#qiniu}
[Qiniu Cloud Object Storage (Kodo)](https://www.qiniu.com/en/products/kodo) is an object storage service built on Qiniu's independently developed core technology, whose market-leading position has been proven by repeated customer experience. Kodo can be widely applied to mass data management.
@@ -27784,8 +28704,8 @@ chunk_size = 5M
copy_cutoff = 5M
```
-[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
-So you can configure your remote with the `storage_class = GLACIER` option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
+[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
+So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
### Seagate Lyve Cloud {#lyve}
@@ -27980,6 +28900,125 @@ So once set up, for example to copy files into a bucket
rclone copy /path/to/files seaweedfs_s3:foo
```
+### Selectel
+
+[Selectel Cloud Storage](https://selectel.ru/services/cloud/storage/)
+is an S3 compatible storage system which features triple redundancy
+storage, automatic scaling, high availability and a comprehensive IAM
+system.
+
+Selectel have a section on their website for [configuring
+rclone](https://docs.selectel.ru/en/cloud/object-storage/tools/rclone/)
+which shows how to make the right API keys.
+
+From rclone v1.69 Selectel is a supported provider - please choose the
+`Selectel` provider type.
+
+Note that you should use "vHosted" access for the buckets (which is
+the recommended default), not "path style".
+
+You can use `rclone config` to make a new provider like this
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> selectel
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ...
+ \ (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / Selectel Object Storage
+ \ (Selectel)
+[snip]
+provider> Selectel
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth> 1
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option region.
+Region where your data stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / St. Petersburg
+ \ (ru-1)
+region> 1
+
+Option endpoint.
+Endpoint for Selectel Object Storage.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Saint Petersburg
+ \ (s3.ru-1.storage.selcloud.ru)
+endpoint> 1
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: Selectel
+- access_key_id: ACCESS_KEY
+- secret_access_key: SECRET_ACCESS_KEY
+- region: ru-1
+- endpoint: s3.ru-1.storage.selcloud.ru
+Keep this "selectel" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+And your config should end up looking like this:
+
+```
+[selectel]
+type = s3
+provider = Selectel
+access_key_id = ACCESS_KEY
+secret_access_key = SECRET_ACCESS_KEY
+region = ru-1
+endpoint = s3.ru-1.storage.selcloud.ru
+```
+
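+So once set up, for example, to make a bucket and copy files into it
+(bucket name illustrative):
+
+```
+rclone mkdir selectel:bucket
+rclone copy /path/to/files selectel:bucket
+```
+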
### Wasabi
[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
@@ -30286,6 +31325,7 @@ This will dump something like this showing the lifecycle rules.
{
"daysFromHidingToDeleting": 1,
"daysFromUploadingToHiding": null,
+ "daysFromStartingToCancelingUnfinishedLargeFiles": null,
"fileNamePrefix": ""
}
]
@@ -30315,6 +31355,7 @@ See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
Options:
- "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off.
+- "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any unfinished large file versions after this many days
- "daysFromUploadingToHiding": This many days after uploading a file is hidden
### cleanup
@@ -30424,7 +31465,7 @@ If not sure try Y. If Y failed, try N.
y) Yes
n) No
y/n> y
-If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXXXXXXXXXXXXXXXXXXXXX
Log in and authorize rclone for access
Waiting for code...
Got code
@@ -30744,6 +31785,19 @@ Properties:
- Type: string
- Required: false
+#### --box-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_BOX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --box-root-folder-id
Fill in for rclone to use a non root folder as its starting point.
@@ -32071,6 +33125,221 @@ Properties:
+# Cloudinary
+
+This is a backend for the [Cloudinary](https://cloudinary.com/) platform
+
+## About Cloudinary
+
+[Cloudinary](https://cloudinary.com/) is an image and video API platform.
+It is trusted by 1.5 million developers and 10,000 enterprise and hyper-growth companies as a critical part of their tech stack for delivering visually engaging experiences.
+
+## Accounts & Pricing
+
+To use this backend, you need to [create a free account](https://cloudinary.com/users/register_free) on Cloudinary. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See [the pricing details](https://cloudinary.com/pricing).
+
+## Securing Your Credentials
+
+Please refer to the [docs](https://rclone.org/docs/#configuration-encryption-cheatsheet)
+
+## Configuration
+
+Here is an example of making a Cloudinary configuration.
+
+First, create a [cloudinary.com](https://cloudinary.com/users/register_free) account and choose a plan.
+
+You will need to log in and get the `API Key` and `API Secret` for your account from the developer section.
+
+Now run
+
+`rclone config`
+
+Follow the interactive setup process:
+
+```text
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter the name for the new remote.
+name> cloudinary-media-library
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / cloudinary.com
+\ (cloudinary)
+[snip]
+Storage> cloudinary
+
+Option cloud_name.
+You can find your cloudinary.com cloud_name in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+Enter a value.
+cloud_name> ****************************
+
+Option api_key.
+You can find your cloudinary.com api key in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+Enter a value.
+api_key> ****************************
+
+Option api_secret.
+You can find your cloudinary.com api secret in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+This value must be a single character, one of the following: y, g.
+y/g> y
+Enter a value.
+api_secret> ****************************
+
+Option upload_prefix.
+[Upload prefix](https://cloudinary.com/documentation/cloudinary_sdks#configuration_parameters) to specify alternative data center
+Enter a value.
+upload_prefix>
+
+Option upload_preset.
+[Upload presets](https://cloudinary.com/documentation/upload_presets) can be defined for different upload profiles
+Enter a value.
+upload_preset>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: cloudinary
+- api_key: ****************************
+- api_secret: ****************************
+- cloud_name: ****************************
+- upload_prefix:
+- upload_preset:
+
+Keep this "cloudinary-media-library" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+List directories in the top level of your Media Library
+
+`rclone lsd cloudinary-media-library:`
+
+Make a new directory.
+
+`rclone mkdir cloudinary-media-library:directory`
+
+List the contents of a directory.
+
+`rclone ls cloudinary-media-library:directory`
+
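+Sync a local directory to the Media Library, as a sketch (paths are
+placeholders).
+
+`rclone sync /path/to/images cloudinary-media-library:images`
+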
+### Modified time and hashes
+
+Cloudinary automatically stores an MD5 hash and a timestamp for every successfully uploaded object; both are read-only.
+
+
+### Standard options
+
+Here are the Standard options specific to cloudinary (Cloudinary).
+
+#### --cloudinary-cloud-name
+
+Cloudinary Environment Name
+
+Properties:
+
+- Config: cloud_name
+- Env Var: RCLONE_CLOUDINARY_CLOUD_NAME
+- Type: string
+- Required: true
+
+#### --cloudinary-api-key
+
+Cloudinary API Key
+
+Properties:
+
+- Config: api_key
+- Env Var: RCLONE_CLOUDINARY_API_KEY
+- Type: string
+- Required: true
+
+#### --cloudinary-api-secret
+
+Cloudinary API Secret
+
+Properties:
+
+- Config: api_secret
+- Env Var: RCLONE_CLOUDINARY_API_SECRET
+- Type: string
+- Required: true
+
+#### --cloudinary-upload-prefix
+
+Specify the API endpoint for environments out of the US
+
+Properties:
+
+- Config: upload_prefix
+- Env Var: RCLONE_CLOUDINARY_UPLOAD_PREFIX
+- Type: string
+- Required: false
+
+#### --cloudinary-upload-preset
+
+Upload Preset to select asset manipulation on upload
+
+Properties:
+
+- Config: upload_preset
+- Env Var: RCLONE_CLOUDINARY_UPLOAD_PRESET
+- Type: string
+- Required: false
+
+### Advanced options
+
+Here are the Advanced options specific to cloudinary (Cloudinary).
+
+#### --cloudinary-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_CLOUDINARY_ENCODING
+- Type: Encoding
+- Default: Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
+
+#### --cloudinary-eventually-consistent-delay
+
+Wait N seconds for eventual consistency of the databases that support the backend operation
+
+Properties:
+
+- Config: eventually_consistent_delay
+- Env Var: RCLONE_CLOUDINARY_EVENTUALLY_CONSISTENT_DELAY
+- Type: Duration
+- Default: 0s
+
+#### --cloudinary-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_CLOUDINARY_DESCRIPTION
+- Type: string
+- Required: false
+
+
+
# Citrix ShareFile
[Citrix ShareFile](https://sharefile.com) is a secure file sharing and transfer service aimed at businesses.
@@ -32313,6 +33582,19 @@ Properties:
- Type: string
- Required: false
+#### --sharefile-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_SHAREFILE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --sharefile-upload-cutoff
Cutoff for switching to multipart upload.
@@ -33824,6 +35106,19 @@ Properties:
- Type: string
- Required: false
+#### --dropbox-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_DROPBOX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --dropbox-chunk-size
Upload chunk size (< 150Mi).
@@ -34967,12 +36262,12 @@ Properties:
Socks 5 proxy host.
- Supports the format user:pass@host:port, user@host:port, host:port.
+Supports the format user:pass@host:port, user@host:port, host:port.
- Example:
-
- myUser:myPass@localhost:9005
+Example:
+ myUser:myPass@localhost:9005
+
Properties:
@@ -34981,6 +36276,28 @@ Properties:
- Type: string
- Required: false
+#### --ftp-no-check-upload
+
+Don't check the upload is OK
+
+Normally rclone will try to check the upload exists after it has
+uploaded a file to make sure the size and modification time are as
+expected.
+
+This flag stops rclone doing these checks. This enables uploading to
+folders which are write-only.
+
+You will likely also need to use the --inplace flag if uploading to
+a write-only folder.
+
+
+Properties:
+
+- Config: no_check_upload
+- Env Var: RCLONE_FTP_NO_CHECK_UPLOAD
+- Type: bool
+- Default: false
+
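+As a sketch (remote name and paths are placeholders), an upload to a
+write-only folder might look like:
+
+    rclone copy --inplace --ftp-no-check-upload /path/to/file ftp:write-only-dir
+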
#### --ftp-encoding
The encoding for the backend.
@@ -35524,6 +36841,55 @@ the rclone config file, you can set `service_account_credentials` with
the actual contents of the file instead, or set the equivalent
environment variable.
+### Service Account Authentication with Access Tokens
+
+Another option for service account authentication is to use access tokens
+via *gcloud impersonate-service-account*. Access tokens improve security by
+avoiding the use of the JSON key file, which can be breached. They also
+bypass the OAuth login flow, which is simpler on remote VMs that lack a
+web browser.
+
+If you already have a working service account, skip to step 3.
+
+#### 1. Create a service account
+
+    gcloud iam service-accounts create gcs-read-only
+
+You can re-use an existing service account as well (like the one created above).
+
+#### 2. Attach a Viewer (read-only) or User (read-write) role to the service account
+
+ $ PROJECT_ID=my-project
+ $ gcloud --verbose iam service-accounts add-iam-policy-binding \
+ gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
+ --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
+ --role=roles/storage.objectViewer
+
+Use the Google Cloud console to identify a limited role. Some relevant pre-defined roles:
+
+* *roles/storage.objectUser* -- read-write access but no admin privileges
+* *roles/storage.objectViewer* -- read-only access to objects
+* *roles/storage.admin* -- create buckets & administrative roles
+
+#### 3. Get a temporary access token for the service account
+
+ $ gcloud auth application-default print-access-token \
+ --impersonate-service-account \
+ gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com
+
+ ya29.c.c0ASRK0GbAFEewXD [truncated]
+
+#### 4. Update the `access_token` setting
+
+Hit `CTRL-C` when you see *waiting for code*. This will save the config without completing the OAuth flow.
+
+ rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx
+
+#### 5. Run rclone as usual
+
+ rclone ls dev-gcs:${MY_BUCKET}/
+
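+Alternatively, you can supply the token via an environment variable
+instead of saving it in the config file (a sketch using the
+`RCLONE_GCS_ACCESS_TOKEN` variable documented below):
+
+    export RCLONE_GCS_ACCESS_TOKEN="$(gcloud auth application-default print-access-token \
+        --impersonate-service-account gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com)"
+    rclone ls dev-gcs:${MY_BUCKET}/
+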
+### More Info on Service Accounts
+
+* [Official GCS Docs](https://cloud.google.com/compute/docs/access/service-accounts)
+* [Guide on Service Accounts using Key Files (less secure, but similar concepts)](https://forum.rclone.org/t/access-using-google-service-account/24822/2)
+
### Anonymous Access
For downloads of objects that permit public access you can configure rclone
@@ -35947,6 +37313,33 @@ Properties:
- Type: string
- Required: false
+#### --gcs-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_GCS_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
+#### --gcs-access-token
+
+Short-lived access token.
+
+Leave blank normally.
+Needed only if you want use short-lived access token instead of interactive login.
+
+Properties:
+
+- Config: access_token
+- Env Var: RCLONE_GCS_ACCESS_TOKEN
+- Type: string
+- Required: false
+
#### --gcs-directory-markers
Upload an empty object with a trailing slash when a new directory is created
@@ -36576,6 +37969,7 @@ represent the currently available conversions.
| html | text/html | An HTML Document |
| jpg | image/jpeg | A JPEG Image File |
| json | application/vnd.google-apps.script+json | JSON Text Format for Google Apps scripts |
+| md | text/markdown | Markdown Text Format |
| odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation |
| ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
@@ -36732,6 +38126,19 @@ Properties:
- Type: string
- Required: false
+#### --drive-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_DRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --drive-root-folder-id
ID of the root folder.
@@ -37724,6 +39131,41 @@ The result is a JSON array of matches, for example:
}
]
+### rescue
+
+Rescue or delete any orphaned files
+
+ rclone backend rescue remote: [options] [+]
+
+This command rescues or deletes any orphaned files or directories.
+
+Sometimes files can get orphaned in Google Drive. This means that they
+are no longer in any folder in Google Drive.
+
+This command finds those files and either rescues them to a directory
+you specify or deletes them.
+
+Usage:
+
+This can be used in 3 ways.
+
+First, list all orphaned files
+
+ rclone backend rescue drive:
+
+Second, rescue all orphaned files to the directory indicated
+
+ rclone backend rescue drive: "relative/path/to/rescue/directory"
+
+e.g. To rescue all orphans to a directory called "Orphans" in the top level
+
+ rclone backend rescue drive: Orphans
+
+Third, delete all orphaned files (sending them to the trash)
+
+ rclone backend rescue drive: -o delete
+
+
## Limitations
@@ -37849,9 +39291,9 @@ then select "OAuth client ID".
9. It will show you a client ID and client secret. Make a note of these.
- (If you selected "External" at Step 5 continue to Step 9.
+ (If you selected "External" at Step 5 continue to Step 10.
If you chose "Internal" you don't need to publish and can skip straight to
- Step 10 but your destination drive must be part of the same Google Workspace.)
+ Step 11 but your destination drive must be part of the same Google Workspace.)
10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm.
You will also want to add yourself as a test user.
@@ -38188,6 +39630,19 @@ Properties:
- Type: string
- Required: false
+#### --gphotos-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_GPHOTOS_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --gphotos-read-size
Set to read the size of media items.
@@ -38239,6 +39694,40 @@ Properties:
- Type: bool
- Default: false
+#### --gphotos-proxy
+
+Use the gphotosdl proxy for downloading the full resolution images
+
+The Google API will deliver images and video which aren't full
+resolution, and/or have EXIF data missing.
+
+However if you use the gphotosdl proxy then you can download original,
+unchanged images.
+
+This runs a headless browser in the background.
+
+Download the software from [gphotosdl](https://github.com/rclone/gphotosdl)
+
+First run with
+
+ gphotosdl -login
+
+Then once you have logged into google photos close the browser window
+and run
+
+ gphotosdl
+
+Then supply the parameter `--gphotos-proxy "http://localhost:8282"` to make
+rclone use the proxy.
+
+
+Properties:
+
+- Config: proxy
+- Env Var: RCLONE_GPHOTOS_PROXY
+- Type: string
+- Required: false
+
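+As a sketch, with gphotosdl already running as described above, a full
+resolution download might look like (album name and path are
+placeholders):
+
+    rclone copy --gphotos-proxy "http://localhost:8282" "gphotos:album/My Album" /path/to/dir
+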
#### --gphotos-encoding
The encoding for the backend.
@@ -38377,12 +39866,18 @@ is covered by [bug #112096115](https://issuetracker.google.com/issues/112096115)
**The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort**
+**NB** you **can** use the [--gphotos-proxy](#gphotos-proxy) flag to use a
+headless browser to download images in full resolution.
+
### Downloading Videos
When videos are downloaded they are downloaded in a really compressed
version of the video compared to downloading it via the Google Photos
web interface. This is covered by [bug #113672044](https://issuetracker.google.com/issues/113672044).
+**NB** you **can** use the [--gphotos-proxy](#gphotos-proxy) flag to use a
+headless browser to download videos in full resolution.
+
### Duplicates
If a file name is duplicated in a directory then rclone will add the
@@ -39319,6 +40814,19 @@ Properties:
- Type: string
- Required: false
+#### --hidrive-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_HIDRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --hidrive-scope-role
User-level that rclone should use when requesting access from HiDrive.
@@ -39997,6 +41505,171 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+# iCloud Drive
+
+
+## Configuration
+
+The initial setup for an iCloud Drive backend involves getting a trust token/session. This can be done by simply using the regular iCloud password, and accepting the code prompt on another iCloud connected device.
+
+**IMPORTANT**: At the moment an app-specific password won't be accepted. Only use your regular password and 2FA.
+
+`rclone config` walks you through the token creation. The trust token is valid for 30 days, after which you will have to reauthenticate with `rclone reconnect` or `rclone config`.
+
+Here is an example of how to make a remote called `iclouddrive`. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> iclouddrive
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / iCloud Drive
+ \ (iclouddrive)
+[snip]
+Storage> iclouddrive
+Option apple_id.
+Apple ID.
+Enter a value.
+apple_id> APPLEID
+Option password.
+Password.
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Option config_2fa.
+Two-factor authentication: please enter your 2FA code
+Enter a value.
+config_2fa> 2FACODE
+Remote config
+--------------------
+[iclouddrive]
+- type: iclouddrive
+- apple_id: APPLEID
+- password: *** ENCRYPTED ***
+- cookies: ****************************
+- trust_token: ****************************
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+## Advanced Data Protection
+
+ADP is currently unsupported and needs to be disabled.
+
+
+### Standard options
+
+Here are the Standard options specific to iclouddrive (iCloud Drive).
+
+#### --iclouddrive-apple-id
+
+Apple ID.
+
+Properties:
+
+- Config: apple_id
+- Env Var: RCLONE_ICLOUDDRIVE_APPLE_ID
+- Type: string
+- Required: true
+
+#### --iclouddrive-password
+
+Password.
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: password
+- Env Var: RCLONE_ICLOUDDRIVE_PASSWORD
+- Type: string
+- Required: true
+
+#### --iclouddrive-trust-token
+
+Trust token (internal use)
+
+Properties:
+
+- Config: trust_token
+- Env Var: RCLONE_ICLOUDDRIVE_TRUST_TOKEN
+- Type: string
+- Required: false
+
+#### --iclouddrive-cookies
+
+cookies (internal use only)
+
+Properties:
+
+- Config: cookies
+- Env Var: RCLONE_ICLOUDDRIVE_COOKIES
+- Type: string
+- Required: false
+
+### Advanced options
+
+Here are the Advanced options specific to iclouddrive (iCloud Drive).
+
+#### --iclouddrive-client-id
+
+Client id
+
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_ICLOUDDRIVE_CLIENT_ID
+- Type: string
+- Default: "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d"
+
+#### --iclouddrive-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_ICLOUDDRIVE_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+
+#### --iclouddrive-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_ICLOUDDRIVE_DESCRIPTION
+- Type: string
+- Required: false
+
+
+
# Internet Archive
The Internet Archive backend utilizes Items on [archive.org](https://archive.org/)
@@ -40671,6 +42344,19 @@ Properties:
- Type: string
- Required: false
+#### --jottacloud-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_JOTTACLOUD_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --jottacloud-md5-memory-limit
Files bigger than this will be cached on disk to calculate the MD5 if required.
@@ -41533,6 +43219,19 @@ Properties:
- Type: string
- Required: false
+#### --mailru-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_MAILRU_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --mailru-speedup-file-patterns
Comma separated list of file name patterns eligible for speedup (put by hash).
@@ -42518,6 +44217,13 @@ If the resource has multiple user-assigned identities you will need to
unset `env_auth` and set `use_msi` instead. See the [`use_msi`
section](#use_msi).
+If you are operating in disconnected clouds, or private clouds such as
+Azure Stack, you may want to set `disable_instance_discovery = true`.
+This determines whether rclone requests Microsoft Entra instance
+metadata from `https://login.microsoft.com/` before authenticating.
+Setting this to `true` will skip this request, making you responsible
+for ensuring the configured authority is valid and trustworthy.
+
##### Env Auth: 3. Azure CLI credentials (as used by the az tool)
Credentials created with the `az` tool can be picked up using `env_auth`.
@@ -42628,6 +44334,16 @@ be explicitly specified using exactly one of the `msi_object_id`,
If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is
set, this is is equivalent to using `env_auth`.
+#### Azure CLI tool `az` {#use_az}
+
+Set to use the [Azure CLI tool `az`](https://learn.microsoft.com/en-us/cli/azure/)
+as the sole means of authentication.
+
+Setting this can be useful if you wish to use the `az` CLI on a host with
+a System Managed Identity that you do not want to use.
+
+Don't set `env_auth` at the same time.
+
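+A config sketch (account name is a placeholder):
+
+```
+[azcli]
+type = azureblob
+account = myaccount
+use_az = true
+```
+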
#### Anonymous {#anonymous}
If you want to access resources with public anonymous access then set
@@ -42861,6 +44577,28 @@ Properties:
- Type: string
- Required: false
+#### --azureblob-disable-instance-discovery
+
+Skip requesting Microsoft Entra instance metadata
+
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+
+It determines whether rclone requests Microsoft Entra instance
+metadata from `https://login.microsoft.com/` before
+authenticating.
+
+Setting this to true will skip this request, making you responsible
+for ensuring the configured authority is valid and trustworthy.
+
+
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREBLOB_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
#### --azureblob-use-msi
Use a managed service identity to authenticate (only works in Azure).
@@ -42933,6 +44671,26 @@ Properties:
- Type: bool
- Default: false
+#### --azureblob-use-az
+
+Use Azure CLI tool az for authentication
+
+Set to use the [Azure CLI tool az](https://learn.microsoft.com/en-us/cli/azure/)
+as the sole means of authentication.
+
+Setting this can be useful if you wish to use the az CLI on a host with
+a System Managed Identity that you do not want to use.
+
+Don't set env_auth at the same time.
+
+
+Properties:
+
+- Config: use_az
+- Env Var: RCLONE_AZUREBLOB_USE_AZ
+- Type: bool
+- Default: false
+
#### --azureblob-endpoint
Endpoint for the service.
@@ -44119,6 +45877,27 @@ You may try to [verify you account](https://docs.microsoft.com/en-us/azure/activ
Note: If you have a special region, you may need a different host in step 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
+### Using OAuth Client Credential flow
+
+OAuth Client Credential flow will allow rclone to use permissions
+directly associated with the Azure AD Enterprise application, rather
+than adopting the context of an Azure AD user account.
+
+This flow can be enabled by following the steps below:
+
+1. Create the Enterprise App registration in the Azure AD portal and obtain a Client ID and Client Secret as described above.
+2. Ensure that the application has the appropriate permissions and they are assigned as *Application Permissions*
+3. Configure the remote, ensuring that *Client ID* and *Client Secret* are entered correctly.
+4. In the *Advanced Config* section, enter `true` for `client_credentials` and in the `tenant` section enter the tenant ID.
+
+Note that not all connection types work with the client credentials
+flow. In particular the "onedrive" option does not work. You can use
+the "sharepoint" option, or if that does not find the correct drive
+ID, type it in manually with the "driveid" option.
+
+**NOTE** Assigning permissions directly to the application means that
+anyone with the *Client ID* and *Client Secret* can access your
+OneDrive files. Take care to safeguard these credentials.
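+
+As a rough sketch, the remote can also be created non-interactively
+(all IDs and the remote name below are placeholders):
+
+```
+# all IDs and the remote name are placeholders
+rclone config create od-app onedrive \
+    client_id YOUR_CLIENT_ID client_secret YOUR_CLIENT_SECRET \
+    tenant YOUR_TENANT_ID client_credentials true
+```
+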
### Modification times and hashes
@@ -44260,6 +46039,21 @@ Properties:
- "cn"
- Azure and Office 365 operated by Vnet Group in China
+#### --onedrive-tenant
+
+ID of the service principal's tenant. Also called its directory ID.
+
+Set this if using
+- Client Credential flow
+
+
+Properties:
+
+- Config: tenant
+- Env Var: RCLONE_ONEDRIVE_TENANT
+- Type: string
+- Required: false
+
### Advanced options
Here are the Advanced options specific to onedrive (Microsoft OneDrive).
@@ -44301,6 +46095,19 @@ Properties:
- Type: string
- Required: false
+#### --onedrive-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_ONEDRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --onedrive-chunk-size
Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
@@ -45599,7 +47406,9 @@ Properties:
#### --oos-compartment
-Object storage compartment OCID
+Specify the compartment OCID if you need to list buckets.
+
+Listing objects works without a compartment OCID.
Properties:
@@ -45607,7 +47416,7 @@ Properties:
- Env Var: RCLONE_OOS_COMPARTMENT
- Provider: !no_auth
- Type: string
-- Required: true
+- Required: false
#### --oos-region
@@ -47668,6 +49477,10 @@ Pcloud App Client Id - leave blank normally.
client_id>
Pcloud App Client Secret - leave blank normally.
client_secret>
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
Remote config
Use web browser to automatically authenticate rclone with remote?
* Say Y if the machine running rclone has a web browser you can use
@@ -47696,6 +49509,10 @@ y/e/d> y
See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
machine with no Internet browser available.
+Note that if you are using remote config with rclone authorize and your
+pCloud server is in the EU region, you will need to set the hostname in
+'Edit advanced config', otherwise you might get a token error.
+
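+For example, assuming the remote is called mypcloud (eapi.pcloud.com is
+the pCloud EU region API host):
+
+```
+# remote name is illustrative
+rclone config update mypcloud hostname eapi.pcloud.com
+```
+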
Note that rclone runs a webserver on your local machine to collect the
token as returned from pCloud. This only runs from the moment it opens
your browser to the moment you get back the verification code. This
@@ -47845,6 +49662,19 @@ Properties:
- Type: string
- Required: false
+#### --pcloud-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PCLOUD_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --pcloud-encoding
The encoding for the backend.
@@ -48038,68 +49868,29 @@ Properties:
Here are the Advanced options specific to pikpak (PikPak).
-#### --pikpak-client-id
+#### --pikpak-device-id
-OAuth Client Id.
-
-Leave blank normally.
+Device ID used for authorization.
Properties:
-- Config: client_id
-- Env Var: RCLONE_PIKPAK_CLIENT_ID
+- Config: device_id
+- Env Var: RCLONE_PIKPAK_DEVICE_ID
- Type: string
- Required: false
-#### --pikpak-client-secret
+#### --pikpak-user-agent
-OAuth Client Secret.
+HTTP user agent for pikpak.
-Leave blank normally.
+Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" unless overridden with "--pikpak-user-agent" on the command line.
Properties:
-- Config: client_secret
-- Env Var: RCLONE_PIKPAK_CLIENT_SECRET
+- Config: user_agent
+- Env Var: RCLONE_PIKPAK_USER_AGENT
- Type: string
-- Required: false
-
-#### --pikpak-token
-
-OAuth Access Token as a JSON blob.
-
-Properties:
-
-- Config: token
-- Env Var: RCLONE_PIKPAK_TOKEN
-- Type: string
-- Required: false
-
-#### --pikpak-auth-url
-
-Auth server URL.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: auth_url
-- Env Var: RCLONE_PIKPAK_AUTH_URL
-- Type: string
-- Required: false
-
-#### --pikpak-token-url
-
-Token server url.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: token_url
-- Env Var: RCLONE_PIKPAK_TOKEN_URL
-- Type: string
-- Required: false
+- Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
#### --pikpak-root-folder-id
@@ -48143,6 +49934,19 @@ Properties:
- Type: bool
- Default: false
+#### --pikpak-no-media-link
+
+Use original file links instead of media links.
+
+This avoids issues caused by invalid media links, but may reduce download speeds.
+
+Properties:
+
+- Config: no_media_link
+- Env Var: RCLONE_PIKPAK_NO_MEDIA_LINK
+- Type: bool
+- Default: false
+
#### --pikpak-hash-memory-limit
Files bigger than this will be cached on disk to calculate hash if required.
@@ -48659,6 +50463,19 @@ Properties:
- Type: string
- Required: false
+#### --premiumizeme-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PREMIUMIZEME_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --premiumizeme-encoding
The encoding for the backend.
@@ -49242,6 +51059,19 @@ Properties:
- Type: string
- Required: false
+#### --putio-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PUTIO_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --putio-encoding
The encoding for the backend.
@@ -50191,7 +52021,7 @@ and the public key built into it will be used during the authentication process.
If you have a certificate you may use it to sign your public key, creating a
separate SSH user certificate that should be used instead of the plain public key
extracted from the private key. Then you must provide the path to the
-user certificate public key file in `pubkey_file`.
+user certificate public key file in `pubkey_file` or the content of the file in `pubkey`.
Note: This is not the traditional public key paired with your private key,
typically saved as `/home/$USER/.ssh/id_rsa.pub`. Setting this path in
@@ -50529,6 +52359,19 @@ Properties:
- Type: string
- Required: false
+#### --sftp-pubkey
+
+SSH public certificate for public certificate based authentication.
+Set this if you have a signed certificate you want to use for authentication.
+If specified, this will override pubkey_file.
+
+Properties:
+
+- Config: pubkey
+- Env Var: RCLONE_SFTP_PUBKEY
+- Type: string
+- Required: false
+
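+For example, to set it from an existing signed certificate file (the
+remote name and certificate path are illustrative):
+
+```
+# remote name and certificate path are illustrative
+rclone config update mysftp pubkey "$(cat ~/.ssh/id_rsa-cert.pub)"
+```
+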
#### --sftp-pubkey-file
Optional path to public key file.
@@ -51165,7 +53008,7 @@ See [Hetzner's documentation for details](https://docs.hetzner.com/robot/storage
SMB is [a communication protocol to share files over network](https://en.wikipedia.org/wiki/Server_Message_Block).
-This relies on [go-smb2 library](https://github.com/hirochachacha/go-smb2/) for communication with SMB protocol.
+This relies on the [go-smb2 library](https://github.com/CloudSoda/go-smb2/) for communication with the SMB protocol.
Paths are specified as `remote:sharename` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`.
@@ -53181,6 +55024,29 @@ Properties:
- Type: string
- Required: false
+#### --webdav-auth-redirect
+
+Preserve authentication on redirect.
+
+If the server redirects rclone to a new domain when it is trying to
+read a file then normally rclone will drop the Authorization: header
+from the request.
+
+This is standard security practice to avoid sending your credentials
+to an unknown webserver.
+
+However, keeping the credentials is desirable in some circumstances.
+If you are getting an error like "401 Unauthorized" when rclone is
+attempting to read files from the webdav server then you can try this
+option.
+
+
+Properties:
+
+- Config: auth_redirect
+- Env Var: RCLONE_WEBDAV_AUTH_REDIRECT
+- Type: bool
+- Default: false
+
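+For example (the remote name and path are illustrative):
+
+```
+# remote name and path are illustrative
+rclone copy mywebdav:path/to/file.txt /tmp --webdav-auth-redirect
+```
+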
#### --webdav-description
Description of the remote.
@@ -53570,6 +55436,19 @@ Properties:
- Type: string
- Required: false
+#### --yandex-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_YANDEX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --yandex-hard-delete
Delete files permanently rather than putting them into the trash.
@@ -53857,6 +55736,30 @@ Properties:
- Type: string
- Required: false
+#### --zoho-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_ZOHO_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
+#### --zoho-upload-cutoff
+
+Cutoff for switching to large file upload API (>= 10 MiB).
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_ZOHO_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 10Mi
+
#### --zoho-encoding
The encoding for the backend.
@@ -54100,13 +56003,13 @@ $ rclone -L ls /tmp/a
6 b/one
```
-#### --links, -l
+#### --local-links, --links, -l
Normally rclone will ignore symlinks or junction points (which behave
like symlinks under Windows).
If you supply this flag then rclone will copy symbolic links from the local storage,
-and store them as text files, with a '.rclonelink' suffix in the remote storage.
+and store them as text files, with a `.rclonelink` suffix in the remote storage.
The text file will contain the target of the symbolic link (see example).
@@ -54127,7 +56030,7 @@ Copying the entire directory with '-l'
$ rclone copy -l /tmp/a/ remote:/tmp/a/
```
-The remote files are created with a '.rclonelink' suffix
+The remote files are created with a `.rclonelink` suffix
```
$ rclone ls remote:/tmp/a
@@ -54165,7 +56068,7 @@ $ tree /tmp/b
/tmp/b
├── file1.rclonelink
└── file2.rclonelink
-````
+```
If you want to copy a single file with `-l` then you must use the `.rclonelink` suffix.
@@ -54177,6 +56080,10 @@ $ tree /tmp/c
└── file1 -> ./file4
```
+Note that `--local-links` just enables this feature for the local
+backend. `--links` and `-l` enable the feature for all supported
+backends and the VFS.
+
Note that this flag is incompatible with `--copy-links` / `-L`.
### Restricting filesystems with --one-file-system
@@ -54252,9 +56159,9 @@ Properties:
- Type: bool
- Default: false
-#### --links / -l
+#### --local-links
-Translate symlinks to/from regular files with a '.rclonelink' extension.
+Translate symlinks to/from regular files with a '.rclonelink' extension for the local backend.
Properties:
@@ -54602,6 +56509,188 @@ Options:
# Changelog
+## v1.69.0 - 2025-01-12
+
+[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)
+
+* New backends
+    * [iCloud Drive](https://rclone.org/iclouddrive/) (lostb1t)
+ * [Cloudinary](https://rclone.org/cloudinary/) (yuval-cloudinary)
+ * New S3 providers:
+ * [Outscale](https://rclone.org/s3/#outscale) (Matthias Gatto)
+ * [Selectel](https://rclone.org/s3/#selectel) (Nick Craig-Wood)
+* Security fixes
+ * serve sftp: Resolve CVE-2024-45337 - Misuse of ServerConfig.PublicKeyCallback may cause authorization bypass (dependabot)
+ * Rclone was **not** vulnerable to this.
+ * See https://github.com/advisories/GHSA-v778-237x-gjrc
+ * build: Update golang.org/x/net to v0.33.0 to fix CVE-2024-45338 - Non-linear parsing of case-insensitive content (Nick Craig-Wood)
+ * Rclone was **not** vulnerable to this.
+ * See https://github.com/advisories/GHSA-w32m-9786-jp63
+* New Features
+ * accounting: Write the current bwlimit to the log on SIGUSR2 (Nick Craig-Wood)
+ * bisync: Change exit code from 2 to 7 for critically aborted run (albertony)
+ * build
+ * Update all dependencies (Nick Craig-Wood)
+ * Replace Windows-specific `NewLazyDLL` with `NewLazySystemDLL` (albertony)
+ * cmd: Change exit code from 1 to 2 for syntax and usage errors (albertony)
+ * docker serve: make sure all mount and VFS options are parsed (Nick Craig-Wood)
+ * doc fixes (albertony, Alexandre Hamez, Anthony Metzidis, buengese, Dan McArdle, David Seifert, Francesco Frassinelli, Michael R. Davis, Nick Craig-Wood, Pawel Palucha, Randy Bush, remygrandin, Sam Harrison, shenpengfeng, tgfisher, Thomas ten Cate, ToM, Tony Metzidis, vintagefuture, Yxxx)
+ * fs: Make `--links` flag global and add new `--local-links` and `--vfs-links` flags (Nick Craig-Wood)
+ * http servers: Disable automatic authentication skipping for unix sockets in http servers (Moises Lima)
+        * This was making it impossible to use unix sockets with a proxy
+        * This might now cause rclone to need authentication where it didn't before
+ * oauthutil: add support for OAuth client credential flow (Martin Hassack, Nick Craig-Wood)
+ * operations: make log messages consistent for mkdir/rmdir at INFO level (Nick Craig-Wood)
+ * rc: Add `relative` to [vfs/queue-set-expiry](https://rclone.org/rc/#vfs-queue-set-expiry) (Nick Craig-Wood)
+ * serve dlna: Sort the directory entries by directories first then alphabetically by name (Nick Craig-Wood)
+ * serve nfs
+ * Introduce symlink support (Nick Craig-Wood)
+ * Implement `--nfs-cache-type` symlink (Nick Craig-Wood)
+ * size: Make output compatible with `-P` (Nick Craig-Wood)
+ * test makefiles: Add `--flat` flag for making directories with many entries (Nick Craig-Wood)
+* Bug Fixes
+ * accounting
+        * Fix global error accounting (Benjamin Legrand)
+ * Fix debug printing when debug wasn't set (Nick Craig-Wood)
+ * Fix race stopping/starting the stats counter (Nick Craig-Wood)
+ * rc/job: Use mutex for adding listeners thread safety (hayden.pan)
+ * serve docker: Fix incorrect GID assignment (TAKEI Yuya)
+ * serve nfs: Fix missing inode numbers which was messing up `ls -laR` (Nick Craig-Wood)
+ * serve s3: Fix `Last-Modified` timestamp (Nick Craig-Wood)
+ * serve sftp: Fix loading of authorized keys file with comment on last line (albertony)
+* Mount
+ * Introduce symlink support (Filipe Azevedo, Nick Craig-Wood)
+ * Better snap mount error message (divinity76)
+ * mount2: Fix missing `.` and `..` entries (Filipe Azevedo)
+* VFS
+ * With `--vfs-used-is-size` value is calculated and then thrown away (Ilias Ozgur Can Leonard)
+ * Add symlink support to VFS (Filipe Azevedo, Nick Craig-Wood)
+ * This can be enabled with the specific `--vfs-links` flag or the global `--links` flag
+ * Fix open files disappearing from directory listings (Nick Craig-Wood)
+ * Add remote name to vfs cache log messages (Nick Craig-Wood)
+* Cache
+ * Fix parent not getting pinned when remote is a file (nielash)
+* Azure Blob
+ * Add `--azureblob-disable-instance-discovery` (Nick Craig-Wood)
+ * Add `--azureblob-use-az` to force the use of the Azure CLI for auth (Nick Craig-Wood)
+ * Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+* Azurefiles
+ * Fix missing x-ms-file-request-intent header (Nick Craig-Wood)
+* B2
+ * Add `daysFromStartingToCancelingUnfinishedLargeFiles` to `backend lifecycle` command (Louis Laureys)
+* Box
+ * Fix server-side copying a file over existing dst (nielash)
+ * Fix panic when decoding corrupted PEM from JWT file (Nick Craig-Wood)
+* Drive
+ * Add support for markdown format (Noam Ross)
+ * Implement `rclone backend rescue` to rescue orphaned files (Nick Craig-Wood)
+* Dropbox
+ * Fix server side copying over existing object (Nick Craig-Wood)
+ * Fix return status when full to be fatal error (Nick Craig-Wood)
+* FTP
+ * Implement `--ftp-no-check-upload` to allow upload to write only dirs (Nick Craig-Wood)
+ * Fix ls commands returning empty on "Microsoft FTP Service" servers (Francesco Frassinelli)
+* Gofile
+ * Fix server side copying over existing object (Nick Craig-Wood)
+* Google Cloud Storage
+ * Add access token auth with `--gcs-access-token` (Leandro Piccilli)
+ * Update docs on service account access tokens (Anthony Metzidis)
+* Googlephotos
+ * Implement `--gphotos-proxy` to allow download of full resolution media (Nick Craig-Wood)
+ * Fix nil pointer crash on upload (Nick Craig-Wood)
+* HTTP
+ * Fix incorrect URLs with initial slash (Oleg Kunitsyn)
+* Onedrive
+ * Add support for OAuth client credential flow (Martin Hassack, Nick Craig-Wood)
+ * Fix time precision for OneDrive personal (Nick Craig-Wood)
+ * Fix server side copying over existing object (Nick Craig-Wood)
+* Opendrive
+ * Add `rclone about` support to backend (quiescens)
+* Oracle Object Storage
+ * Make specifying `compartmentid` optional (Manoj Ghosh)
+ * Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+* Pikpak
+ * Add option to use original file links (wiserain)
+* Protondrive
+ * Improve performance of Proton Drive backend (Lawrence Murray)
+* Putio
+ * Fix server side copying over existing object (Nick Craig-Wood)
+* S3
+ * Add initial `--s3-directory-bucket` to support AWS Directory Buckets (Nick Craig-Wood)
+ * Add Wasabi `eu-south-1` region (Diego Monti)
+ * Fix download of compressed files from Cloudflare R2 (Nick Craig-Wood)
+ * Rename glacier storage class to flexible retrieval (Henry Lee)
+ * Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+* SFTP
+ * Allow inline ssh public certificate for sftp (Dimitar Ivanov)
+ * Fix nil check when using auth proxy (Nick Craig-Wood)
+* Smb
+    * Add initial support for Kerberos authentication (more work needed) (Francesco Frassinelli)
+ * Fix panic if stat fails (Nick Craig-Wood)
+* Sugarsync
+ * Fix server side copying over existing object (Nick Craig-Wood)
+* WebDAV
+ * Nextcloud: implement backoff and retry for 423 LOCKED errors (Nick Craig-Wood)
+    * Add `--webdav-auth-redirect` to fix 401 unauthorized on redirect (Nick Craig-Wood)
+* Yandex
+ * Fix server side copying over existing object (Nick Craig-Wood)
+* Zoho
+ * Use download server to accelerate downloads (buengese)
+ * Switch to large file upload API for larger files, fix missing URL encoding of filenames for the upload API (buengese)
+ * Print clear error message when missing oauth scope (buengese)
+ * Try to handle rate limits a bit better (buengese)
+ * Add support for private spaces (buengese)
+ * Make upload cutoff configurable (buengese)
+
+## v1.68.2 - 2024-11-15
+
+[See commits](https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
+
+* Security fixes
+ * local backend: CVE-2024-52522: fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
+ * Only affects users using `--metadata` and `--links` and copying files to the local backend
+ * See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+ * build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
+ * This is an issue in a dependency which is used for JWT certificates
+ * See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+* Bug Fixes
+ * accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)
+ * bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)
+ * dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
+ * serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)
+ * doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
+* Local
+ * Fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
+ * Fix `--copy-links` on macOS when cloning (nielash)
+* Onedrive
+ * Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
+* Pikpak
+ * Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+ * Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)
+* S3
+ * Fix crash when using `--s3-download-url` after migration to SDKv2 (Nick Craig-Wood)
+ * Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)
+ * Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+
+## v1.68.1 - 2024-09-24
+
+[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)
+
+* Bug Fixes
+ * build: Fix docker release build (ttionya)
+ * doc fixes (Nick Craig-Wood, Pawel Palucha)
+ * fs
+ * Fix `--dump filters` not always appearing (Nick Craig-Wood)
+ * Fix setting `stringArray` config values from environment variables (Nick Craig-Wood)
+ * rc: Fix default value of `--metrics-addr` (Nick Craig-Wood)
+ * serve docker: Add missing `vfs-read-chunk-streams` option in docker volume driver (Divyam)
+* Onedrive
+ * Fix spurious "Couldn't decode error response: EOF" DEBUG (Nick Craig-Wood)
+* Pikpak
+ * Fix login issue where token retrieval fails (wiserain)
+* S3
+ * Fix rclone ignoring static credentials when `env_auth=true` (Nick Craig-Wood)
+
## v1.68.0 - 2024-09-08
[See commits](https://github.com/rclone/rclone/compare/v1.67.0...v1.68.0)
@@ -54693,6 +56782,7 @@ Options:
* Pcloud
* Implement `SetModTime` (Georg Welzel)
* Implement `OpenWriterAt` feature to enable multipart uploads (Georg Welzel)
+ * Fix failing large file uploads (Georg Welzel)
* Pikpak
* Improve data consistency by ensuring async tasks complete (wiserain)
* Implement custom hash to replace wrong sha1 (wiserain)
@@ -61228,6 +63318,42 @@ put them back in again.` >}}
* Mathieu Moreau
* fsantagostinobietti <6057026+fsantagostinobietti@users.noreply.github.com>
* Oleg Kunitsyn <114359669+hiddenmarten@users.noreply.github.com>
+ * Divyam <47589864+divyam234@users.noreply.github.com>
+ * ttionya
+ * quiescens
+ * rishi.sridhar
+ * Lawrence Murray
+ * Leandro Piccilli
+ * Benjamin Legrand
+ * Noam Ross
+ * lostb1t
+ * Matthias Gatto
+ * André Tran
+ * Simon Bos
+ * Alexandre Hamez <199517+ahamez@users.noreply.github.com>
+ * Randy Bush
+ * Diego Monti
+ * tgfisher
+ * Moises Lima
+ * Dimitar Ivanov
+ * shenpengfeng
+ * Dimitrios Slamaris
+ * vintagefuture <39503528+vintagefuture@users.noreply.github.com>
+ * David Seifert
+ * Michael R. Davis
+ * remygrandin
+ * Ilias Ozgur Can Leonard
+ * divinity76
+ * Martin Hassack
+ * Filipe Azevedo
+ * hayden.pan
+ * Yxxx <45665172+marsjane@users.noreply.github.com>
+ * Thomas ten Cate
+ * Louis Laureys
+ * Henry Lee
+ * ToM
+ * TAKEI Yuya <853320+takei-yuya@users.noreply.github.com>
+ * Francesco Frassinelli
# Contact the rclone project
diff --git a/MANUAL.txt b/MANUAL.txt
index 0c2d0d066..9247df8dc 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Sep 08, 2024
+Jan 12, 2025
Rclone syncs your files to cloud storage
@@ -102,6 +102,7 @@ S3, that work out of the box.)
- Arvan Cloud Object Storage (AOS)
- Citrix ShareFile
- Cloudflare R2
+- Cloudinary
- DigitalOcean Spaces
- Digi Storage
- Dreamhost
@@ -118,6 +119,7 @@ S3, that work out of the box.)
- Hetzner Storage Box
- HiDrive
- HTTP
+- iCloud Drive
- ImageKit
- Internet Archive
- Jottacloud
@@ -145,6 +147,7 @@ S3, that work out of the box.)
- OpenStack Swift
- Oracle Cloud Storage Swift
- Oracle Object Storage
+- Outscale
- ownCloud
- pCloud
- Petabox
@@ -162,6 +165,7 @@ S3, that work out of the box.)
- Seafile
- Seagate Lyve Cloud
- SeaweedFS
+- Selectel
- SFTP
- Sia
- SMB / CIFS
@@ -838,6 +842,7 @@ See the following for detailed instructions for
- Chunker - transparently splits large files for other remotes
- Citrix ShareFile
- Compress
+- Cloudinary
- Combine
- Crypt - to encrypt other remotes
- DigitalOcean Spaces
@@ -855,6 +860,7 @@ See the following for detailed instructions for
- Hetzner Storage Box
- HiDrive
- HTTP
+- iCloud Drive
- Internet Archive
- Jottacloud
- Koofr
@@ -1072,6 +1078,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -1273,6 +1280,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -1424,6 +1432,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -2732,6 +2741,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -3848,6 +3858,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -4843,7 +4854,9 @@ manually:
# Linux
fusermount -u /path/to/local/mount
- # OS X
+    # ... or on some systems
+ fusermount3 -u /path/to/local/mount
+ # OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@@ -5188,8 +5201,9 @@ Note that systemd runs mount units without any environment variables
including PATH or HOME. This means that tilde (~) expansion will not
work and you should provide --config and --cache-dir explicitly as
absolute paths via rclone arguments. Since mounting requires the
-fusermount program, rclone will use the fallback PATH of /bin:/usr/bin
-in this scenario. Please ensure that fusermount is present on this PATH.
+fusermount or fusermount3 program, rclone will use the fallback PATH of
+/bin:/usr/bin in this scenario. Please ensure that
+fusermount/fusermount3 is present on this PATH.
Rclone as Unix mount helper
@@ -5560,6 +5574,48 @@ flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a file
+which appears as a symlink link-to-file.txt would be stored on cloud
+storage as link-to-file.txt.rclonelink and the contents would be the
+path to the symlink destination.
+
+Note that --links enables symlink translation globally in rclone - this
+includes any backend which supports the concept (for example the local
+backend). --vfs-links just enables it for the VFS layer.
+
+This scheme is compatible with that used by the local backend with the
+--local-links flag.
+
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount
+and rclone serve nfs.
+
+It hasn't been tested with the other rclone serve commands yet.
+
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks. For example given this directory tree
+
+ .
+ ├── dir
+ │ └── file.txt
+ └── linked-dir -> dir
+
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt.
+This is not a problem for the tested commands but may be for other
+commands.
+
+Note that there is an outstanding issue with symlink support (issue
+#8245) where duplicate files can be created when symlinks are moved
+into directories where there is a file of the same name (or vice
+versa).
+
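+For example, to mount a remote with symlink translation enabled only at
+the VFS layer (the remote and mountpoint are illustrative):
+
+    rclone mount remote:path /path/to/mountpoint --vfs-links
+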
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by
@@ -5664,6 +5720,7 @@ Options
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for mount
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -5686,6 +5743,7 @@ Options
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5794,6 +5852,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -6027,7 +6086,9 @@ manually:
# Linux
fusermount -u /path/to/local/mount
- # OS X
+    # ... or on some systems
+ fusermount3 -u /path/to/local/mount
+ # OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@@ -6372,8 +6433,9 @@ Note that systemd runs mount units without any environment variables
including PATH or HOME. This means that tilde (~) expansion will not
work and you should provide --config and --cache-dir explicitly as
absolute paths via rclone arguments. Since mounting requires the
-fusermount program, rclone will use the fallback PATH of /bin:/usr/bin
-in this scenario. Please ensure that fusermount is present on this PATH.
+fusermount or fusermount3 program, rclone will use the fallback PATH of
+/bin:/usr/bin in this scenario. Please ensure that
+fusermount/fusermount3 is present on this PATH.
Rclone as Unix mount helper
@@ -6744,6 +6806,48 @@ flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a file
+which appears as a symlink link-to-file.txt would be stored on cloud
+storage as link-to-file.txt.rclonelink and the contents would be the
+path to the symlink destination.
+
+Note that --links enables symlink translation globally in rclone - this
+includes any backend which supports the concept (for example the local
+backend). --vfs-links just enables it for the VFS layer.
+
+This scheme is compatible with that used by the local backend with the
+--local-links flag.
+
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount
+and rclone serve nfs.
+
+It hasn't been tested with the other rclone serve commands yet.
+
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks. For example given this directory tree
+
+ .
+ ├── dir
+ │ └── file.txt
+ └── linked-dir -> dir
+
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt.
+This is not a problem for the tested commands but may be for other
+commands.
+
+Note that there is an outstanding issue with symlink support (issue
+#8245) where duplicate files can be created when symlinks are moved
+into directories where there is a file of the same name (or vice
+versa).
+
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by
@@ -6849,6 +6953,7 @@ Options
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for nfsmount
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -6875,6 +6980,7 @@ Options
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -7129,9 +7235,7 @@ If you set --rc-addr to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to unix:///path/to/socket
-or just by using an absolute path name. Note that unix sockets bypass
-the authentication - this is expected to be done with file system
-permissions.
+or just by using an absolute path name.
--rc-addr may be repeated to listen on multiple IPs/ports/sockets.
Socket activation, described further below, can also be used to
@@ -7159,10 +7263,11 @@ https. You will need to supply the --rc-cert and --rc-key flags. If you
wish to do client side certificate validation then you will need to
supply --rc-client-ca also.
---rc-cert should be a either a PEM encoded certificate or a
-concatenation of that with the CA certificate. --krc-ey should be the
-PEM encoded private key and --rc-client-ca should be the PEM encoded
-client certificate authority certificate.
+--rc-cert must be set to the path of a file containing either a PEM
+encoded certificate, or a concatenation of that with the CA certificate.
+--rc-key must be set to the path of a file with the PEM encoded private
+key. If setting --rc-client-ca, it should be set to the path of a file
+with PEM encoded client certificate authority certificates.
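+
+For example (the certificate and key file names are illustrative):
+
+    # file names are illustrative
+    rclone rcd --rc-cert server.crt --rc-key server.key
+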
--rc-min-tls-version is minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
@@ -7171,7 +7276,7 @@ Socket activation
Instead of the listening addresses specified above, rclone will listen
to all FDs passed by the service manager, if any (and ignore any
-arguments passed by --rc-addr`).
+arguments passed by --rc-addr).
This allows rclone to be a socket-activated service. It can be
configured with .socket and .service unit files as described in
@@ -7298,7 +7403,7 @@ RC Options
Flags to control the Remote Control API
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -7817,6 +7922,48 @@ flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a file
+which appears as a symlink link-to-file.txt would be stored on cloud
+storage as link-to-file.txt.rclonelink and the contents would be the
+path to the symlink destination.
+
+Note that --links enables symlink translation globally in rclone - this
+includes any backend which supports the concept (for example the local
+backend). --vfs-links just enables it for the VFS layer.
+
+This scheme is compatible with that used by the local backend with the
+--local-links flag.
+
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount
+and rclone serve nfs.
+
+It hasn't been tested with the other rclone serve commands yet.
+
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks. For example given this directory tree
+
+ .
+ ├── dir
+ │ └── file.txt
+ └── linked-dir -> dir
+
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt.
+This is not a problem for the tested commands but may be for other
+commands.
+
+Note that there is an outstanding issue with symlink support (issue
+#8245) where duplicate files can be created when symlinks are moved
+into directories where there is a file of the same name (or vice
+versa).
+
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by
@@ -7911,6 +8058,7 @@ Options
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
--interface stringArray The interface to use for SSDP (repeat as necessary)
+ --link-perms FileMode Link permissions (default 666)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
@@ -7929,6 +8077,7 @@ Options
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -8318,6 +8467,48 @@ flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a file
+which appears as a symlink link-to-file.txt would be stored on cloud
+storage as link-to-file.txt.rclonelink and the contents would be the
+path to the symlink destination.
+
+Note that --links enables symlink translation globally in rclone - this
+includes any backend which supports the concept (for example the local
+backend). --vfs-links just enables it for the VFS layer.
+
+This scheme is compatible with that used by the local backend with the
+--local-links flag.
+
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount
+and rclone serve nfs.
+
+It hasn't been tested with the other rclone serve commands yet.
+
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks. For example given this directory tree
+
+ .
+ ├── dir
+ │ └── file.txt
+ └── linked-dir -> dir
+
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt.
+This is not a problem for the tested commands but may be for other
+commands.
+
+Note that there is an outstanding issue with symlink support (issue
+#8245) where duplicate files can be created when symlinks are moved
+into directories where there is a file of the same name (or vice
+versa).
+
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by
@@ -8424,6 +8615,7 @@ Options
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for docker
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -8449,6 +8641,7 @@ Options
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -8823,6 +9016,48 @@ flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a file
+which appears as a symlink link-to-file.txt would be stored on cloud
+storage as link-to-file.txt.rclonelink and the contents would be the
+path to the symlink destination.
+
+Note that --links enables symlink translation globally in rclone - this
+includes any backend which supports the concept (for example the local
+backend). --vfs-links just enables it for the VFS layer.
+
+This scheme is compatible with that used by the local backend with the
+--local-links flag.
+
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount
+and rclone serve nfs.
+
+It hasn't been tested with the other rclone serve commands yet.
+
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks. For example given this directory tree
+
+ .
+ ├── dir
+ │ └── file.txt
+ └── linked-dir -> dir
+
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt.
+This is not a problem for the tested commands but may be for other
+commands.
+
+Note that there is an outstanding issue with symlink support (issue
+#8245) where duplicate files can be created when symlinks are moved
+into directories where there is a file of the same name (or vice
+versa).
+
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by
@@ -8989,6 +9224,7 @@ Options
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for ftp
--key string TLS PEM Private key
+ --link-perms FileMode Link permissions (default 666)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -9009,6 +9245,7 @@ Options
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -9081,9 +9318,7 @@ If you set --addr to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to unix:///path/to/socket
-or just by using an absolute path name. Note that unix sockets bypass
-the authentication - this is expected to be done with file system
-permissions.
+or just by using an absolute path name.
--addr may be repeated to listen on multiple IPs/ports/sockets. Socket
activation, described further below, can also be used to accomplish the
@@ -9111,10 +9346,11 @@ https. You will need to supply the --cert and --key flags. If you wish
to do client side certificate validation then you will need to supply
--client-ca also.
---cert should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. --key should be the PEM encoded private
-key and --client-ca should be the PEM encoded client certificate
-authority certificate.
+--cert must be set to the path of a file containing either a PEM encoded
+certificate, or a concatenation of that with the CA certificate. --key
+must be set to the path of a file with the PEM encoded private key. If
+setting --client-ca, it should be set to the path of a file with PEM
+encoded client certificate authority certificates.
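+
+For example (the certificate and key file names are illustrative):
+
+    # file names are illustrative
+    rclone serve http remote:path --cert server.crt --key server.key
+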
--min-tls-version is minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
@@ -9123,7 +9359,7 @@ Socket activation
Instead of the listening addresses specified above, rclone will listen
to all FDs passed by the service manager, if any (and ignore any
-arguments passed by --addr`).
+arguments passed by --addr).
This allows rclone to be a socket-activated service. It can be
configured with .socket and .service unit files as described in
@@ -9537,6 +9773,48 @@ flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a file
+which appears as a symlink link-to-file.txt would be stored on cloud
+storage as link-to-file.txt.rclonelink and the contents would be the
+path to the symlink destination.
+
+Note that --links enables symlink translation globally in rclone - this
+includes any backend which supports the concept (for example the local
+backend). --vfs-links just enables it for the VFS layer.
+
+This scheme is compatible with that used by the local backend with the
+--local-links flag.
+
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount
+and rclone serve nfs.
+
+It hasn't been tested with the other rclone serve commands yet.
+
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks. For example given this directory tree
+
+ .
+ ├── dir
+ │ └── file.txt
+ └── linked-dir -> dir
+
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt.
+This is not a problem for the tested commands but may be for other
+commands.
+
+Note that there is an outstanding issue with symlink support (issue
+#8245) where duplicate files can be created when symlinks are moved
+into directories where there is a file of the same name (or vice
+versa).
+
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by
@@ -9698,15 +9976,16 @@ Options
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -9732,6 +10011,7 @@ Options
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -9821,7 +10101,10 @@ connected clients.
uses an on disk cache, but the cache entries are held as symlinks.
Rclone will use the handle of the underlying file as the NFS handle
which improves performance. This sort of cache can't be backed up and
-restored as the underlying handles will change. This is Linux only.
+restored as the underlying handles will change. This is Linux only. It
+requires running rclone as root or with CAP_DAC_READ_SEARCH. You can
+grant rclone this extra permission by running the following on the
+rclone binary: sudo setcap cap_dac_read_search+ep /path/to/rclone.
--nfs-cache-handle-limit controls the maximum number of cached NFS
handles stored by the caching handler. This should not be set too low or
@@ -10146,6 +10429,48 @@ flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a file
+which appears as a symlink link-to-file.txt would be stored on cloud
+storage as link-to-file.txt.rclonelink and the contents would be the
+path to the symlink destination.
+
+Note that --links enables symlink translation globally in rclone - this
+includes any backend which supports the concept (for example the local
+backend). --vfs-links just enables it for the VFS layer.
+
+This scheme is compatible with that used by the local backend with the
+--local-links flag.
+
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount
+and rclone serve nfs.
+
+It hasn't been tested with the other rclone serve commands yet.
+
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks. For example given this directory tree
+
+ .
+ ├── dir
+ │ └── file.txt
+ └── linked-dir -> dir
+
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt.
+This is not a problem for the tested commands but may be for other
+commands.
+
+Note that there is an outstanding issue with symlink support (issue
+#8245) where duplicate files can be created when symlinks are moved
+into directories where there is a file of the same name (or vice
+versa).
+
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by
@@ -10238,6 +10563,7 @@ Options
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for nfs
+ --link-perms FileMode Link permissions (default 666)
--nfs-cache-dir string The directory the NFS handle cache will use if set
--nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000)
--nfs-cache-type memory|disk|symlink Type of NFS handle cache to use (default memory)
@@ -10257,6 +10583,7 @@ Options
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -10397,9 +10724,7 @@ If you set --addr to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to unix:///path/to/socket
-or just by using an absolute path name. Note that unix sockets bypass
-the authentication - this is expected to be done with file system
-permissions.
+or just by using an absolute path name.
--addr may be repeated to listen on multiple IPs/ports/sockets. Socket
activation, described further below, can also be used to accomplish the
@@ -10427,10 +10752,11 @@ https. You will need to supply the --cert and --key flags. If you wish
to do client side certificate validation then you will need to supply
--client-ca also.
---cert should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. --key should be the PEM encoded private
-key and --client-ca should be the PEM encoded client certificate
-authority certificate.
+--cert must be set to the path of a file containing either a PEM encoded
+certificate, or a concatenation of that with the CA certificate. --key
+must be set to the path of a file with the PEM encoded private key. If
+setting --client-ca, it should be set to the path of a file with PEM
+encoded client certificate authority certificates.
--min-tls-version is minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
@@ -10439,7 +10765,7 @@ Socket activation
Instead of the listening addresses specified above, rclone will listen
to all FDs passed by the service manager, if any (and ignore any
-arguments passed by --addr`).
+arguments passed by --addr).
This allows rclone to be a socket-activated service. It can be
configured with .socket and .service unit files as described in
@@ -10488,11 +10814,11 @@ Options
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
@@ -10677,9 +11003,7 @@ If you set --addr to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to unix:///path/to/socket
-or just by using an absolute path name. Note that unix sockets bypass
-the authentication - this is expected to be done with file system
-permissions.
+or just by using an absolute path name.
--addr may be repeated to listen on multiple IPs/ports/sockets. Socket
activation, described further below, can also be used to accomplish the
@@ -10707,10 +11031,11 @@ https. You will need to supply the --cert and --key flags. If you wish
to do client side certificate validation then you will need to supply
--client-ca also.
---cert should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. --key should be the PEM encoded private
-key and --client-ca should be the PEM encoded client certificate
-authority certificate.
+--cert must be set to the path of a file containing either a PEM encoded
+certificate, or a concatenation of that with the CA certificate. --key
+must be set to the path of a file with the PEM encoded private key. If
+setting --client-ca, it should be set to the path of a file with PEM
+encoded client certificate authority certificates.
--min-tls-version is minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
@@ -10719,7 +11044,7 @@ Socket activation
Instead of the listening addresses specified above, rclone will listen
to all FDs passed by the service manager, if any (and ignore any
-arguments passed by --addr`).
+arguments passed by --addr).
This allows rclone to be a socket-activated service. It can be
configured with .socket and .service unit files as described in
@@ -11032,6 +11357,48 @@ flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a file
+which appears as a symlink link-to-file.txt would be stored on cloud
+storage as link-to-file.txt.rclonelink and the contents would be the
+path to the symlink destination.
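+
+For example, a minimal round trip with the local backend might look
+like this (the remote name and paths are illustrative):
+
+    ln -s target.txt link-to-file.txt
+    rclone copy -l . remote:backup          # stored as link-to-file.txt.rclonelink
+    rclone copy -l remote:backup restored/  # recreated as a symlink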
+
+Note that --links enables symlink translation globally in rclone - this
+includes any backend which supports the concept (for example the local
+backend). --vfs-links just enables it for the VFS layer.
+
+This scheme is compatible with that used by the local backend with the
+--local-links flag.
+
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount
+and rclone serve nfs.
+
+It hasn't been tested with the other rclone serve commands yet.
+
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks. For example given this directory tree
+
+ .
+ ├── dir
+ │ └── file.txt
+ └── linked-dir -> dir
+
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt.
+This is not a problem for the tested commands but may be for other
+commands.
+
+Note that there is an outstanding issue with symlink support (issue
+#8245): duplicate files can be created when symlinks are moved into
+directories where there is a file of the same name (or vice versa).
+
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by
@@ -11123,8 +11490,8 @@ Options
--auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
@@ -11133,7 +11500,8 @@ Options
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for s3
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -11159,6 +11527,7 @@ Options
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -11575,6 +11944,48 @@ flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a file
+which appears as a symlink link-to-file.txt would be stored on cloud
+storage as link-to-file.txt.rclonelink and the contents would be the
+path to the symlink destination.
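+
+For example, a minimal round trip with the local backend might look
+like this (the remote name and paths are illustrative):
+
+    ln -s target.txt link-to-file.txt
+    rclone copy -l . remote:backup          # stored as link-to-file.txt.rclonelink
+    rclone copy -l remote:backup restored/  # recreated as a symlink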
+
+Note that --links enables symlink translation globally in rclone - this
+includes any backend which supports the concept (for example the local
+backend). --vfs-links just enables it for the VFS layer.
+
+This scheme is compatible with that used by the local backend with the
+--local-links flag.
+
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount
+and rclone serve nfs.
+
+It hasn't been tested with the other rclone serve commands yet.
+
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks. For example given this directory tree
+
+ .
+ ├── dir
+ │ └── file.txt
+ └── linked-dir -> dir
+
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt.
+This is not a problem for the tested commands but may be for other
+commands.
+
+Note that there is an outstanding issue with symlink support (issue
+#8245): duplicate files can be created when symlinks are moved into
+directories where there is a file of the same name (or vice versa).
+
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by
@@ -11741,6 +12152,7 @@ Options
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for sftp
--key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
+ --link-perms FileMode Link permissions (default 666)
--no-auth Allow connections with no authentication if set
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
@@ -11761,6 +12173,7 @@ Options
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -11876,9 +12289,7 @@ If you set --addr to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to unix:///path/to/socket
-or just by using an absolute path name. Note that unix sockets bypass
-the authentication - this is expected to be done with file system
-permissions.
+or just by using an absolute path name.
--addr may be repeated to listen on multiple IPs/ports/sockets. Socket
activation, described further below, can also be used to accomplish the
@@ -11906,10 +12317,11 @@ https. You will need to supply the --cert and --key flags. If you wish
to do client side certificate validation then you will need to supply
--client-ca also.
---cert should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. --key should be the PEM encoded private
-key and --client-ca should be the PEM encoded client certificate
-authority certificate.
+--cert must be set to the path of a file containing either a PEM encoded
+certificate, or a concatenation of that with the CA certificate. --key
+must be set to the path of a file with the PEM encoded private key. If
+setting --client-ca, it should be set to the path of a file with PEM
+encoded client certificate authority certificates.
--min-tls-version is minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
@@ -11918,7 +12330,7 @@ Socket activation
Instead of the listening addresses specified above, rclone will listen
to all FDs passed by the service manager, if any (and ignore any
-arguments passed by --addr`).
+arguments passed by --addr).
This allows rclone to be a socket-activated service. It can be
configured with .socket and .service unit files as described in
@@ -12332,6 +12744,48 @@ flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
+Symlinks
+
+By default the VFS does not support symlinks. However this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a file
+which appears as a symlink link-to-file.txt would be stored on cloud
+storage as link-to-file.txt.rclonelink and the contents would be the
+path to the symlink destination.
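+
+For example, a minimal round trip with the local backend might look
+like this (the remote name and paths are illustrative):
+
+    ln -s target.txt link-to-file.txt
+    rclone copy -l . remote:backup          # stored as link-to-file.txt.rclonelink
+    rclone copy -l remote:backup restored/  # recreated as a symlink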
+
+Note that --links enables symlink translation globally in rclone - this
+includes any backend which supports the concept (for example the local
+backend). --vfs-links just enables it for the VFS layer.
+
+This scheme is compatible with that used by the local backend with the
+--local-links flag.
+
+The --vfs-links flag has been designed for rclone mount, rclone nfsmount
+and rclone serve nfs.
+
+It hasn't been tested with the other rclone serve commands yet.
+
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks. For example given this directory tree
+
+ .
+ ├── dir
+ │ └── file.txt
+ └── linked-dir -> dir
+
+The VFS will correctly resolve linked-dir but not linked-dir/file.txt.
+This is not a problem for the tested commands but may be for other
+commands.
+
+Note that there is an outstanding issue with symlink support (issue
+#8245): duplicate files can be created when symlinks are moved into
+directories where there is a file of the same name (or vice versa).
+
VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only by
@@ -12493,8 +12947,8 @@ Options
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
@@ -12503,7 +12957,8 @@ Options
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -12529,6 +12984,7 @@ Options
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -12760,6 +13216,7 @@ Options
    --chargen                  Fill files with an ASCII chargen pattern
--files int Number of files to create (default 1000)
--files-per-directory int Average number of files per directory (default 10)
+ --flat If set create all files in the root directory
-h, --help help for makefiles
--max-depth int Maximum depth of directory hierarchy (default 10)
--max-file-size SizeSuffix Maximum size of files to create (default 100)
@@ -14240,6 +14697,21 @@ The options mean
During rmdirs it will not remove the root directory, even if it's empty.
+--links / -l
+
+Normally rclone will ignore symlinks or junction points (which behave
+like symlinks under Windows).
+
+If you supply this flag then rclone will copy symbolic links from any
+supported backend, and store them as text files with a .rclonelink
+suffix in the destination.
+
+The text file will contain the target of the symbolic link.
+
+The --links / -l flag enables this feature for all supported backends
+and the VFS. There are individual flags for just enabling it for the VFS
+--vfs-links and the local backend --local-links if required.
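+
+For example, to copy a directory tree preserving any symlinks it
+contains (paths are illustrative):
+
+    rclone copy -l /home/user/files remote:backup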
+
--log-file=FILE
Log all of rclone's output to FILE. This is not active by default. This
@@ -15660,9 +16132,9 @@ message if the retry was successful.
List of exit codes
-- 0 - success
-- 1 - Syntax or usage error
-- 2 - Error not otherwise categorised
+- 0 - Success
+- 1 - Error not otherwise categorised
+- 2 - Syntax or usage error
- 3 - Directory not found
- 4 - File not found
- 5 - Temporary error (one that more retries might fix) (Retry errors)
@@ -15704,6 +16176,29 @@ they take exactly the same form.
The options set by environment variables can be seen with the -vv flag,
e.g. rclone version -vv.
+Options that can appear multiple times (type stringArray) are treated
+slightly differently, as environment variables can only be defined
+once. In order to allow a simple mechanism for adding one or many
+items, the input is treated as a CSV encoded string. For example
+
+ ----------------------------------------------------------------------------------------
+ Environment Variable Equivalent options
+ ------------------------------------------------------ ---------------------------------
+ RCLONE_EXCLUDE="*.jpg" --exclude "*.jpg"
+
+ RCLONE_EXCLUDE="*.jpg,*.png" --exclude "*.jpg"
+ --exclude "*.png"
+
+ RCLONE_EXCLUDE='"*.jpg","*.png"' --exclude "*.jpg"
+ --exclude "*.png"
+
+ RCLONE_EXCLUDE='"/directory with comma , in it /**"' `--exclude "/directory with comma
+ , in it /**"
+ ----------------------------------------------------------------------------------------
+
+If stringArray options are defined both as environment variables and as
+options on the command line, then all the values will be used.
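+
+For example, the following is equivalent to passing --exclude "*.jpg"
+and --exclude "*.png" on the command line (paths are illustrative):
+
+    export RCLONE_EXCLUDE="*.jpg,*.png"
+    rclone copy /src remote:dst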
+
Config file
You can set defaults for values in the config file on an individual
@@ -16360,10 +16855,10 @@ with --exclude, --exclude-from, --filter or --filter-from, you must use
include rules for all the files you want in the include statement. For
more flexibility use the --filter-from flag.
---exclude-from has no effect when combined with --files-from or
+--include-from has no effect when combined with --files-from or
--files-from-raw flags.
---exclude-from followed by - reads filter rules from standard input.
+--include-from followed by - reads filter rules from standard input.
--filter - Add a file-filtering rule
@@ -16399,6 +16894,10 @@ processed in.
Arrange the order of filter rules with the most restrictive first and
work down.
+Lines starting with # or ; are ignored, and can be used to write
+comments. Inline comments are not supported. Use -vv --dump filters to
+see how they appear in the final regexp.
+
E.g. for filter-file.txt:
# a sample filter rule file
@@ -16406,6 +16905,7 @@ E.g. for filter-file.txt:
+ *.jpg
+ *.png
+ file2.avi
+ - /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
@@ -16451,6 +16951,8 @@ Other filter flags (--include, --include-from, --exclude,
whitespace is stripped from the input lines. Lines starting with # or ;
are ignored.
+--files-from followed by - reads the list of files from standard input.
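+
+For example, to copy only the files listed by another command (paths
+are illustrative, and GNU find's %P prints paths relative to /src as
+--files-from expects):
+
+    find /src -name '*.log' -printf '%P\n' | rclone copy --files-from - /src remote:logs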
+
Rclone commands with a --files-from flag traverse the remote, treating
the names in --files-from as a set of filters.
@@ -16816,31 +17318,31 @@ Supported parameters
--rc
-Flag to start the http server listen on remote requests
+Flag to start the http server listening for remote requests.
--rc-addr=IP
-IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+IPaddress:Port or :Port to bind server to. (default "localhost:5572").
--rc-cert=KEY
-SSL PEM key (concatenation of certificate and CA certificate)
+SSL PEM key (concatenation of certificate and CA certificate).
--rc-client-ca=PATH
-Client certificate authority to verify clients with
+Client certificate authority to verify clients with.
--rc-htpasswd=PATH
-htpasswd file - if not provided no authentication is done
+htpasswd file - if not provided no authentication is done.
--rc-key=PATH
-SSL PEM Private key
+TLS PEM private key file.
--rc-max-header-bytes=VALUE
-Maximum size of request header (default 4096)
+Maximum size of request header (default 4096).
--rc-min-tls-version=VALUE
@@ -16857,15 +17359,15 @@ Password for authentication.
--rc-realm=VALUE
-Realm for authentication (default "rclone")
+Realm for authentication (default "rclone").
--rc-server-read-timeout=DURATION
-Timeout for server reading data (default 1h0m0s)
+Timeout for server reading data (default 1h0m0s).
--rc-server-write-timeout=DURATION
-Timeout for server writing data (default 1h0m0s)
+Timeout for server writing data (default 1h0m0s).
--rc-serve
@@ -16985,7 +17487,7 @@ Accessing the remote control via the rclone rc command
Rclone itself implements the remote control protocol in its rclone rc
command.
-You can use it like this
+You can use it like this:
$ rclone rc rc/noop param1=one param2=two
{
@@ -16993,8 +17495,19 @@ You can use it like this
"param2": "two"
}
-Run rclone rc on its own to see the help for the installed remote
-control commands.
+If the remote is running on a different URL than the default
+http://localhost:5572/, use the --url option to specify it:
+
+ $ rclone rc --url http://some.remote:1234/ rc/noop
+
+Or, if the remote is listening on a Unix socket, use the --unix-socket
+option instead:
+
+ $ rclone rc --unix-socket /tmp/rclone.sock rc/noop
+
+Run rclone rc on its own, without any commands, to see the help for the
+installed remote control commands. Note that this also needs to connect
+to the remote server.
JSON input
@@ -18940,6 +19453,8 @@ This takes the following parameters
- fs - select the VFS in use (optional)
- id - a numeric ID as returned from vfs/queue
- expiry - a new expiry time as floating point seconds
+- relative - if set, expiry is to be treated as relative to the
+ current expiry (optional, boolean)
This returns an empty result on success, or an error.
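+
+For example, assuming the command documented here is vfs/queue-set-expiry
+and using an id previously returned by vfs/queue, the expiry could be
+pushed back by 60 seconds like this:
+
+    rclone rc vfs/queue-set-expiry id=78 expiry=60 relative=true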
@@ -19216,6 +19731,7 @@ Here is an overview of the major features of each cloud storage system.
Backblaze B2 SHA1 R/W No No R/W -
Box SHA1 R/W Yes No - -
Citrix ShareFile MD5 R/W Yes No - -
+ Cloudinary MD5 R No Yes - -
Dropbox DBHASH ¹ R Yes No - -
Enterprise File Fabric - R/W Yes No R/W -
Files.com MD5, CRC32 DR/W Yes No R -
@@ -19227,6 +19743,7 @@ Here is an overview of the major features of each cloud storage system.
HDFS - R/W No No - -
HiDrive HiDrive ¹² R/W No No - -
HTTP - R No No R -
+ iCloud Drive - R No No - -
Internet Archive MD5, SHA1, CRC32 R/W ¹¹ No No - RWU
Jottacloud MD5 R/W Yes No R RW
Koofr MD5 - Yes No - -
@@ -19240,7 +19757,7 @@ Here is an overview of the major features of each cloud storage system.
OpenDrive MD5 R/W Yes Partial ⁸ - -
OpenStack Swift MD5 R/W No No R/W -
Oracle Object Storage MD5 R/W No No R/W -
- pCloud MD5, SHA1 ⁷ R No No W -
+ pCloud MD5, SHA1 ⁷ R/W No No W -
PikPak MD5 R No No R -
Pixeldrain SHA256 R/W No No R RW
premiumize.me - - Yes No R -
@@ -19759,6 +20276,8 @@ upon backend-specific capabilities.
Dropbox Yes Yes Yes Yes No No Yes No Yes Yes Yes
+ Cloudinary No No No No No No Yes No No No No
+
Enterprise File Yes Yes Yes Yes Yes No No No No No Yes
Fabric
@@ -19768,7 +20287,7 @@ upon backend-specific capabilities.
Gofile Yes Yes Yes Yes No No Yes No Yes Yes Yes
- Google Cloud Yes Yes No No No Yes Yes No No No No
+ Google Cloud Yes Yes No No No No Yes No No No No
Storage
Google Drive Yes Yes Yes Yes Yes Yes Yes No Yes Yes Yes
@@ -19781,6 +20300,8 @@ upon backend-specific capabilities.
HTTP No No No No No No No No No No Yes
+ iCloud Drive Yes Yes Yes Yes No No No No No No Yes
+
ImageKit Yes Yes Yes No No No No No No No Yes
Internet No Yes No No Yes Yes No No Yes Yes No
@@ -19805,7 +20326,7 @@ upon backend-specific capabilities.
Microsoft Yes Yes Yes Yes Yes Yes ⁵ No No Yes Yes Yes
OneDrive
- OpenDrive Yes Yes Yes Yes No No No No No No Yes
+ OpenDrive Yes Yes Yes Yes No No No No No Yes Yes
OpenStack Swift Yes ¹ Yes No No No Yes Yes No No Yes No
@@ -19971,6 +20492,7 @@ Flags for anything which can copy a file.
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -20047,7 +20569,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
Performance
@@ -20172,7 +20694,7 @@ RC
Flags to control the Remote Control API.
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -20205,7 +20727,7 @@ Metrics
Flags to control the Metrics HTTP endpoint.
- --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
+ --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -20242,6 +20764,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
+ --azureblob-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -20259,6 +20782,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-tenant string ID of the service principal's tenant. Also called its directory ID
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
+ --azureblob-use-az Use Azure CLI tool az for authentication
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -20308,6 +20832,7 @@ Backend-only flags (these can be set in the config file also).
--box-auth-url string Auth server URL
--box-box-config-file string Box App config.json location
--box-box-sub-type string (default "user")
+ --box-client-credentials Use client credentials OAuth flow
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
@@ -20346,6 +20871,14 @@ Backend-only flags (these can be set in the config file also).
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
+ --cloudinary-api-key string Cloudinary API Key
+ --cloudinary-api-secret string Cloudinary API Secret
+ --cloudinary-cloud-name string Cloudinary Environment Name
+ --cloudinary-description string Description of the remote
+ --cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
+ --cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--combine-description string Description of the remote
--combine-upstreams SpaceSepList Upstreams for combining
--compress-description string Description of the remote
@@ -20372,6 +20905,7 @@ Backend-only flags (these can be set in the config file also).
--drive-auth-owner-only Only consider files owned by the authenticated user
--drive-auth-url string Auth server URL
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
+ --drive-client-credentials Use client credentials OAuth flow
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
@@ -20422,6 +20956,7 @@ Backend-only flags (these can be set in the config file also).
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
+ --dropbox-client-credentials Use client credentials OAuth flow
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
@@ -20468,6 +21003,7 @@ Backend-only flags (these can be set in the config file also).
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
+ --ftp-no-check-upload Don't check the upload is OK
--ftp-pass string FTP password (obscured)
--ftp-port int FTP port number (default 21)
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
@@ -20476,10 +21012,12 @@ Backend-only flags (these can be set in the config file also).
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username (default "$USER")
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
+ --gcs-access-token string Short-lived access token
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
+ --gcs-client-credentials Use client credentials OAuth flow
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
@@ -20508,11 +21046,13 @@ Backend-only flags (these can be set in the config file also).
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ --gphotos-client-credentials Use client credentials OAuth flow
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-description string Description of the remote
--gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
+ --gphotos-proxy string Use the gphotosdl proxy for downloading the full resolution images
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
@@ -20531,6 +21071,7 @@ Backend-only flags (these can be set in the config file also).
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
--hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi)
+ --hidrive-client-credentials Use client credentials OAuth flow
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-description string Description of the remote
@@ -20550,6 +21091,11 @@ Backend-only flags (these can be set in the config file also).
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
+ --iclouddrive-apple-id string Apple ID
+ --iclouddrive-client-id string Client id (default "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d")
+ --iclouddrive-description string Description of the remote
+ --iclouddrive-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --iclouddrive-password string Password (obscured)
--imagekit-description string Description of the remote
--imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
--imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
@@ -20567,6 +21113,7 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
+ --jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
--jottacloud-description string Description of the remote
@@ -20588,11 +21135,11 @@ Backend-only flags (these can be set in the config file also).
--koofr-user string Your user name
--linkbox-description string Description of the remote
--linkbox-token string Token from https://www.linkbox.to/admin/account
- -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-description string Description of the remote
--local-encoding Encoding The encoding for the backend (default Slash,Dot)
+ --local-links Translate symlinks to/from regular files with a '.rclonelink' extension for the local backend
--local-no-check-updated Don't check to see if the files change during upload
--local-no-clone Disable reflink cloning for server-side copies
--local-no-preallocate Disable preallocation of disk space for transferred files
@@ -20604,6 +21151,7 @@ Backend-only flags (these can be set in the config file also).
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-auth-url string Auth server URL
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
+ --mailru-client-credentials Use client credentials OAuth flow
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
--mailru-description string Description of the remote
@@ -20634,6 +21182,7 @@ Backend-only flags (these can be set in the config file also).
--onedrive-auth-url string Auth server URL
--onedrive-av-override Allows download of files the server thinks has a virus
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
+ --onedrive-client-credentials Use client credentials OAuth flow
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-delta If set rclone will use delta listing to implement recursive listings
@@ -20653,11 +21202,12 @@ Backend-only flags (these can be set in the config file also).
--onedrive-region string Choose national cloud region for OneDrive (default "global")
--onedrive-root-folder-id string ID of the root folder
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead
+ --onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
- --oos-compartment string Object storage compartment OCID
+ --oos-compartment string Specify compartment OCID, if you need to list buckets
--oos-config-file string Path to OCI config file (default "~/.oci/config")
--oos-config-profile string Profile name inside the oci config file (default "Default")
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
@@ -20686,6 +21236,7 @@ Backend-only flags (these can be set in the config file also).
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
+ --pcloud-client-credentials Use client credentials OAuth flow
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-description string Description of the remote
@@ -20696,26 +21247,25 @@ Backend-only flags (these can be set in the config file also).
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
- --pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
- --pikpak-client-id string OAuth Client Id
- --pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
+ --pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
+ --pikpak-no-media-link Use original file links instead of media links
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
- --pikpak-token string OAuth Access Token as a JSON blob
- --pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
+ --pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
--pixeldrain-description string Description of the remote
--pixeldrain-root-folder-id string Root of the filesystem to use (default "me")
--premiumizeme-auth-url string Auth server URL
+ --premiumizeme-client-credentials Use client credentials OAuth flow
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
--premiumizeme-description string Description of the remote
@@ -20733,6 +21283,7 @@ Backend-only flags (these can be set in the config file also).
--protondrive-replace-existing-draft Create a new revision when filename conflict is detected
--protondrive-username string The username of your proton account
--putio-auth-url string Auth server URL
+ --putio-client-credentials Use client credentials OAuth flow
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
--putio-description string Description of the remote
@@ -20766,6 +21317,7 @@ Backend-only flags (these can be set in the config file also).
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
--s3-description string Description of the remote
+ --s3-directory-bucket Set to use AWS Directory Buckets
--s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
@@ -20847,6 +21399,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH shell commands
--sftp-port int SSH port number (default 22)
+ --sftp-pubkey string SSH public certificate for public certificate based authentication
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
--sftp-set-env SpaceSepList Environment variables to pass to sftp and commands
@@ -20862,6 +21415,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-user string SSH username (default "$USER")
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
+ --sharefile-client-credentials Use client credentials OAuth flow
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
--sharefile-description string Description of the remote
@@ -20951,6 +21505,7 @@ Backend-only flags (these can be set in the config file also).
--uptobox-description string Description of the remote
--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
+ --webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-description string Description of the remote
@@ -20966,6 +21521,7 @@ Backend-only flags (these can be set in the config file also).
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-auth-url string Auth server URL
+ --yandex-client-credentials Use client credentials OAuth flow
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-description string Description of the remote
@@ -20975,6 +21531,7 @@ Backend-only flags (these can be set in the config file also).
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
+ --zoho-client-credentials Use client credentials OAuth flow
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-description string Description of the remote
@@ -20982,6 +21539,7 @@ Backend-only flags (these can be set in the config file also).
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
+ --zoho-upload-cutoff SizeSuffix Cutoff for switching to large file upload api (>= 10 MiB) (default 10Mi)
Docker Volume Plugin
@@ -21466,6 +22024,17 @@ Also you can use curl to check the plugin socket connectivity:
though this is rarely needed.
+If the plugin fails to work properly, and only as a last resort after
+you have tried diagnosing it with the above methods, you can try
+clearing the state of the plugin. Note that all existing rclone docker
+volumes will probably have to be recreated. This might be needed
+because a reinstall doesn't clean up existing state files (they are
+kept to allow for easy restoration, as stated above).
+
+ docker plugin disable rclone # disable the plugin to ensure no interference
+ sudo rm /var/lib/docker-plugins/rclone/cache/docker-plugin.state # removing the plugin state
+ docker plugin enable rclone # re-enable the plugin afterward
+
Caveats
Finally I'd like to mention a caveat with updating volume settings.
@@ -22494,12 +23063,14 @@ Note that while concurrent bisync runs are allowed, be very cautious
that there is no overlap in the trees being synched between concurrent
runs, lest there be replicated files, deleted files and general mayhem.
-Return codes
+Exit codes
-rclone bisync returns the following codes to calling program: - 0 on a
-successful run, - 1 for a non-critical failing run (a rerun may be
-successful), - 2 for a critically aborted run (requires a --resync to
-recover).
+rclone bisync returns the following codes to the calling program:
+
+- 0 on a successful run
+- 1 for a non-critical failing run (a rerun may be successful)
+- 2 on syntax or usage error
+- 7 for a critically aborted run (requires a --resync to recover)
+
+See also the section about exit codes in main docs.
Graceful Shutdown
@@ -23903,6 +24474,7 @@ The S3 backend can be used with a number of different providers:
- Linode Object Storage
- Magalu Object Storage
- Minio
+- Outscale
- Petabox
- Qiniu Cloud Object Storage (Kodo)
- RackCorp Object Storage
@@ -23910,6 +24482,7 @@ The S3 backend can be used with a number of different providers:
- Scaleway
- Seagate Lyve Cloud
- SeaweedFS
+- Selectel
- StackPath
- Storj
- Synology C2 Object Storage
@@ -24117,7 +24690,7 @@ This will guide you through an interactive setup process.
\ "STANDARD_IA"
5 / One Zone Infrequent Access storage class
\ "ONEZONE_IA"
- 6 / Glacier storage class
+ 6 / Glacier Flexible Retrieval storage class
\ "GLACIER"
7 / Glacier Deep Archive storage class
\ "DEEP_ARCHIVE"
@@ -24275,6 +24848,116 @@ details.
Setting this flag increases the chance for undetected upload failures.
+Increasing performance
+
+Using server-side copy
+
+If you are copying objects between S3 buckets in the same region, you
+should use server-side copy. This is much faster than downloading and
+re-uploading the objects, as no data is transferred.
+
+For rclone to use server-side copy, you must use the same remote for the
+source and destination.
+
+ rclone copy s3:source-bucket s3:destination-bucket
+
+When using server-side copy, the performance is limited by the rate at
+which rclone issues API requests to S3. See below for how to increase
+the number of API requests rclone makes.
+
+Increasing the rate of API requests
+
+You can increase the rate of API requests to S3 by increasing the
+parallelism using --transfers and --checkers options.
+
+Rclone uses very conservative defaults for these settings, as not all
+providers support high rates of requests. Depending on your provider,
+you can increase the number of transfers and checkers significantly.
+
+For example, with AWS S3, you can increase the number of checkers to
+values like 200. If you are doing a server-side copy, you can also
+increase the number of transfers to 200.
+
+ rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
+
+You will need to experiment with these values to find the optimal
+settings for your setup.
+
+Data integrity
+
+Rclone does its best to verify every part of an upload or download to
+the S3 provider using various hashes.
+
+Every HTTP transaction to/from the provider has an X-Amz-Content-Sha256
+or a Content-Md5 header to guard against corruption of the HTTP body.
+The HTTP headers are protected by the signature passed in the
+Authorization header.
+
+All communication with the provider is done over HTTPS for encryption
+and additional error protection.
+
+Single part uploads
+
+- Rclone uploads single part uploads with a Content-Md5 using the MD5
+ hash read from the source. The provider checks this is correct on
+ receipt of the data.
+
+- Rclone then does a HEAD request (disable with --s3-no-head) to read
+  back the ETag, which is the MD5 of the file, and checks that against
+  what it sent.
+
+Note that if the source does not have an MD5 then the single part
+uploads will not have hash protection. In this case it is recommended to
+use --s3-upload-cutoff 0 so all files are uploaded as multipart uploads.
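+
+For example, to force multipart uploads (and hence hash protection) for
+all files from a source without MD5s (remote names are illustrative):
+
+    rclone copy --s3-upload-cutoff 0 source: s3:bucket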
+
+Multipart uploads
+
+For files above --s3-upload-cutoff rclone splits the file into multiple
+parts for upload.
+
+- Each part is protected with both an X-Amz-Content-Sha256 and a
+ Content-Md5
+
+When rclone has finished the upload of all the parts it then completes
+the upload by sending:
+
+- The MD5 hash of each part
+- The number of parts
+- This info is all protected with an X-Amz-Content-Sha256
+
+The provider checks the MD5 for all the parts it has received against
+what rclone sends and if it is good it returns OK.
+
+Rclone then does a HEAD request (disable with --s3-no-head) and checks
+the ETag is what it expects (in this case it should be the MD5 sum of
+all the MD5 sums of all the parts with the number of parts on the end).
+
+If the source has an MD5 sum then rclone will attach the
+X-Amz-Meta-Md5chksum metadata with it, as the ETag of a multipart
+upload can't easily be checked against the file because the chunk size
+must be known in order to calculate it.
+
+Downloads
+
+Rclone checks the MD5 hash of the data downloaded against either the
+ETag or the X-Amz-Meta-Md5chksum metadata (if present) which rclone
+uploads with multipart uploads.
+
+Further checking
+
+At each stage rclone and the provider are sending and checking hashes of
+everything. Rclone deliberately HEADs each object after upload to check
+it arrived safely for extra security. (You can disable this with
+--s3-no-head).
+
+If you require further assurance that your data is intact you can use
+rclone check to check the hashes locally vs the remote.
+
+And if you are feeling ultimately paranoid, use rclone check --download,
+which will download the files and check them against the local copies.
+(Note that this doesn't use disk to do this - it streams the files in
+memory.)
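+
+For example (paths and bucket are illustrative):
+
+    rclone check /local/files s3:bucket/files
+    rclone check --download /local/files s3:bucket/files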
+
Versions
When bucket versioning is enabled (this can be done with rclone with the
@@ -24555,8 +25238,8 @@ Here are the Standard options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease,
-Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj,
-Synology, TencentCOS, Wasabi, Qiniu and others).
+Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel,
+StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
--s3-provider
@@ -24609,6 +25292,8 @@ Properties:
- Minio Object Storage
- "Netease"
- Netease Object Storage (NOS)
+ - "Outscale"
+ - OUTSCALE Object Storage (OOS)
- "Petabox"
- Petabox Object Storage
- "RackCorp"
@@ -24619,6 +25304,8 @@ Properties:
- Scaleway Object Storage
- "SeaweedFS"
- SeaweedFS S3
+ - "Selectel"
+ - Selectel Object Storage
- "StackPath"
- StackPath Object Storage
- "Storj"
@@ -24872,7 +25559,7 @@ Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
-- Provider: !Storj,Synology,Cloudflare
+- Provider: !Storj,Selectel,Synology,Cloudflare
- Type: string
- Required: false
- Examples:
@@ -24984,7 +25671,7 @@ Properties:
- "ONEZONE_IA"
- One Zone Infrequent Access storage class
- "GLACIER"
- - Glacier storage class
+ - Glacier Flexible Retrieval storage class
- "DEEP_ARCHIVE"
- Glacier Deep Archive storage class
- "INTELLIGENT_TIERING"
@@ -24998,8 +25685,8 @@ Here are the Advanced options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease,
-Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj,
-Synology, TencentCOS, Wasabi, Qiniu and others).
+Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel,
+StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
--s3-bucket-acl
@@ -25823,6 +26510,41 @@ Properties:
- Type: Tristate
- Default: unset
+--s3-directory-bucket
+
+Set to use AWS Directory Buckets
+
+If you are using an AWS Directory Bucket then set this flag.
+
+This will ensure no Content-Md5 headers are sent and ensure ETag headers
+are not interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all
+objects whether single or multipart uploaded.
+
+This also sets no_check_bucket = true.
+
+Note that Directory Buckets do not support:
+
+- Versioning
+- Content-Encoding: gzip
+
+Rclone limitations with Directory Buckets:
+
+- rclone does not support creating Directory Buckets with rclone mkdir
+- ... or removing them with rclone rmdir yet
+- Directory Buckets do not appear when doing rclone lsf at the top
+ level.
+- Rclone can't remove auto created directories yet. In theory this
+ should work with directory_markers = true but it doesn't.
+- Directories don't seem to appear in recursive (ListR) listings.
+
+Properties:
+
+- Config: directory_bucket
+- Env Var: RCLONE_S3_DIRECTORY_BUCKET
+- Provider: AWS
+- Type: bool
+- Default: false
+
--s3-sdk-log-mode
Set to debug the SDK
@@ -26154,6 +26876,20 @@ AWS S3
This is the provider used as main example and described in the
configuration section above.
+AWS Directory Buckets
+
+From rclone v1.69 Directory Buckets are supported.
+
+You will need to set the directory_bucket = true config parameter or
+use --s3-directory-bucket.
+
+Note that rclone cannot yet:
+
+- Create directory buckets
+- List directory buckets
+
+See the --s3-directory-bucket flag for more info.
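+
+A minimal config sketch might look like this (the remote name, region
+and zonal endpoint are illustrative assumptions - Directory Buckets use
+per-Availability-Zone endpoints):
+
+    [dirbuckets]
+    type = s3
+    provider = AWS
+    region = us-east-1
+    endpoint = https://s3express-use1-az4.us-east-1.amazonaws.com
+    directory_bucket = true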
+
AWS Snowball Edge
AWS Snowball is a hardware appliance used for transferring bulk data
@@ -26333,6 +27069,9 @@ Content-Encoding: gzip by default which is a deviation from what AWS
does. If this is causing a problem then upload the files with
--header-upload "Cache-Control: no-transform"
+A consequence of this is that Content-Encoding: gzip will never appear
+in the metadata on Cloudflare.
+
Dreamhost
Dreamhost DreamObjects is an object storage system based on CEPH.
@@ -27046,6 +27785,146 @@ So once set up, for example, to copy files into a bucket
rclone copy /path/to/files minio:bucket
+Outscale
+
+OUTSCALE Object Storage (OOS) is an enterprise-grade, S3-compatible
+storage service provided by OUTSCALE, a brand of Dassault Systèmes. For
+more information about OOS, see the official documentation.
+
+Here is an example of an OOS configuration that you can paste into your
+rclone configuration file:
+
+ [outscale]
+ type = s3
+ provider = Outscale
+ env_auth = false
+ access_key_id = ABCDEFGHIJ0123456789
+ secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+ region = eu-west-2
+ endpoint = oos.eu-west-2.outscale.com
+ acl = private
+
+You can also run rclone config to go through the interactive setup
+process:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+ Enter name for new remote.
+ name> outscale
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others
+ \ (s3)
+ [snip]
+ Storage> outscale
+
+ Option provider.
+ Choose your S3 provider.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ [snip]
+ XX / OUTSCALE Object Storage (OOS)
+ \ (Outscale)
+ [snip]
+ provider> Outscale
+
+ Option env_auth.
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ Only applies if access_key_id and secret_access_key is blank.
+ Choose a number from below, or type in your own boolean value (true or false).
+ Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+ env_auth>
+
+ Option access_key_id.
+ AWS Access Key ID.
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ access_key_id> ABCDEFGHIJ0123456789
+
+ Option secret_access_key.
+ AWS Secret Access Key (password).
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+
+ Option region.
+ Region where your bucket will be created and your data stored.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / Paris, France
+ \ (eu-west-2)
+ 2 / New Jersey, USA
+ \ (us-east-2)
+ 3 / California, USA
+ \ (us-west-1)
+ 4 / SecNumCloud, Paris, France
+ \ (cloudgouv-eu-west-1)
+ 5 / Tokyo, Japan
+ \ (ap-northeast-1)
+ region> 1
+
+ Option endpoint.
+ Endpoint for S3 API.
+ Required when using an S3 clone.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / Outscale EU West 2 (Paris)
+ \ (oos.eu-west-2.outscale.com)
+ 2 / Outscale US east 2 (New Jersey)
+ \ (oos.us-east-2.outscale.com)
+    3 / Outscale US West 1 (California)
+ \ (oos.us-west-1.outscale.com)
+ 4 / Outscale SecNumCloud (Paris)
+ \ (oos.cloudgouv-eu-west-1.outscale.com)
+ 5 / Outscale AP Northeast 1 (Japan)
+ \ (oos.ap-northeast-1.outscale.com)
+ endpoint> 1
+
+ Option acl.
+ Canned ACL used when creating buckets and storing or copying objects.
+ This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+ For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+ Note that this ACL is applied when server-side copying objects as S3
+ doesn't copy the ACL from the source but rather writes a fresh one.
+ If the acl is an empty string then no X-Amz-Acl: header is added and
+ the default (private) will be used.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+ [snip]
+ acl> 1
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: s3
+ - provider: Outscale
+ - access_key_id: ABCDEFGHIJ0123456789
+ - secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+ - endpoint: oos.eu-west-2.outscale.com
+ Keep this "outscale" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
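+Once configured you can then use rclone like this, e.g. to copy files
+into a bucket (the bucket name is a placeholder):
+
+    rclone mkdir outscale:bucket
+    rclone copy /path/to/files outscale:bucket
+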
Qiniu Cloud Object Storage (Kodo)
Qiniu Cloud Object Storage (Kodo), a completely independent-researched
@@ -27303,13 +28182,13 @@ rclone like this:
chunk_size = 5M
copy_cutoff = 5M
-C14 Cold Storage is the low-cost S3 Glacier alternative from Scaleway
+Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway
and it works the same way as on S3 by accepting the "GLACIER"
storage_class. So you can configure your remote with the
-storage_class = GLACIER option to upload directly to C14. Don't forget
-that in this state you can't read files back after, you will need to
-restore them to "STANDARD" storage_class first before being able to read
-them (see "restore" section above)
+storage_class = GLACIER option to upload directly to Scaleway Glacier.
+Don't forget that in this state you can't read files back; you will
+need to restore them to the "STANDARD" storage_class first before being
+able to read them (see the "restore" section above).
Seagate Lyve Cloud
@@ -27479,6 +28358,119 @@ So once set up, for example to copy files into a bucket
rclone copy /path/to/files seaweedfs_s3:foo
+Selectel
+
+Selectel Cloud Storage is an S3 compatible storage system which features
+triple redundancy storage, automatic scaling, high availability and a
+comprehensive IAM system.
+
+Selectel have a section on their website for configuring rclone which
+shows how to make the right API keys.
+
+From rclone v1.69 Selectel is a supported provider - please choose the
+Selectel provider type.
+
+Note that you should use "vHosted" access for the buckets (which is the
+recommended default), not "path style".
+
+You can use rclone config to make a new provider like this
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+ Enter name for new remote.
+ name> selectel
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ...
+ \ (s3)
+ [snip]
+ Storage> s3
+
+ Option provider.
+ Choose your S3 provider.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ [snip]
+ XX / Selectel Object Storage
+ \ (Selectel)
+ [snip]
+ provider> Selectel
+
+ Option env_auth.
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ Only applies if access_key_id and secret_access_key is blank.
+ Choose a number from below, or type in your own boolean value (true or false).
+ Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+ env_auth> 1
+
+ Option access_key_id.
+ AWS Access Key ID.
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ access_key_id> ACCESS_KEY
+
+ Option secret_access_key.
+ AWS Secret Access Key (password).
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ secret_access_key> SECRET_ACCESS_KEY
+
+ Option region.
+    Region where your data is stored.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / St. Petersburg
+ \ (ru-1)
+ region> 1
+
+ Option endpoint.
+ Endpoint for Selectel Object Storage.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / Saint Petersburg
+ \ (s3.ru-1.storage.selcloud.ru)
+ endpoint> 1
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: s3
+ - provider: Selectel
+ - access_key_id: ACCESS_KEY
+ - secret_access_key: SECRET_ACCESS_KEY
+ - region: ru-1
+ - endpoint: s3.ru-1.storage.selcloud.ru
+ Keep this "selectel" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+And your config should end up looking like this:
+
+ [selectel]
+ type = s3
+ provider = Selectel
+ access_key_id = ACCESS_KEY
+ secret_access_key = SECRET_ACCESS_KEY
+ region = ru-1
+ endpoint = s3.ru-1.storage.selcloud.ru
+
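+Once configured you can then use rclone like this (the bucket name is a
+placeholder):
+
+    rclone mkdir selectel:bucket
+    rclone copy /path/to/files selectel:bucket
+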
Wasabi
Wasabi is a cloud-based object storage service for a broad range of
@@ -29697,6 +30689,7 @@ This will dump something like this showing the lifecycle rules.
{
"daysFromHidingToDeleting": 1,
"daysFromUploadingToHiding": null,
+ "daysFromStartingToCancelingUnfinishedLargeFiles": null,
"fileNamePrefix": ""
}
]
@@ -29727,6 +30720,8 @@ Options:
- "daysFromHidingToDeleting": After a file has been hidden for this
many days it is deleted. 0 is off.
+- "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any
+ unfinished large file versions after this many days
- "daysFromUploadingToHiding": This many days after uploading a file
is hidden
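+
+For example, to cancel any unfinished large files after 7 days (the
+bucket name is a placeholder):
+
+    rclone backend lifecycle b2:bucket -o daysFromStartingToCancelingUnfinishedLargeFiles=7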
@@ -29831,7 +30826,7 @@ This will guide you through an interactive setup process:
y) Yes
n) No
y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+ If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXXXXXXXXXXXXXXXXXXXXX
Log in and authorize rclone for access
Waiting for code...
Got code
@@ -30141,6 +31136,20 @@ Properties:
- Type: string
- Required: false
+--box-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_BOX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--box-root-folder-id
Fill in for rclone to use a non root folder as its starting point.
@@ -31480,6 +32489,225 @@ Properties:
- Type: string
- Required: false
+Cloudinary
+
+This is a backend for the Cloudinary platform
+
+About Cloudinary
+
+Cloudinary is an image and video API platform, trusted by 1.5 million
+developers and 10,000 enterprise and hyper-growth companies as a
+critical part of their tech stack for delivering visually engaging
+experiences.
+
+Accounts & Pricing
+
+To use this backend, you need to create a free account on Cloudinary.
+The free plan comes with generous usage limits; as your requirements
+grow, you can upgrade to a plan that best fits your needs. See the
+pricing details.
+
+Securing Your Credentials
+
+Please refer to the docs
+
+Configuration
+
+Here is an example of making a Cloudinary configuration.
+
+First, create a cloudinary.com account and choose a plan.
+
+You will need to log in and get the API Key and API Secret for your
+account from the developer section.
+
+Now run
+
+rclone config
+
+Follow the interactive setup process:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+ Enter the name for the new remote.
+ name> cloudinary-media-library
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ XX / cloudinary.com
+ \ (cloudinary)
+ [snip]
+ Storage> cloudinary
+
+ Option cloud_name.
+ You can find your cloudinary.com cloud_name in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+ Enter a value.
+ cloud_name> ****************************
+
+ Option api_key.
+ You can find your cloudinary.com api key in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+ Enter a value.
+ api_key> ****************************
+
+ Option api_secret.
+ You can find your cloudinary.com api secret in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+ This value must be a single character, one of the following: y, g.
+ y/g> y
+ Enter a value.
+ api_secret> ****************************
+
+ Option upload_prefix.
+ [Upload prefix](https://cloudinary.com/documentation/cloudinary_sdks#configuration_parameters) to specify alternative data center
+ Enter a value.
+ upload_prefix>
+
+ Option upload_preset.
+ [Upload presets](https://cloudinary.com/documentation/upload_presets) can be defined for different upload profiles
+ Enter a value.
+ upload_preset>
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: cloudinary
+ - api_key: ****************************
+ - api_secret: ****************************
+ - cloud_name: ****************************
+ - upload_prefix:
+ - upload_preset:
+
+ Keep this "cloudinary-media-library" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+List directories in the top level of your Media Library
+
+rclone lsd cloudinary-media-library:
+
+Make a new directory.
+
+rclone mkdir cloudinary-media-library:directory
+
+List the contents of a directory.
+
+rclone ls cloudinary-media-library:directory
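+
+Sync /home/local/directory to the remote, deleting any excess files
+(the paths are placeholders).
+
+rclone sync --interactive /home/local/directory cloudinary-media-library:directory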
+
+Modified time and hashes
+
+Cloudinary automatically stores an MD5 hash and a timestamp for every
+successful Put; both are read-only.
+
+Standard options
+
+Here are the Standard options specific to cloudinary (Cloudinary).
+
+--cloudinary-cloud-name
+
+Cloudinary Environment Name
+
+Properties:
+
+- Config: cloud_name
+- Env Var: RCLONE_CLOUDINARY_CLOUD_NAME
+- Type: string
+- Required: true
+
+--cloudinary-api-key
+
+Cloudinary API Key
+
+Properties:
+
+- Config: api_key
+- Env Var: RCLONE_CLOUDINARY_API_KEY
+- Type: string
+- Required: true
+
+--cloudinary-api-secret
+
+Cloudinary API Secret
+
+Properties:
+
+- Config: api_secret
+- Env Var: RCLONE_CLOUDINARY_API_SECRET
+- Type: string
+- Required: true
+
+--cloudinary-upload-prefix
+
+Specify the API endpoint for environments out of the US
+
+Properties:
+
+- Config: upload_prefix
+- Env Var: RCLONE_CLOUDINARY_UPLOAD_PREFIX
+- Type: string
+- Required: false
+
+--cloudinary-upload-preset
+
+Upload Preset to select asset manipulation on upload
+
+Properties:
+
+- Config: upload_preset
+- Env Var: RCLONE_CLOUDINARY_UPLOAD_PRESET
+- Type: string
+- Required: false
+
+Advanced options
+
+Here are the Advanced options specific to cloudinary (Cloudinary).
+
+--cloudinary-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_CLOUDINARY_ENCODING
+- Type: Encoding
+- Default:
+ Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
+
+--cloudinary-eventually-consistent-delay
+
+Wait N seconds for eventual consistency of the databases that support
+the backend operation
+
+Properties:
+
+- Config: eventually_consistent_delay
+- Env Var: RCLONE_CLOUDINARY_EVENTUALLY_CONSISTENT_DELAY
+- Type: Duration
+- Default: 0s
+
+--cloudinary-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_CLOUDINARY_DESCRIPTION
+- Type: string
+- Required: false
+
Citrix ShareFile
Citrix ShareFile is a secure file sharing and transfer service aimed as
@@ -31720,6 +32948,20 @@ Properties:
- Type: string
- Required: false
+--sharefile-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_SHAREFILE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--sharefile-upload-cutoff
Cutoff for switching to multipart upload.
@@ -33188,6 +34430,20 @@ Properties:
- Type: string
- Required: false
+--dropbox-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_DROPBOX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--dropbox-chunk-size
Upload chunk size (< 150Mi).
@@ -34319,12 +35575,11 @@ Properties:
Socks 5 proxy host.
- Supports the format user:pass@host:port, user@host:port, host:port.
-
- Example:
-
- myUser:myPass@localhost:9005
-
+Supports the format user:pass@host:port, user@host:port, host:port.
+
+Example:
+
+ myUser:myPass@localhost:9005
Properties:
@@ -34333,6 +35588,27 @@ Properties:
- Type: string
- Required: false
+--ftp-no-check-upload
+
+Don't check the upload is OK
+
+Normally rclone will try to check the upload exists after it has
+uploaded a file to make sure the size and modification time are as
+expected.
+
+This flag stops rclone doing these checks. This enables uploading to
+folders which are write only.
+
+You will likely need to use the --inplace flag also if uploading to a
+write only folder.
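+
+For example (the host and paths are placeholders):
+
+    rclone copy --ftp-no-check-upload --inplace /path/to/file ftp:write-only-dir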
+
+Properties:
+
+- Config: no_check_upload
+- Env Var: RCLONE_FTP_NO_CHECK_UPLOAD
+- Type: bool
+- Default: false
+
--ftp-encoding
The encoding for the backend.
@@ -34856,6 +36132,64 @@ stuff the contents of the credentials file into the rclone config file,
you can set service_account_credentials with the actual contents of the
file instead, or set the equivalent environment variable.
+Service Account Authentication with Access Tokens
+
+Another option for service account authentication is to use access
+tokens via gcloud impersonate-service-account. Access tokens improve
+security by avoiding the use of a JSON key file, which could be leaked.
+They also bypass the OAuth login flow, which is simpler on remote VMs
+that lack a web browser.
+
+If you already have a working service account, skip to step 3.
+
+1. Create a service account using
+
+ gcloud iam service-accounts create gcs-read-only
+
+You can re-use an existing service account as well (like the one created
+above)
+
+2. Attach a Viewer (read-only) or User (read-write) role to the service account
+
+ $ PROJECT_ID=my-project
+ $ gcloud --verbose iam service-accounts add-iam-policy-binding \
+ gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
+ --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
+ --role=roles/storage.objectViewer
+
+Use the Google Cloud console to identify a limited role. Some relevant
+pre-defined roles:
+
+- roles/storage.objectUser -- read-write access but no admin
+ privileges
+- roles/storage.objectViewer -- read-only access to objects
+- roles/storage.admin -- create buckets & administrative roles
+
+3. Get a temporary access key for the service account
+
+ $ gcloud auth application-default print-access-token \
+ --impersonate-service-account \
+ gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com
+
+ ya29.c.c0ASRK0GbAFEewXD [truncated]
+
+4. Update access_token setting
+
+Hit CTRL-C when you see "waiting for code". This will save the config
+without completing the OAuth flow.
+
+ rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx
+
+5. Run rclone as usual
+
+ rclone ls dev-gcs:${MY_BUCKET}/
+
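+Alternatively, you can supply the token via the environment instead of
+updating the config (a sketch; the token value is a truncated
+placeholder):
+
+    export RCLONE_GCS_ACCESS_TOKEN=ya29.c.c0Axxxx
+    rclone ls dev-gcs:${MY_BUCKET}/
+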
+More Info on Service Accounts
+
+- Official GCS Docs
+- Guide on Service Accounts using Key Files (less secure, but similar
+ concepts)
+
Anonymous Access
For downloads of objects that permit public access you can configure
@@ -35285,6 +36619,34 @@ Properties:
- Type: string
- Required: false
+--gcs-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_GCS_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
+--gcs-access-token
+
+Short-lived access token.
+
+Leave blank normally. Needed only if you want to use a short-lived
+access token instead of interactive login.
+
+Properties:
+
+- Config: access_token
+- Env Var: RCLONE_GCS_ACCESS_TOKEN
+- Type: string
+- Required: false
+
--gcs-directory-markers
Upload an empty object with a trailing slash when a new directory is
@@ -35930,6 +37292,8 @@ represent the currently available conversions.
json application/vnd.google-apps.script+json JSON Text Format for
Google Apps scripts
+ md text/markdown Markdown Text Format
+
odp application/vnd.oasis.opendocument.presentation Openoffice Presentation
ods application/vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet
@@ -36109,6 +37473,20 @@ Properties:
- Type: string
- Required: false
+--drive-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_DRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--drive-root-folder-id
ID of the root folder. Leave blank normally.
@@ -37130,6 +38508,41 @@ The result is a JSON array of matches, for example:
}
]
+rescue
+
+Rescue or delete any orphaned files
+
+ rclone backend rescue remote: [options] [+]
+
+This command rescues or deletes any orphaned files or directories.
+
+Sometimes files can get orphaned in Google Drive. This means that they
+are no longer in any folder in Google Drive.
+
+This command finds those files and either rescues them to a directory
+you specify or deletes them.
+
+Usage:
+
+This can be used in 3 ways.
+
+First, list all orphaned files
+
+ rclone backend rescue drive:
+
+Second, rescue all orphaned files to the directory indicated
+
+ rclone backend rescue drive: "relative/path/to/rescue/directory"
+
+e.g. to rescue all orphans to a directory called "Orphans" in the top
+level
+
+ rclone backend rescue drive: Orphans
+
+Third, delete all orphaned files, sending them to the trash
+
+ rclone backend rescue drive: -o delete
+
Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited
@@ -37263,9 +38676,9 @@ Here is how to create your own Google Drive client ID for rclone:
9. It will show you a client ID and client secret. Make a note of
these.
- (If you selected "External" at Step 5 continue to Step 9. If you
+ (If you selected "External" at Step 5 continue to Step 10. If you
chose "Internal" you don't need to publish and can skip straight to
- Step 10 but your destination drive must be part of the same Google
+ Step 11 but your destination drive must be part of the same Google
Workspace.)
10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and
@@ -37595,6 +39008,20 @@ Properties:
- Type: string
- Required: false
+--gphotos-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_GPHOTOS_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--gphotos-read-size
Set to read the size of media items.
@@ -37647,6 +39074,39 @@ Properties:
- Type: bool
- Default: false
+--gphotos-proxy
+
+Use the gphotosdl proxy for downloading the full resolution images
+
+The Google API will deliver images and video which aren't full
+resolution, and/or have EXIF data missing.
+
+However if you use the gphotosdl proxy then you can download original,
+unchanged images.
+
+This runs a headless browser in the background.
+
+Download the software from gphotosdl
+
+First run with
+
+ gphotosdl -login
+
+Then once you have logged into google photos close the browser window
+and run
+
+ gphotosdl
+
+Then supply the parameter --gphotos-proxy "http://localhost:8282" to
+make rclone use the proxy.
+
+Properties:
+
+- Config: proxy
+- Env Var: RCLONE_GPHOTOS_PROXY
+- Type: string
+- Required: false
+
--gphotos-encoding
The encoding for the backend.
@@ -37783,12 +39243,18 @@ relying on "Google Photos" as a backup of your photos. You will not be
able to use rclone to redownload original images. You could use 'google
takeout' to recover the original photos as a last resort
+NB you can use the --gphotos-proxy flag to use a headless browser to
+download images in full resolution.
+
Downloading Videos
When videos are downloaded they are downloaded in a really compressed
version of the video compared to downloading it via the Google Photos
web interface. This is covered by bug #113672044.
+NB you can use the --gphotos-proxy flag to use a headless browser to
+download videos in full resolution.
+
Duplicates
If a file name is duplicated in a directory then rclone will add the
@@ -38713,6 +40179,20 @@ Properties:
- Type: string
- Required: false
+--hidrive-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_HIDRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--hidrive-scope-role
User-level that rclone should use when requesting access from HiDrive.
@@ -39437,6 +40917,173 @@ Here are the possible system metadata items for the imagekit backend.
See the metadata docs for more info.
+iCloud Drive
+
+Configuration
+
+The initial setup for an iCloud Drive backend involves getting a trust
+token/session. This can be done by simply using the regular iCloud
+password, and accepting the code prompt on another iCloud connected
+device.
+
+IMPORTANT: At the moment an app-specific password won't be accepted.
+Only use your regular password and 2FA.
+
+rclone config walks you through the token creation. The trust token is
+valid for 30 days, after which you will have to reauthenticate with
+rclone reconnect or rclone config.
+
+Here is an example of how to make a remote called iclouddrive. First
+run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> iclouddrive
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ XX / iCloud Drive
+ \ (iclouddrive)
+ [snip]
+ Storage> iclouddrive
+ Option apple_id.
+ Apple ID.
+ Enter a value.
+ apple_id> APPLEID
+ Option password.
+ Password.
+ Choose an alternative below.
+ y) Yes, type in my own password
+ g) Generate random password
+ y/g> y
+ Enter the password:
+ password:
+ Confirm the password:
+ password:
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+ Option config_2fa.
+ Two-factor authentication: please enter your 2FA code
+ Enter a value.
+ config_2fa> 2FACODE
+ Remote config
+ --------------------
+    [iclouddrive]
+ - type: iclouddrive
+ - apple_id: APPLEID
+ - password: *** ENCRYPTED ***
+ - cookies: ****************************
+ - trust_token: ****************************
+ --------------------
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
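+Once configured you can use the remote like any other, for example (the
+paths are placeholders):
+
+    rclone lsd iclouddrive:
+    rclone copy /path/to/files iclouddrive:backup
+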
+Advanced Data Protection
+
+ADP is currently unsupported and needs to be disabled.
+
+Standard options
+
+Here are the Standard options specific to iclouddrive (iCloud Drive).
+
+--iclouddrive-apple-id
+
+Apple ID.
+
+Properties:
+
+- Config: apple_id
+- Env Var: RCLONE_ICLOUDDRIVE_APPLE_ID
+- Type: string
+- Required: true
+
+--iclouddrive-password
+
+Password.
+
+NB Input to this must be obscured - see rclone obscure.
+
+Properties:
+
+- Config: password
+- Env Var: RCLONE_ICLOUDDRIVE_PASSWORD
+- Type: string
+- Required: true
+
+--iclouddrive-trust-token
+
+Trust token (internal use)
+
+Properties:
+
+- Config: trust_token
+- Env Var: RCLONE_ICLOUDDRIVE_TRUST_TOKEN
+- Type: string
+- Required: false
+
+--iclouddrive-cookies
+
+Cookies (internal use only)
+
+Properties:
+
+- Config: cookies
+- Env Var: RCLONE_ICLOUDDRIVE_COOKIES
+- Type: string
+- Required: false
+
+Advanced options
+
+Here are the Advanced options specific to iclouddrive (iCloud Drive).
+
+--iclouddrive-client-id
+
+Client id
+
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_ICLOUDDRIVE_CLIENT_ID
+- Type: string
+- Default:
+ "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d"
+
+--iclouddrive-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_ICLOUDDRIVE_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+
+--iclouddrive-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_ICLOUDDRIVE_DESCRIPTION
+- Type: string
+- Required: false
+
Internet Archive
The Internet Archive backend utilizes Items on archive.org
@@ -40158,6 +41805,20 @@ Properties:
- Type: string
- Required: false
+--jottacloud-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_JOTTACLOUD_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--jottacloud-md5-memory-limit
Files bigger than this will be cached on disk to calculate the MD5 if
@@ -41041,6 +42702,20 @@ Properties:
- Type: string
- Required: false
+--mailru-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_MAILRU_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--mailru-speedup-file-patterns
Comma separated list of file name patterns eligible for speedup (put by
@@ -42070,6 +43745,13 @@ identity, the user-assigned identity will be used by default.
If the resource has multiple user-assigned identities you will need to
unset env_auth and set use_msi instead. See the use_msi section.
+If you are operating in disconnected clouds, or private clouds such as
+Azure Stack you may want to set disable_instance_discovery = true. This
+determines whether rclone requests Microsoft Entra instance metadata
+from https://login.microsoft.com/ before authenticating. Setting this to
+true will skip this request, making you responsible for ensuring the
+configured authority is valid and trustworthy.
+
Env Auth: 3. Azure CLI credentials (as used by the az tool)
Credentials created with the az tool can be picked up using env_auth.
@@ -42191,6 +43873,15 @@ msi_client_id, or msi_mi_res_id parameters.
If none of msi_object_id, msi_client_id, or msi_mi_res_id is set, this
is is equivalent to using env_auth.
+Azure CLI tool az
+
+Set to use the Azure CLI tool az as the sole means of authentication.
+
+Setting this can be useful if you wish to use the az CLI on a host with
+a System Managed Identity that you do not want to use.
+
+Don't set env_auth at the same time.
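+
+A minimal config sketch using az CLI authentication (the remote and
+account names are placeholders):
+
+    [azblob]
+    type = azureblob
+    account = myaccount
+    use_az = true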
+
Anonymous
If you want to access resources with public anonymous access then set
@@ -42407,6 +44098,26 @@ Properties:
- Type: string
- Required: false
+--azureblob-disable-instance-discovery
+
+Skip requesting Microsoft Entra instance metadata
+
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+
+It determines whether rclone requests Microsoft Entra instance metadata
+from https://login.microsoft.com/ before authenticating.
+
+Setting this to true will skip this request, making you responsible for
+ensuring the configured authority is valid and trustworthy.
+
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREBLOB_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
--azureblob-use-msi
Use a managed service identity to authenticate (only works in Azure).
@@ -42481,6 +44192,24 @@ Properties:
- Type: bool
- Default: false
+--azureblob-use-az
+
+Use Azure CLI tool az for authentication
+
+Set to use the Azure CLI tool az as the sole means of authentication.
+
+Setting this can be useful if you wish to use the az CLI on a host with
+a System Managed Identity that you do not want to use.
+
+Don't set env_auth at the same time.
+
+Properties:
+
+- Config: use_az
+- Env Var: RCLONE_AZUREBLOB_USE_AZ
+- Type: bool
+- Default: false
+
--azureblob-endpoint
Endpoint for the service.
@@ -43699,6 +45428,32 @@ organization only, as shown below.
Note: If you have a special region, you may need a different host in
step 4 and 5. Here are some hints.
+Using OAuth Client Credential flow
+
+OAuth Client Credential flow will allow rclone to use permissions
+directly associated with the Azure AD Enterprise application, rather
+than adopting the context of an Azure AD user account.
+
+This flow can be enabled by following the steps below:
+
+1. Create the Enterprise App registration in the Azure AD portal and
+ obtain a Client ID and Client Secret as described above.
+2. Ensure that the application has the appropriate permissions and they
+ are assigned as Application Permissions
+3. Configure the remote, ensuring that Client ID and Client Secret are
+ entered correctly.
+4. In the Advanced Config section, enter true for client_credentials
+ and in the tenant section enter the tenant ID.
+
+When choosing the connection type, note that not all types work with
+the client credentials flow. In particular the "onedrive" option does
+not work. You can use the "sharepoint" option, or if that does not find
+the correct drive ID, type it in manually with the "driveid" option.
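+
+A resulting config might look like this (a sketch; the IDs are
+placeholders):
+
+    [onedrive]
+    type = onedrive
+    client_id = YOUR_CLIENT_ID
+    client_secret = YOUR_CLIENT_SECRET
+    client_credentials = true
+    tenant = YOUR_TENANT_ID
+    drive_id = YOUR_DRIVE_ID
+    drive_type = business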
+
+NOTE Assigning permissions directly to the application means that anyone
+with the Client ID and Client Secret can access your OneDrive files.
+Take care to safeguard these credentials.
+
Modification times and hashes
OneDrive allows modification times to be set on objects accurate to 1
@@ -43838,6 +45593,19 @@ Properties:
- "cn"
- Azure and Office 365 operated by Vnet Group in China
+--onedrive-tenant
+
+ID of the service principal's tenant. Also called its directory ID.
+
+Set this if using - Client Credential flow
+
+Properties:
+
+- Config: tenant
+- Env Var: RCLONE_ONEDRIVE_TENANT
+- Type: string
+- Required: false
+
Advanced options
Here are the Advanced options specific to onedrive (Microsoft OneDrive).
@@ -43879,6 +45647,20 @@ Properties:
- Type: string
- Required: false
+--onedrive-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_ONEDRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--onedrive-chunk-size
Chunk size to upload files with - must be multiple of 320k (327,680
@@ -45255,7 +47037,9 @@ Properties:
--oos-compartment
-Object storage compartment OCID
+Specify the compartment OCID if you need to list buckets.
+
+Listing objects works without a compartment OCID.
Properties:
@@ -45263,7 +47047,7 @@ Properties:
- Env Var: RCLONE_OOS_COMPARTMENT
- Provider: !no_auth
- Type: string
-- Required: true
+- Required: false
--oos-region
@@ -47330,6 +49114,10 @@ This will guide you through an interactive setup process:
client_id>
Pcloud App Client Secret - leave blank normally.
client_secret>
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
Remote config
Use web browser to automatically authenticate rclone with remote?
* Say Y if the machine running rclone has a web browser you can use
@@ -47357,6 +49145,10 @@ This will guide you through an interactive setup process:
See the remote setup docs for how to set it up on a machine with no
Internet browser available.
+Note that if you are using remote config with rclone authorize and your
+pCloud server is in the EU region, you will need to set the hostname in
+'Edit advanced config', otherwise you might get a token error.
+
Note that rclone runs a webserver on your local machine to collect the
token as returned from pCloud. This only runs from the moment it opens
your browser to the moment you get back the verification code. This is
@@ -47505,6 +49297,20 @@ Properties:
- Type: string
- Required: false
+--pcloud-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PCLOUD_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--pcloud-encoding
The encoding for the backend.
@@ -47695,68 +49501,32 @@ Advanced options
Here are the Advanced options specific to pikpak (PikPak).
---pikpak-client-id
+--pikpak-device-id
-OAuth Client Id.
-
-Leave blank normally.
+Device ID used for authorization.
Properties:
-- Config: client_id
-- Env Var: RCLONE_PIKPAK_CLIENT_ID
+- Config: device_id
+- Env Var: RCLONE_PIKPAK_DEVICE_ID
- Type: string
- Required: false
---pikpak-client-secret
+--pikpak-user-agent
-OAuth Client Secret.
+HTTP user agent for pikpak.
-Leave blank normally.
+Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
+Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on
+command line.
Properties:
-- Config: client_secret
-- Env Var: RCLONE_PIKPAK_CLIENT_SECRET
+- Config: user_agent
+- Env Var: RCLONE_PIKPAK_USER_AGENT
- Type: string
-- Required: false
-
---pikpak-token
-
-OAuth Access Token as a JSON blob.
-
-Properties:
-
-- Config: token
-- Env Var: RCLONE_PIKPAK_TOKEN
-- Type: string
-- Required: false
-
---pikpak-auth-url
-
-Auth server URL.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: auth_url
-- Env Var: RCLONE_PIKPAK_AUTH_URL
-- Type: string
-- Required: false
-
---pikpak-token-url
-
-Token server url.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: token_url
-- Env Var: RCLONE_PIKPAK_TOKEN_URL
-- Type: string
-- Required: false
+- Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
+ Gecko/20100101 Firefox/129.0"
--pikpak-root-folder-id
@@ -47798,6 +49568,20 @@ Properties:
- Type: bool
- Default: false
+--pikpak-no-media-link
+
+Use original file links instead of media links.
+
+This avoids issues caused by invalid media links, but may reduce
+download speeds.
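+
+For example (the remote name and paths are placeholders):
+
+    rclone copy --pikpak-no-media-link pikpak:path/to/file /local/dir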
+
+Properties:
+
+- Config: no_media_link
+- Env Var: RCLONE_PIKPAK_NO_MEDIA_LINK
+- Type: bool
+- Default: false
+
--pikpak-hash-memory-limit
Files bigger than this will be cached on disk to calculate hash if
@@ -48317,6 +50101,20 @@ Properties:
- Type: string
- Required: false
+--premiumizeme-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PREMIUMIZEME_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--premiumizeme-encoding
The encoding for the backend.
@@ -48887,6 +50685,20 @@ Properties:
- Type: string
- Required: false
+--putio-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PUTIO_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--putio-encoding
The encoding for the backend.
@@ -49820,7 +51632,8 @@ authentication process.
If you have a certificate you may use it to sign your public key,
creating a separate SSH user certificate that should be used instead of
the plain public key extracted from the private key. Then you must
-provide the path to the user certificate public key file in pubkey_file.
+provide the path to the user certificate public key file in pubkey_file
+or the content of the file in pubkey.
Note: This is not the traditional public key paired with your private
key, typically saved as /home/$USER/.ssh/id_rsa.pub. Setting this path
@@ -50149,6 +51962,19 @@ Properties:
- Type: string
- Required: false
+--sftp-pubkey
+
+SSH public certificate for public certificate based authentication. Set
+this if you have a signed certificate you want to use for
+authentication. If specified, this will override pubkey_file.
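+
+For example, a config sketch using a certificate file via pubkey_file
+(the host and paths are placeholders); set pubkey instead to supply the
+certificate content inline:
+
+    [mysftp]
+    type = sftp
+    host = example.com
+    user = sftpuser
+    key_file = ~/.ssh/id_rsa
+    pubkey_file = ~/.ssh/id_rsa-cert.pub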
+
+Properties:
+
+- Config: pubkey
+- Env Var: RCLONE_SFTP_PUBKEY
+- Type: string
+- Required: false
+
--sftp-pubkey-file
Optional path to public key file.
@@ -52876,6 +54702,28 @@ Properties:
- Type: string
- Required: false
+--webdav-auth-redirect
+
+Preserve authentication on redirect.
+
+If the server redirects rclone to a new domain when it is trying to read
+a file then normally rclone will drop the Authorization: header from the
+request.
+
+This is standard security practice to avoid sending your credentials to
+an unknown webserver.
+
+However, preserving the header is desirable in some circumstances. If
+you are getting an error like "401 Unauthorized" when rclone is
+attempting to read files from the webdav server then you can try this
+option.
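+
+For example (the remote name and paths are placeholders):
+
+    rclone copy --webdav-auth-redirect mywebdav:path/to/dir /local/dir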
+
+Properties:
+
+- Config: auth_redirect
+- Env Var: RCLONE_WEBDAV_AUTH_REDIRECT
+- Type: bool
+- Default: false
+
--webdav-description
Description of the remote.
@@ -53247,6 +55095,20 @@ Properties:
- Type: string
- Required: false
+--yandex-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_YANDEX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
--yandex-hard-delete
Delete files permanently rather than putting them into the trash.
@@ -53528,6 +55390,31 @@ Properties:
- Type: string
- Required: false
+--zoho-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_ZOHO_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
+--zoho-upload-cutoff
+
+Cutoff for switching to large file upload api (>= 10 MiB).
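+
+For example, to switch to the large file upload API for files of 50 MiB
+and above (a sketch; the remote name is a placeholder):
+
+    rclone copy --zoho-upload-cutoff 50Mi /path/to/files zoho:dir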
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_ZOHO_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 10Mi
+
--zoho-encoding
The encoding for the backend.
@@ -53766,13 +55653,13 @@ and
6 b/two
6 b/one
---links, -l
+--local-links, --links, -l
Normally rclone will ignore symlinks or junction points (which behave
like symlinks under Windows).
If you supply this flag then rclone will copy symbolic links from the
-local storage, and store them as text files, with a '.rclonelink' suffix
+local storage, and store them as text files, with a .rclonelink suffix
in the remote storage.
The text file will contain the target of the symbolic link (see
@@ -53791,7 +55678,7 @@ Copying the entire directory with '-l'
$ rclone copy -l /tmp/a/ remote:/tmp/a/
-The remote files are created with a '.rclonelink' suffix
+The remote files are created with a .rclonelink suffix
$ rclone ls remote:/tmp/a
5 file1.rclonelink
@@ -53832,6 +55719,10 @@ If you want to copy a single file with -l then you must use the
/tmp/c
└── file1 -> ./file4
+Note that --local-links just enables this feature for the local backend.
+--links and -l enable the feature for all supported backends and the
+VFS.
+
+Note that this flag is incompatible with --copy-links / -L.
Restricting filesystems with --one-file-system
@@ -53900,9 +55791,10 @@ Properties:
- Type: bool
- Default: false
---links / -l
+--local-links
-Translate symlinks to/from regular files with a '.rclonelink' extension.
+Translate symlinks to/from regular files with a '.rclonelink' extension
+for the local backend.
Properties:
@@ -54265,6 +56157,253 @@ Options:
Changelog
+v1.69.0 - 2025-01-12
+
+See commits
+
+- New backends
+    - iCloud Drive (lostb1t)
+ - Cloudinary (yuval-cloudinary)
+ - New S3 providers:
+ - Outscale (Matthias Gatto)
+ - Selectel (Nick Craig-Wood)
+- Security fixes
+ - serve sftp: Resolve CVE-2024-45337 - Misuse of
+ ServerConfig.PublicKeyCallback may cause authorization bypass
+ (dependabot)
+ - Rclone was not vulnerable to this.
+ - See https://github.com/advisories/GHSA-v778-237x-gjrc
+ - build: Update golang.org/x/net to v0.33.0 to fix
+ CVE-2024-45338 - Non-linear parsing of case-insensitive content
+ (Nick Craig-Wood)
+ - Rclone was not vulnerable to this.
+ - See https://github.com/advisories/GHSA-w32m-9786-jp63
+- New Features
+ - accounting: Write the current bwlimit to the log on SIGUSR2
+ (Nick Craig-Wood)
+ - bisync: Change exit code from 2 to 7 for critically aborted run
+ (albertony)
+ - build
+ - Update all dependencies (Nick Craig-Wood)
+ - Replace Windows-specific NewLazyDLL with NewLazySystemDLL
+ (albertony)
+ - cmd: Change exit code from 1 to 2 for syntax and usage errors
+ (albertony)
+ - docker serve: make sure all mount and VFS options are parsed
+ (Nick Craig-Wood)
+ - doc fixes (albertony, Alexandre Hamez, Anthony Metzidis,
+ buengese, Dan McArdle, David Seifert, Francesco Frassinelli,
+ Michael R. Davis, Nick Craig-Wood, Pawel Palucha, Randy Bush,
+ remygrandin, Sam Harrison, shenpengfeng, tgfisher, Thomas ten
+ Cate, ToM, Tony Metzidis, vintagefuture, Yxxx)
+ - fs: Make --links flag global and add new --local-links and
+ --vfs-links flags (Nick Craig-Wood)
+ - http servers: Disable automatic authentication skipping for unix
+ sockets in http servers (Moises Lima)
+    - This was making it impossible to use unix sockets with a
+      proxy
+    - This might now cause rclone to need authentication where it
+      didn't before
+ - oauthutil: add support for OAuth client credential flow (Martin
+ Hassack, Nick Craig-Wood)
+ - operations: make log messages consistent for mkdir/rmdir at INFO
+ level (Nick Craig-Wood)
+ - rc: Add relative to vfs/queue-set-expiry (Nick Craig-Wood)
+ - serve dlna: Sort the directory entries by directories first then
+ alphabetically by name (Nick Craig-Wood)
+ - serve nfs
+ - Introduce symlink support (Nick Craig-Wood)
+ - Implement --nfs-cache-type symlink (Nick Craig-Wood)
+ - size: Make output compatible with -P (Nick Craig-Wood)
+ - test makefiles: Add --flat flag for making directories with many
+ entries (Nick Craig-Wood)
+- Bug Fixes
+ - accounting
+        - Fix global error accounting (Benjamin Legrand)
+ - Fix debug printing when debug wasn't set (Nick Craig-Wood)
+ - Fix race stopping/starting the stats counter (Nick
+ Craig-Wood)
+ - rc/job: Use mutex for adding listeners thread safety
+ (hayden.pan)
+ - serve docker: Fix incorrect GID assignment (TAKEI Yuya)
+ - serve nfs: Fix missing inode numbers which was messing up
+ ls -laR (Nick Craig-Wood)
+ - serve s3: Fix Last-Modified timestamp (Nick Craig-Wood)
+ - serve sftp: Fix loading of authorized keys file with comment on
+ last line (albertony)
+- Mount
+ - Introduce symlink support (Filipe Azevedo, Nick Craig-Wood)
+ - Better snap mount error message (divinity76)
+ - mount2: Fix missing . and .. entries (Filipe Azevedo)
+- VFS
+ - With --vfs-used-is-size value is calculated and then thrown away
+ (Ilias Ozgur Can Leonard)
+ - Add symlink support to VFS (Filipe Azevedo, Nick Craig-Wood)
+ - This can be enabled with the specific --vfs-links flag or
+ the global --links flag
+ - Fix open files disappearing from directory listings (Nick
+ Craig-Wood)
+ - Add remote name to vfs cache log messages (Nick Craig-Wood)
+- Cache
+ - Fix parent not getting pinned when remote is a file (nielash)
+- Azure Blob
+ - Add --azureblob-disable-instance-discovery (Nick Craig-Wood)
+ - Add --azureblob-use-az to force the use of the Azure CLI for
+ auth (Nick Craig-Wood)
+ - Quit multipart uploads if the context is cancelled (Nick
+ Craig-Wood)
+- Azurefiles
+ - Fix missing x-ms-file-request-intent header (Nick Craig-Wood)
+- B2
+ - Add daysFromStartingToCancelingUnfinishedLargeFiles to
+ backend lifecycle command (Louis Laureys)
+- Box
+ - Fix server-side copying a file over existing dst (nielash)
+ - Fix panic when decoding corrupted PEM from JWT file (Nick
+ Craig-Wood)
+- Drive
+ - Add support for markdown format (Noam Ross)
+ - Implement rclone backend rescue to rescue orphaned files (Nick
+ Craig-Wood)
+- Dropbox
+ - Fix server side copying over existing object (Nick Craig-Wood)
+ - Fix return status when full to be fatal error (Nick Craig-Wood)
+- FTP
+ - Implement --ftp-no-check-upload to allow upload to write only
+ dirs (Nick Craig-Wood)
+ - Fix ls commands returning empty on "Microsoft FTP Service"
+ servers (Francesco Frassinelli)
+- Gofile
+ - Fix server side copying over existing object (Nick Craig-Wood)
+- Google Cloud Storage
+ - Add access token auth with --gcs-access-token (Leandro Piccilli)
+ - Update docs on service account access tokens (Anthony Metzidis)
+- Googlephotos
+ - Implement --gphotos-proxy to allow download of full resolution
+ media (Nick Craig-Wood)
+ - Fix nil pointer crash on upload (Nick Craig-Wood)
+- HTTP
+ - Fix incorrect URLs with initial slash (Oleg Kunitsyn)
+- Onedrive
+ - Add support for OAuth client credential flow (Martin Hassack,
+ Nick Craig-Wood)
+ - Fix time precision for OneDrive personal (Nick Craig-Wood)
+ - Fix server side copying over existing object (Nick Craig-Wood)
+- Opendrive
+ - Add rclone about support to backend (quiescens)
+- Oracle Object Storage
+ - Make specifying compartmentid optional (Manoj Ghosh)
+ - Quit multipart uploads if the context is cancelled (Nick
+ Craig-Wood)
+- Pikpak
+ - Add option to use original file links (wiserain)
+- Protondrive
+ - Improve performance of Proton Drive backend (Lawrence Murray)
+- Putio
+ - Fix server side copying over existing object (Nick Craig-Wood)
+- S3
+ - Add initial --s3-directory-bucket to support AWS Directory
+ Buckets (Nick Craig-Wood)
+ - Add Wasabi eu-south-1 region (Diego Monti)
+ - Fix download of compressed files from Cloudflare R2 (Nick
+ Craig-Wood)
+ - Rename glacier storage class to flexible retrieval (Henry Lee)
+ - Quit multipart uploads if the context is cancelled (Nick
+ Craig-Wood)
+- SFTP
+ - Allow inline ssh public certificate for sftp (Dimitar Ivanov)
+ - Fix nil check when using auth proxy (Nick Craig-Wood)
+- Smb
+ - Add initial support for Kerberos authentication (more work
+ needed). (Francesco Frassinelli)
+ - Fix panic if stat fails (Nick Craig-Wood)
+- Sugarsync
+ - Fix server side copying over existing object (Nick Craig-Wood)
+- WebDAV
+ - Nextcloud: implement backoff and retry for 423 LOCKED errors
+ (Nick Craig-Wood)
+ - Make --webdav-auth-redirect to fix 401 unauthorized on redirect
+ (Nick Craig-Wood)
+- Yandex
+ - Fix server side copying over existing object (Nick Craig-Wood)
+- Zoho
+ - Use download server to accelerate downloads (buengese)
+ - Switch to large file upload API for larger files, fix missing
+ URL encoding of filenames for the upload API (buengese)
+ - Print clear error message when missing oauth scope (buengese)
+ - Try to handle rate limits a bit better (buengese)
+ - Add support for private spaces (buengese)
+ - Make upload cutoff configurable (buengese)
+
+v1.68.2 - 2024-11-15
+
+See commits
+
+- Security fixes
+ - local backend: CVE-2024-52522: fix permission and ownership on
+ symlinks with --links and --metadata (Nick Craig-Wood)
+ - Only affects users using --metadata and --links and copying
+ files to the local backend
+ - See
+ https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+ - build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
+ (dependabot)
+ - This is an issue in a dependency which is used for JWT
+ certificates
+ - See
+ https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+- Bug Fixes
+ - accounting: Fix wrong message on SIGUSR2 to enable/disable
+ bwlimit (Nick Craig-Wood)
+ - bisync: Fix output capture restoring the wrong output for logrus
+ (Dimitrios Slamaris)
+ - dlna: Fix loggingResponseWriter disregarding log level (Simon
+ Bos)
+ - serve s3: Fix excess locking which was making serve s3 single
+ threaded (Nick Craig-Wood)
+ - doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy
+ Bush)
+- Local
+ - Fix permission and ownership on symlinks with --links and
+ --metadata (Nick Craig-Wood)
+ - Fix --copy-links on macOS when cloning (nielash)
+- Onedrive
+ - Fix Retry-After handling to look at 503 errors also (Nick
+ Craig-Wood)
+- Pikpak
+ - Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+ - Fix fatal crash on startup with token that can't be refreshed
+ (Nick Craig-Wood)
+- S3
+ - Fix crash when using --s3-download-url after migration to SDKv2
+ (Nick Craig-Wood)
+ - Storj provider: fix server-side copy of files bigger than 5GB
+ (Kaloyan Raev)
+ - Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+
+v1.68.1 - 2024-09-24
+
+See commits
+
+- Bug Fixes
+ - build: Fix docker release build (ttionya)
+ - doc fixes (Nick Craig-Wood, Pawel Palucha)
+ - fs
+ - Fix --dump filters not always appearing (Nick Craig-Wood)
+ - Fix setting stringArray config values from environment
+ variables (Nick Craig-Wood)
+ - rc: Fix default value of --metrics-addr (Nick Craig-Wood)
+ - serve docker: Add missing vfs-read-chunk-streams option in
+ docker volume driver (Divyam)
+- Onedrive
+ - Fix spurious "Couldn't decode error response: EOF" DEBUG (Nick
+ Craig-Wood)
+- Pikpak
+ - Fix login issue where token retrieval fails (wiserain)
+- S3
+ - Fix rclone ignoring static credentials when env_auth=true (Nick
+ Craig-Wood)
+
v1.68.0 - 2024-09-08
See commits
@@ -54392,6 +56531,7 @@ See commits
- Implement SetModTime (Georg Welzel)
- Implement OpenWriterAt feature to enable multipart uploads
(Georg Welzel)
+ - Fix failing large file uploads (Georg Welzel)
- Pikpak
- Improve data consistency by ensuring async tasks complete
(wiserain)
@@ -62640,6 +64780,43 @@ email addresses removed from here need to be added to bin/.ignore-emails to make
- fsantagostinobietti
6057026+fsantagostinobietti@users.noreply.github.com
- Oleg Kunitsyn 114359669+hiddenmarten@users.noreply.github.com
+- Divyam 47589864+divyam234@users.noreply.github.com
+- ttionya ttionya@users.noreply.github.com
+- quiescens quiescens@gmail.com
+- rishi.sridhar rishi.sridhar@zohocorp.com
+- Lawrence Murray lawrence@indii.org
+- Leandro Piccilli leandro.piccilli@thalesgroup.com
+- Benjamin Legrand benjamin.legrand@seagate.com
+- Noam Ross noam.ross@gmail.com
+- lostb1t coding-mosses0z@icloud.com
+- Matthias Gatto matthias.gatto@outscale.com
+- André Tran andre.tran@outscale.com
+- Simon Bos simon@simonbos.be
+- Alexandre Hamez 199517+ahamez@users.noreply.github.com
+- Randy Bush randy@psg.com
+- Diego Monti diegmonti@users.noreply.github.com
+- tgfisher tgfisher@stanford.edu
+- Moises Lima mozlima@gmail.com
+- Dimitar Ivanov mimiteto@gmail.com
+- shenpengfeng xinhangzhou@icloud.com
+- Dimitrios Slamaris dim0x69@users.noreply.github.com
+- vintagefuture 39503528+vintagefuture@users.noreply.github.com
+- David Seifert soap@gentoo.org
+- Michael R. Davis mrdvt92@users.noreply.github.com
+- remygrandin remy.gr@ndin.fr
+- Ilias Ozgur Can Leonard iscilyas@gmail.com
+- divinity76 divinity76@gmail.com
+- Martin Hassack martin@redmaple.tech
+- Filipe Azevedo pasnox@gmail.com
+- hayden.pan hayden.pan@outlook.com
+- Yxxx 45665172+marsjane@users.noreply.github.com
+- Thomas ten Cate ttencate@gmail.com
+- Louis Laureys louis@laureys.me
+- Henry Lee contact@nynxz.com
+- ToM thomas.faucher@bibliosansfrontieres.org
+- TAKEI Yuya 853320+takei-yuya@users.noreply.github.com
+- Francesco Frassinelli fraph24@gmail.com
+ francesco.frassinelli@nina.no
Contact the rclone project
diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md
index f95f1299a..8d197c831 100644
--- a/docs/content/azureblob.md
+++ b/docs/content/azureblob.md
@@ -540,6 +540,28 @@ Properties:
- Type: string
- Required: false
+#### --azureblob-disable-instance-discovery
+
+Skip requesting Microsoft Entra instance metadata
+
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+
+It determines whether rclone requests Microsoft Entra instance
+metadata from `https://login.microsoft.com/` before
+authenticating.
+
+Setting this to true will skip this request, making you responsible
+for ensuring the configured authority is valid and trustworthy.
+
+
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREBLOB_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
#### --azureblob-use-msi
Use a managed service identity to authenticate (only works in Azure).
@@ -612,6 +634,26 @@ Properties:
- Type: bool
- Default: false
+#### --azureblob-use-az
+
+Use Azure CLI tool az for authentication
+
+Set to use the [Azure CLI tool az](https://learn.microsoft.com/en-us/cli/azure/)
+as the sole means of authentication.
+
+Setting this can be useful if you wish to use the az CLI on a host with
+a System Managed Identity that you do not want to use.
+
+Don't set env_auth at the same time.
+
+
+Properties:
+
+- Config: use_az
+- Env Var: RCLONE_AZUREBLOB_USE_AZ
+- Type: bool
+- Default: false
+
#### --azureblob-endpoint
Endpoint for the service.
diff --git a/docs/content/b2.md b/docs/content/b2.md
index 6f73e0219..bb281a2f3 100644
--- a/docs/content/b2.md
+++ b/docs/content/b2.md
@@ -702,6 +702,7 @@ This will dump something like this showing the lifecycle rules.
{
"daysFromHidingToDeleting": 1,
"daysFromUploadingToHiding": null,
+ "daysFromStartingToCancelingUnfinishedLargeFiles": null,
"fileNamePrefix": ""
}
]
@@ -731,6 +732,7 @@ See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
Options:
- "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off.
+- "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any unfinished large file versions after this many days
- "daysFromUploadingToHiding": This many days after uploading a file is hidden
### cleanup
diff --git a/docs/content/box.md b/docs/content/box.md
index 63e42133c..ff197024e 100644
--- a/docs/content/box.md
+++ b/docs/content/box.md
@@ -384,6 +384,19 @@ Properties:
- Type: string
- Required: false
+#### --box-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth2 client credentials flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_BOX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
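+For example, a minimal config sketch using this flow (the IDs are
+illustrative, and depending on your Box app you may also need options
+such as `box_sub_type`):
+
+```
+[mybox]
+type = box
+client_id = YOUR_CLIENT_ID
+client_secret = YOUR_CLIENT_SECRET
+client_credentials = true
+```
+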
#### --box-root-folder-id
Fill in for rclone to use a non root folder as its starting point.
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index 5e31f318e..e3170bbf7 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -5,6 +5,139 @@ description: "Rclone Changelog"
# Changelog
+## v1.69.0 - 2025-01-12
+
+[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)
+
+* New backends
+ * [iCloud Drive](/iclouddrive/) (lostb1t)
+ * [Cloudinary](/cloudinary/) (yuval-cloudinary)
+ * New S3 providers:
+ * [Outscale](/s3/#outscale) (Matthias Gatto)
+ * [Selectel](/s3/#selectel) (Nick Craig-Wood)
+* Security fixes
+ * serve sftp: Resolve CVE-2024-45337 - Misuse of ServerConfig.PublicKeyCallback may cause authorization bypass (dependabot)
+ * Rclone was **not** vulnerable to this.
+ * See https://github.com/advisories/GHSA-v778-237x-gjrc
+ * build: Update golang.org/x/net to v0.33.0 to fix CVE-2024-45338 - Non-linear parsing of case-insensitive content (Nick Craig-Wood)
+ * Rclone was **not** vulnerable to this.
+ * See https://github.com/advisories/GHSA-w32m-9786-jp63
+* New Features
+ * accounting: Write the current bwlimit to the log on SIGUSR2 (Nick Craig-Wood)
+ * bisync: Change exit code from 2 to 7 for critically aborted run (albertony)
+ * build
+ * Update all dependencies (Nick Craig-Wood)
+ * Replace Windows-specific `NewLazyDLL` with `NewLazySystemDLL` (albertony)
+ * cmd: Change exit code from 1 to 2 for syntax and usage errors (albertony)
+ * serve docker: Make sure all mount and VFS options are parsed (Nick Craig-Wood)
+ * doc fixes (albertony, Alexandre Hamez, Anthony Metzidis, buengese, Dan McArdle, David Seifert, Francesco Frassinelli, Michael R. Davis, Nick Craig-Wood, Pawel Palucha, Randy Bush, remygrandin, Sam Harrison, shenpengfeng, tgfisher, Thomas ten Cate, ToM, Tony Metzidis, vintagefuture, Yxxx)
+ * fs: Make `--links` flag global and add new `--local-links` and `--vfs-links` flags (Nick Craig-Wood)
+ * http servers: Disable automatic authentication skipping for unix sockets (Moises Lima)
+ * This was making it impossible to use unix sockets with a proxy
+ * This might now cause rclone to need authentication where it didn't before
+ * oauthutil: Add support for OAuth client credentials flow (Martin Hassack, Nick Craig-Wood)
+ * operations: Make log messages consistent for mkdir/rmdir at INFO level (Nick Craig-Wood)
+ * rc: Add `relative` to [vfs/queue-set-expiry](/rc/#vfs-queue-set-expiry) (Nick Craig-Wood)
+ * serve dlna: Sort the directory entries by directories first, then alphabetically by name (Nick Craig-Wood)
+ * serve nfs
+ * Introduce symlink support (Nick Craig-Wood)
+ * Implement `--nfs-cache-type` symlink (Nick Craig-Wood)
+ * size: Make output compatible with `-P` (Nick Craig-Wood)
+ * test makefiles: Add `--flat` flag for making directories with many entries (Nick Craig-Wood)
+* Bug Fixes
+ * accounting
+ * Fix global error accounting (Benjamin Legrand)
+ * Fix debug printing when debug wasn't set (Nick Craig-Wood)
+ * Fix race stopping/starting the stats counter (Nick Craig-Wood)
+ * rc/job: Use a mutex to make adding listeners thread safe (hayden.pan)
+ * serve docker: Fix incorrect GID assignment (TAKEI Yuya)
+ * serve nfs: Fix missing inode numbers which were messing up `ls -laR` (Nick Craig-Wood)
+ * serve s3: Fix `Last-Modified` timestamp (Nick Craig-Wood)
+ * serve sftp: Fix loading of authorized keys file with comment on last line (albertony)
+* Mount
+ * Introduce symlink support (Filipe Azevedo, Nick Craig-Wood)
+ * Better snap mount error message (divinity76)
+ * mount2: Fix missing `.` and `..` entries (Filipe Azevedo)
+* VFS
+ * With `--vfs-used-is-size` the value is calculated and then thrown away (Ilias Ozgur Can Leonard)
+ * Add symlink support to VFS (Filipe Azevedo, Nick Craig-Wood)
+ * This can be enabled with the specific `--vfs-links` flag or the global `--links` flag
+ * Fix open files disappearing from directory listings (Nick Craig-Wood)
+ * Add remote name to vfs cache log messages (Nick Craig-Wood)
+* Cache
+ * Fix parent not getting pinned when remote is a file (nielash)
+* Azure Blob
+ * Add `--azureblob-disable-instance-discovery` (Nick Craig-Wood)
+ * Add `--azureblob-use-az` to force the use of the Azure CLI for auth (Nick Craig-Wood)
+ * Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+* Azurefiles
+ * Fix missing x-ms-file-request-intent header (Nick Craig-Wood)
+* B2
+ * Add `daysFromStartingToCancelingUnfinishedLargeFiles` to `backend lifecycle` command (Louis Laureys)
+* Box
+ * Fix server-side copying a file over existing dst (nielash)
+ * Fix panic when decoding corrupted PEM from JWT file (Nick Craig-Wood)
+* Drive
+ * Add support for markdown format (Noam Ross)
+ * Implement `rclone backend rescue` to rescue orphaned files (Nick Craig-Wood)
+* Dropbox
+ * Fix server side copying over existing object (Nick Craig-Wood)
+ * Fix return status when full to be fatal error (Nick Craig-Wood)
+* FTP
+ * Implement `--ftp-no-check-upload` to allow upload to write-only dirs (Nick Craig-Wood)
+ * Fix ls commands returning empty on "Microsoft FTP Service" servers (Francesco Frassinelli)
+* Gofile
+ * Fix server side copying over existing object (Nick Craig-Wood)
+* Google Cloud Storage
+ * Add access token auth with `--gcs-access-token` (Leandro Piccilli)
+ * Update docs on service account access tokens (Anthony Metzidis)
+* Googlephotos
+ * Implement `--gphotos-proxy` to allow download of full resolution media (Nick Craig-Wood)
+ * Fix nil pointer crash on upload (Nick Craig-Wood)
+* HTTP
+ * Fix incorrect URLs with initial slash (Oleg Kunitsyn)
+* Onedrive
+ * Add support for OAuth client credentials flow (Martin Hassack, Nick Craig-Wood)
+ * Fix time precision for OneDrive personal (Nick Craig-Wood)
+ * Fix server side copying over existing object (Nick Craig-Wood)
+* Opendrive
+ * Add `rclone about` support to backend (quiescens)
+* Oracle Object Storage
+ * Make specifying `compartmentid` optional (Manoj Ghosh)
+ * Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+* Pikpak
+ * Add option to use original file links (wiserain)
+* Protondrive
+ * Improve performance of Proton Drive backend (Lawrence Murray)
+* Putio
+ * Fix server side copying over existing object (Nick Craig-Wood)
+* S3
+ * Add initial `--s3-directory-bucket` to support AWS Directory Buckets (Nick Craig-Wood)
+ * Add Wasabi `eu-south-1` region (Diego Monti)
+ * Fix download of compressed files from Cloudflare R2 (Nick Craig-Wood)
+ * Rename glacier storage class to flexible retrieval (Henry Lee)
+ * Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+* SFTP
+ * Allow inline ssh public certificate for sftp (Dimitar Ivanov)
+ * Fix nil check when using auth proxy (Nick Craig-Wood)
+* Smb
+ * Add initial support for Kerberos authentication (more work needed) (Francesco Frassinelli)
+ * Fix panic if stat fails (Nick Craig-Wood)
+* Sugarsync
+ * Fix server side copying over existing object (Nick Craig-Wood)
+* WebDAV
+ * Nextcloud: Implement backoff and retry for 423 LOCKED errors (Nick Craig-Wood)
+ * Add `--webdav-auth-redirect` to fix 401 unauthorized on redirect (Nick Craig-Wood)
+* Yandex
+ * Fix server side copying over existing object (Nick Craig-Wood)
+* Zoho
+ * Use download server to accelerate downloads (buengese)
+ * Switch to large file upload API for larger files, fix missing URL encoding of filenames for the upload API (buengese)
+ * Print clear error message when missing oauth scope (buengese)
+ * Try to handle rate limits a bit better (buengese)
+ * Add support for private spaces (buengese)
+ * Make upload cutoff configurable (buengese)
+
## v1.68.2 - 2024-11-15
[See commits](https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
@@ -145,6 +278,7 @@ description: "Rclone Changelog"
* Pcloud
* Implement `SetModTime` (Georg Welzel)
* Implement `OpenWriterAt` feature to enable multipart uploads (Georg Welzel)
+ * Fix failing large file uploads (Georg Welzel)
* Pikpak
* Improve data consistency by ensuring async tasks complete (wiserain)
* Implement custom hash to replace wrong sha1 (wiserain)
diff --git a/docs/content/cloudinary.md b/docs/content/cloudinary.md
index 6b822d372..e4eeb0e3b 100644
--- a/docs/content/cloudinary.md
+++ b/docs/content/cloudinary.md
@@ -149,8 +149,6 @@ Properties:
Cloudinary API Secret
-**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
-
Properties:
- Config: api_secret
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 2a39a39f8..0921995de 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -41,6 +41,7 @@ rclone [flags]
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
+ --azureblob-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -58,6 +59,7 @@ rclone [flags]
--azureblob-tenant string ID of the service principal's tenant. Also called its directory ID
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
+ --azureblob-use-az Use Azure CLI tool az for authentication
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -109,6 +111,7 @@ rclone [flags]
--box-auth-url string Auth server URL
--box-box-config-file string Box App config.json location
--box-box-sub-type string (default "user")
+ --box-client-credentials Use client credentials OAuth flow
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
@@ -157,6 +160,14 @@ rclone [flags]
--chunker-remote string Remote to chunk/unchunk
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
+ --cloudinary-api-key string Cloudinary API Key
+ --cloudinary-api-secret string Cloudinary API Secret
+ --cloudinary-cloud-name string Cloudinary Environment Name
+ --cloudinary-description string Description of the remote
+ --cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
+ --cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--color AUTO|NEVER|ALWAYS When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO)
--combine-description string Description of the remote
--combine-upstreams SpaceSepList Upstreams for combining
@@ -198,6 +209,7 @@ rclone [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user
--drive-auth-url string Auth server URL
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
+ --drive-client-credentials Use client credentials OAuth flow
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
@@ -248,6 +260,7 @@ rclone [flags]
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
+ --dropbox-client-credentials Use client credentials OAuth flow
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
@@ -312,6 +325,7 @@ rclone [flags]
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
+ --ftp-no-check-upload Don't check the upload is OK
--ftp-pass string FTP password (obscured)
--ftp-port int FTP port number (default 21)
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
@@ -320,10 +334,12 @@ rclone [flags]
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username (default "$USER")
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
+ --gcs-access-token string Short-lived access token
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
+ --gcs-client-credentials Use client credentials OAuth flow
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
@@ -352,11 +368,13 @@ rclone [flags]
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ --gphotos-client-credentials Use client credentials OAuth flow
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-description string Description of the remote
--gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
+ --gphotos-proxy string Use the gphotosdl proxy for downloading the full resolution images
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
@@ -379,6 +397,7 @@ rclone [flags]
-h, --help help for rclone
--hidrive-auth-url string Auth server URL
--hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi)
+ --hidrive-client-credentials Use client credentials OAuth flow
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-description string Description of the remote
@@ -399,6 +418,11 @@ rclone [flags]
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
--human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi
+ --iclouddrive-apple-id string Apple ID
+ --iclouddrive-client-id string Client id (default "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d")
+ --iclouddrive-description string Description of the remote
+ --iclouddrive-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --iclouddrive-password string Password (obscured)
--ignore-case Ignore case in filters (case insensitive)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
@@ -428,6 +452,7 @@ rclone [flags]
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
+ --jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
--jottacloud-description string Description of the remote
@@ -455,6 +480,7 @@ rclone [flags]
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-description string Description of the remote
--local-encoding Encoding The encoding for the backend (default Slash,Dot)
+ --local-links Translate symlinks to/from regular files with a '.rclonelink' extension for the local backend
--local-no-check-updated Don't check to see if the files change during upload
--local-no-clone Disable reflink cloning for server-side copies
--local-no-preallocate Disable preallocation of disk space for transferred files
@@ -471,6 +497,7 @@ rclone [flags]
--low-level-retries int Number of low level retries to do (default 10)
--mailru-auth-url string Auth server URL
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
+ --mailru-client-credentials Use client credentials OAuth flow
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
--mailru-description string Description of the remote
@@ -510,7 +537,7 @@ rclone [flags]
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--metadata-mapper SpaceSepList Program to run to transforming metadata before upload
--metadata-set stringArray Add metadata key=value when uploading
- --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
+ --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -551,6 +578,7 @@ rclone [flags]
--onedrive-auth-url string Auth server URL
--onedrive-av-override Allows download of files the server thinks has a virus
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
+ --onedrive-client-credentials Use client credentials OAuth flow
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-delta If set rclone will use delta listing to implement recursive listings
@@ -570,11 +598,12 @@ rclone [flags]
--onedrive-region string Choose national cloud region for OneDrive (default "global")
--onedrive-root-folder-id string ID of the root folder
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead
+ --onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
- --oos-compartment string Object storage compartment OCID
+ --oos-compartment string Specify compartment OCID, if you need to list buckets
--oos-config-file string Path to OCI config file (default "~/.oci/config")
--oos-config-profile string Profile name inside the oci config file (default "Default")
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
@@ -606,6 +635,7 @@ rclone [flags]
--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--password-command SpaceSepList Command for supplying password for encrypted configuration
--pcloud-auth-url string Auth server URL
+ --pcloud-client-credentials Use client credentials OAuth flow
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-description string Description of the remote
@@ -616,26 +646,25 @@ rclone [flags]
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
- --pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
- --pikpak-client-id string OAuth Client Id
- --pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
+ --pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
+ --pikpak-no-media-link Use original file links instead of media links
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
- --pikpak-token string OAuth Access Token as a JSON blob
- --pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
+ --pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
--pixeldrain-description string Description of the remote
--pixeldrain-root-folder-id string Root of the filesystem to use (default "me")
--premiumizeme-auth-url string Auth server URL
+ --premiumizeme-client-credentials Use client credentials OAuth flow
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
--premiumizeme-description string Description of the remote
@@ -655,6 +684,7 @@ rclone [flags]
--protondrive-replace-existing-draft Create a new revision when filename conflict is detected
--protondrive-username string The username of your proton account
--putio-auth-url string Auth server URL
+ --putio-client-credentials Use client credentials OAuth flow
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
--putio-description string Description of the remote
@@ -683,7 +713,7 @@ rclone [flags]
--quatrix-skip-project-folders Skip project folders in operations
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -721,6 +751,7 @@ rclone [flags]
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
--s3-description string Description of the remote
+ --s3-directory-bucket Set to use AWS Directory Buckets
--s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
@@ -803,6 +834,7 @@ rclone [flags]
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH shell commands
--sftp-port int SSH port number (default 22)
+ --sftp-pubkey string SSH public certificate for public certificate based authentication
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
--sftp-set-env SpaceSepList Environment variables to pass to sftp and commands
@@ -818,6 +850,7 @@ rclone [flags]
--sftp-user string SSH username (default "$USER")
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
+ --sharefile-client-credentials Use client credentials OAuth flow
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
--sharefile-description string Description of the remote
@@ -932,9 +965,10 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
+ --webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-description string Description of the remote
@@ -950,6 +984,7 @@ rclone [flags]
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-auth-url string Auth server URL
+ --yandex-client-credentials Use client credentials OAuth flow
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-description string Description of the remote
@@ -959,6 +994,7 @@ rclone [flags]
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
+ --zoho-client-credentials Use client credentials OAuth flow
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-description string Description of the remote
@@ -966,6 +1002,7 @@ rclone [flags]
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
+ --zoho-upload-cutoff SizeSuffix Cutoff for switching to large file upload api (>= 10 MiB) (default 10Mi)
```
## See Also
diff --git a/docs/content/commands/rclone_bisync.md b/docs/content/commands/rclone_bisync.md
index 55fdfdea6..4f7c80d31 100644
--- a/docs/content/commands/rclone_bisync.md
+++ b/docs/content/commands/rclone_bisync.md
@@ -83,6 +83,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md
index a243c2e16..e4da96773 100644
--- a/docs/content/commands/rclone_copy.md
+++ b/docs/content/commands/rclone_copy.md
@@ -106,6 +106,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md
index e2d4b05bf..4a2152c86 100644
--- a/docs/content/commands/rclone_copyto.md
+++ b/docs/content/commands/rclone_copyto.md
@@ -69,6 +69,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index 67bd27f55..7ce79eb5c 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -54,7 +54,9 @@ When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
- # OS X
+ #... or on some systems
+ fusermount3 -u /path/to/local/mount
+ # OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@@ -398,9 +400,9 @@ Note that systemd runs mount units without any environment variables including
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
and you should provide `--config` and `--cache-dir` explicitly as absolute
paths via rclone arguments.
-Since mounting requires the `fusermount` program, rclone will use the fallback
-PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
-is present on this PATH.
+Since mounting requires the `fusermount` or `fusermount3` program,
+rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
+Please ensure that `fusermount`/`fusermount3` is present on this PATH.
## Rclone as Unix mount helper
@@ -777,6 +779,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
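+As a sketch (the paths are illustrative), a symlink round-trips like
+this when copied with `--links`:
+
+```
+$ ln -s /data/report.txt link-to-file.txt
+$ rclone copy --links . remote:backup
+$ rclone lsf remote:backup
+link-to-file.txt.rclonelink
+$ rclone cat remote:backup/link-to-file.txt.rclonelink
+/data/report.txt
+```
+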
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+[issue #8245](https://github.com/rclone/rclone/issues/8245) with duplicate
+files being created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -881,6 +927,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for mount
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -903,6 +950,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
index 7c7796e06..1d9930121 100644
--- a/docs/content/commands/rclone_move.md
+++ b/docs/content/commands/rclone_move.md
@@ -81,6 +81,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md
index 132efe3d2..0de165dc2 100644
--- a/docs/content/commands/rclone_moveto.md
+++ b/docs/content/commands/rclone_moveto.md
@@ -72,6 +72,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
diff --git a/docs/content/commands/rclone_nfsmount.md b/docs/content/commands/rclone_nfsmount.md
index 5230b00d9..079eabbef 100644
--- a/docs/content/commands/rclone_nfsmount.md
+++ b/docs/content/commands/rclone_nfsmount.md
@@ -55,7 +55,9 @@ When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
- # OS X
+ #... or on some systems
+ fusermount3 -u /path/to/local/mount
+ # OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@@ -399,9 +401,9 @@ Note that systemd runs mount units without any environment variables including
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
and you should provide `--config` and `--cache-dir` explicitly as absolute
paths via rclone arguments.
-Since mounting requires the `fusermount` program, rclone will use the fallback
-PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
-is present on this PATH.
+Since mounting requires the `fusermount` or `fusermount3` program,
+rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
+Please ensure that `fusermount`/`fusermount3` is present on this PATH.
## Rclone as Unix mount helper
@@ -778,6 +780,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+[issue #8245](https://github.com/rclone/rclone/issues/8245) with duplicate
+files being created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -883,6 +929,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for nfsmount
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -909,6 +956,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
diff --git a/docs/content/commands/rclone_rcd.md b/docs/content/commands/rclone_rcd.md
index 3750bf477..912a0f677 100644
--- a/docs/content/commands/rclone_rcd.md
+++ b/docs/content/commands/rclone_rcd.md
@@ -31,8 +31,7 @@ If you set `--rc-addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to `unix:///path/to/socket`
-or just by using an absolute path name. Note that unix sockets bypass the
-authentication - this is expected to be done with file system permissions.
+or just by using an absolute path name.
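+
+For example (the socket path is illustrative):
+
+    rclone rcd --rc-addr unix:///run/rclone/rclone.sock
+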
`--rc-addr` may be repeated to listen on multiple IPs/ports/sockets.
Socket activation, described further below, can also be used to accomplish the same.
@@ -59,19 +58,21 @@ https. You will need to supply the `--rc-cert` and `--rc-key` flags.
If you wish to do client side certificate validation then you will need to
supply `--rc-client-ca` also.
-`--rc-cert` should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. `--krc-ey` should be the PEM encoded
-private key and `--rc-client-ca` should be the PEM encoded client
-certificate authority certificate.
+`--rc-cert` must be set to the path of a file containing
+either a PEM encoded certificate, or a concatenation of that with the CA
+certificate. `--rc-key` must be set to the path of a file
+with the PEM encoded private key. If setting `--rc-client-ca`,
+it should be set to the path of a file with PEM encoded client certificate
+authority certificates.
`--rc-min-tls-version` is minimum TLS version that is acceptable. Valid
- values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
- "tls1.0").
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
## Socket activation
Instead of the listening addresses specified above, rclone will listen to all
-FDs passed by the service manager, if any (and ignore any arguments passed by --rc-addr`).
+FDs passed by the service manager, if any (and ignore any arguments passed
+by `--rc-addr`).
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
@@ -166,7 +167,7 @@ Flags to control the Remote Control API
```
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
diff --git a/docs/content/commands/rclone_serve_dlna.md b/docs/content/commands/rclone_serve_dlna.md
index e6ed3d6d2..9ac076f2d 100644
--- a/docs/content/commands/rclone_serve_dlna.md
+++ b/docs/content/commands/rclone_serve_dlna.md
@@ -342,6 +342,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+[issue #8245](https://github.com/rclone/rclone/issues/8245) with duplicate
+files being created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -436,6 +480,7 @@ rclone serve dlna remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
--interface stringArray The interface to use for SSDP (repeat as necessary)
+ --link-perms FileMode Link permissions (default 666)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
@@ -454,6 +499,7 @@ rclone serve dlna remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
diff --git a/docs/content/commands/rclone_serve_docker.md b/docs/content/commands/rclone_serve_docker.md
index 1a2e86e71..6c2f84f29 100644
--- a/docs/content/commands/rclone_serve_docker.md
+++ b/docs/content/commands/rclone_serve_docker.md
@@ -354,6 +354,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+[issue #8245](https://github.com/rclone/rclone/issues/8245) with duplicate
+files being created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -460,6 +504,7 @@ rclone serve docker [flags]
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for docker
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -485,6 +530,7 @@ rclone serve docker [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
diff --git a/docs/content/commands/rclone_serve_ftp.md b/docs/content/commands/rclone_serve_ftp.md
index 9f22e856c..e96f68116 100644
--- a/docs/content/commands/rclone_serve_ftp.md
+++ b/docs/content/commands/rclone_serve_ftp.md
@@ -335,6 +335,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+[issue #8245](https://github.com/rclone/rclone/issues/8245) with duplicate
+files being created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -511,6 +555,7 @@ rclone serve ftp remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for ftp
--key string TLS PEM Private key
+ --link-perms FileMode Link permissions (default 666)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -531,6 +576,7 @@ rclone serve ftp remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md
index 9206d6f8a..f87db8730 100644
--- a/docs/content/commands/rclone_serve_http.md
+++ b/docs/content/commands/rclone_serve_http.md
@@ -33,8 +33,7 @@ If you set `--addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to `unix:///path/to/socket`
-or just by using an absolute path name. Note that unix sockets bypass the
-authentication - this is expected to be done with file system permissions.
+or just by using an absolute path name.
`--addr` may be repeated to listen on multiple IPs/ports/sockets.
Socket activation, described further below, can also be used to accomplish the same.
@@ -61,19 +60,21 @@ https. You will need to supply the `--cert` and `--key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
-`--cert` should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. `--key` should be the PEM encoded
-private key and `--client-ca` should be the PEM encoded client
-certificate authority certificate.
+`--cert` must be set to the path of a file containing
+either a PEM encoded certificate, or a concatenation of that with the CA
+certificate. `--key` must be set to the path of a file
+with the PEM encoded private key. If setting `--client-ca`,
+it should be set to the path of a file with PEM encoded client certificate
+authority certificates.
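+
+As a sketch, for testing you could generate a self-signed certificate
+and point the flags at the resulting files (the paths and subject here
+are illustrative):
+
+    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
+        -subj "/CN=localhost" -keyout key.pem -out cert.pem
+    rclone serve http remote:path --cert cert.pem --key key.pem
+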
`--min-tls-version` is minimum TLS version that is acceptable. Valid
- values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
- "tls1.0").
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
## Socket activation
Instead of the listening addresses specified above, rclone will listen to all
-FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+FDs passed by the service manager, if any (and ignore any arguments passed
+by `--addr`).
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
@@ -452,6 +453,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
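+
+As a hedged illustration of the scheme (the remote name and mount point
+here are hypothetical):
+
+    # serve the remote with symlink translation enabled in the VFS
+    # (this runs in the foreground; use a second terminal for the rest)
+    rclone mount --vfs-links remote: /mnt/remote
+
+    # create a symlink inside the mount
+    ln -s file.txt /mnt/remote/link-to-file.txt
+
+    # listing the remote directly shows the translated file
+    rclone lsf remote:
+    # file.txt
+    # link-to-file.txt.rclonelink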
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -623,15 +668,16 @@ rclone serve http remote:path [flags]
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -657,6 +703,7 @@ rclone serve http remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
diff --git a/docs/content/commands/rclone_serve_nfs.md b/docs/content/commands/rclone_serve_nfs.md
index f783876b4..72081cf56 100644
--- a/docs/content/commands/rclone_serve_nfs.md
+++ b/docs/content/commands/rclone_serve_nfs.md
@@ -53,7 +53,9 @@ that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance. This sort of cache can't be backed
up and restored as the underlying handles will change. This is Linux
-only.
+only. It requires running rclone as root or with `CAP_DAC_READ_SEARCH`.
+You can grant rclone this extra permission by running this command on
+the rclone binary: `sudo setcap cap_dac_read_search+ep /path/to/rclone`.
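+
+You can verify the capability has been applied with `getcap` (the exact
+output format varies between libcap versions):
+
+    getcap /path/to/rclone
+    # /path/to/rclone cap_dac_read_search=ep
+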
`--nfs-cache-handle-limit` controls the maximum number of cached NFS
handles stored by the caching handler. This should not be set too low
@@ -382,6 +384,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -474,6 +520,7 @@ rclone serve nfs remote:path [flags]
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for nfs
+ --link-perms FileMode Link permissions (default 666)
--nfs-cache-dir string The directory the NFS handle cache will use if set
--nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000)
--nfs-cache-type memory|disk|symlink Type of NFS handle cache to use (default memory)
@@ -493,6 +540,7 @@ rclone serve nfs remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
diff --git a/docs/content/commands/rclone_serve_restic.md b/docs/content/commands/rclone_serve_restic.md
index 817b406dd..dca327576 100644
--- a/docs/content/commands/rclone_serve_restic.md
+++ b/docs/content/commands/rclone_serve_restic.md
@@ -103,8 +103,7 @@ If you set `--addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to `unix:///path/to/socket`
-or just by using an absolute path name. Note that unix sockets bypass the
-authentication - this is expected to be done with file system permissions.
+or just by using an absolute path name.
`--addr` may be repeated to listen on multiple IPs/ports/sockets.
Socket activation, described further below, can also be used to accomplish the same.
@@ -131,19 +130,21 @@ https. You will need to supply the `--cert` and `--key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
-`--cert` should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. `--key` should be the PEM encoded
-private key and `--client-ca` should be the PEM encoded client
-certificate authority certificate.
+`--cert` must be set to the path of a file containing
+either a PEM encoded certificate, or a concatenation of that with the CA
+certificate. `--key` must be set to the path of a file
+with the PEM encoded private key. If setting `--client-ca`,
+it should be set to the path of a file with PEM encoded client certificate
+authority certificates.
`--min-tls-version` is minimum TLS version that is acceptable. Valid
- values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
- "tls1.0").
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
## Socket activation
Instead of the listening addresses specified above, rclone will listen to all
-FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+FDs passed by the service manager, if any (and ignore any arguments passed
+by `--addr`).
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
@@ -195,11 +196,11 @@ rclone serve restic remote:path [flags]
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
diff --git a/docs/content/commands/rclone_serve_s3.md b/docs/content/commands/rclone_serve_s3.md
index 888affd37..40813321b 100644
--- a/docs/content/commands/rclone_serve_s3.md
+++ b/docs/content/commands/rclone_serve_s3.md
@@ -185,8 +185,7 @@ If you set `--addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to `unix:///path/to/socket`
-or just by using an absolute path name. Note that unix sockets bypass the
-authentication - this is expected to be done with file system permissions.
+or just by using an absolute path name.
`--addr` may be repeated to listen on multiple IPs/ports/sockets.
Socket activation, described further below, can also be used to accomplish the same.
@@ -213,19 +212,21 @@ https. You will need to supply the `--cert` and `--key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
-`--cert` should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. `--key` should be the PEM encoded
-private key and `--client-ca` should be the PEM encoded client
-certificate authority certificate.
+`--cert` must be set to the path of a file containing
+either a PEM encoded certificate, or a concatenation of that with the CA
+certificate. `--key` must be set to the path of a file
+with the PEM encoded private key. If setting `--client-ca`,
+it should be set to the path of a file with PEM encoded client certificate
+authority certificates.
`--min-tls-version` is minimum TLS version that is acceptable. Valid
- values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
- "tls1.0").
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
## Socket activation
Instead of the listening addresses specified above, rclone will listen to all
-FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+FDs passed by the service manager, if any (and ignore any arguments passed
+by `--addr`).
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
@@ -541,6 +542,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -632,8 +677,8 @@ rclone serve s3 remote:path [flags]
--auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
@@ -642,7 +687,8 @@ rclone serve s3 remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for s3
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -668,6 +714,7 @@ rclone serve s3 remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
diff --git a/docs/content/commands/rclone_serve_sftp.md b/docs/content/commands/rclone_serve_sftp.md
index a662547c4..477e35dfe 100644
--- a/docs/content/commands/rclone_serve_sftp.md
+++ b/docs/content/commands/rclone_serve_sftp.md
@@ -378,6 +378,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -554,6 +598,7 @@ rclone serve sftp remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for sftp
--key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
+ --link-perms FileMode Link permissions (default 666)
--no-auth Allow connections with no authentication if set
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
@@ -574,6 +619,7 @@ rclone serve sftp remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
index acd1d69ce..397b9a5f3 100644
--- a/docs/content/commands/rclone_serve_webdav.md
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -76,8 +76,7 @@ If you set `--addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to `unix:///path/to/socket`
-or just by using an absolute path name. Note that unix sockets bypass the
-authentication - this is expected to be done with file system permissions.
+or just by using an absolute path name.
`--addr` may be repeated to listen on multiple IPs/ports/sockets.
Socket activation, described further below, can also be used to accomplish the same.
@@ -104,19 +103,21 @@ https. You will need to supply the `--cert` and `--key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
-`--cert` should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. `--key` should be the PEM encoded
-private key and `--client-ca` should be the PEM encoded client
-certificate authority certificate.
+`--cert` must be set to the path of a file containing
+either a PEM encoded certificate, or a concatenation of that with the CA
+certificate. `--key` must be set to the path of a file
+with the PEM encoded private key. If setting `--client-ca`,
+it should be set to the path of a file with PEM encoded client certificate
+authority certificates.
`--min-tls-version` is minimum TLS version that is acceptable. Valid
- values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
- "tls1.0").
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
## Socket activation
Instead of the listening addresses specified above, rclone will listen to all
-FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).
+FDs passed by the service manager, if any (and ignore any arguments passed
+by `--addr`).
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
@@ -495,6 +496,50 @@ modified files from the cache (the related global flag `--checkers` has no effec
--transfers int Number of file transfers to run in parallel (default 4)
+## Symlinks
+
+By default the VFS does not support symlinks. However, this may be
+enabled with either of the following flags:
+
+ --links Translate symlinks to/from regular files with a '.rclonelink' extension.
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension. So a
+file which appears as a symlink `link-to-file.txt` would be stored on
+cloud storage as `link-to-file.txt.rclonelink` and the contents would
+be the path to the symlink destination.
+
+Note that `--links` enables symlink translation globally in rclone -
+this includes any backend which supports the concept (for example the
+local backend). `--vfs-links` just enables it for the VFS layer.
+
+This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points).
+
+The `--vfs-links` flag has been designed for `rclone mount`, `rclone
+nfsmount` and `rclone serve nfs`.
+
+It hasn't been tested with the other `rclone serve` commands yet.
+
+A limitation of the current implementation is that it expects the
+caller to resolve sub-symlinks. For example, given this directory tree:
+
+```
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+```
+
+The VFS will correctly resolve `linked-dir` but not
+`linked-dir/file.txt`. This is not a problem for the tested commands
+but may be for other commands.
+
+**Note** that there is an outstanding issue with symlink support
+([issue #8245](https://github.com/rclone/rclone/issues/8245)): duplicate
+files can be created when symlinks are moved into directories where
+there is a file of the same name (or vice versa).
+
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@@ -666,8 +711,8 @@ rclone serve webdav remote:path [flags]
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
@@ -676,7 +721,8 @@ rclone serve webdav remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
@@ -702,6 +748,7 @@ rclone serve webdav remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md
index 82aa00056..e06b1ccd0 100644
--- a/docs/content/commands/rclone_sync.md
+++ b/docs/content/commands/rclone_sync.md
@@ -137,6 +137,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
diff --git a/docs/content/commands/rclone_test_makefiles.md b/docs/content/commands/rclone_test_makefiles.md
index 237ba1c9d..79fdfab83 100644
--- a/docs/content/commands/rclone_test_makefiles.md
+++ b/docs/content/commands/rclone_test_makefiles.md
@@ -19,6 +19,7 @@ rclone test makefiles [flags]
--chargen Fill files with a ASCII chargen pattern
--files int Number of files to create (default 1000)
--files-per-directory int Average number of files per directory (default 10)
+ --flat If set create all files in the root directory
-h, --help help for makefiles
--max-depth int Maximum depth of directory hierarchy (default 10)
--max-file-size SizeSuffix Maximum size of files to create (default 100)
diff --git a/docs/content/drive.md b/docs/content/drive.md
index aa8734f82..775163a8f 100644
--- a/docs/content/drive.md
+++ b/docs/content/drive.md
@@ -693,6 +693,19 @@ Properties:
- Type: string
- Required: false
+#### --drive-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_DRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --drive-root-folder-id
ID of the root folder.
@@ -1685,6 +1698,41 @@ The result is a JSON array of matches, for example:
}
]
+### rescue
+
+Rescue or delete any orphaned files
+
+ rclone backend rescue remote: [options] [+]
+
+This command rescues or deletes any orphaned files or directories.
+
+Sometimes files can get orphaned in Google Drive. This means that they
+are no longer in any folder in Google Drive.
+
+This command finds those files and either rescues them to a directory
+you specify or deletes them.
+
+Usage:
+
+This can be used in 3 ways.
+
+First, list all orphaned files
+
+ rclone backend rescue drive:
+
+Second, rescue all orphaned files to the directory indicated
+
+ rclone backend rescue drive: "relative/path/to/rescue/directory"
+
+e.g. to rescue all orphans to a directory called "Orphans" in the top level
+
+ rclone backend rescue drive: Orphans
+
+Third, move all orphaned files to the trash
+
+ rclone backend rescue drive: -o delete
+
+
{{< rem autogenerated options stop >}}
## Limitations
diff --git a/docs/content/dropbox.md b/docs/content/dropbox.md
index 17cda68b4..219e16f43 100644
--- a/docs/content/dropbox.md
+++ b/docs/content/dropbox.md
@@ -263,6 +263,19 @@ Properties:
- Type: string
- Required: false
+#### --dropbox-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_DROPBOX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --dropbox-chunk-size
Upload chunk size (< 150Mi).
diff --git a/docs/content/flags.md b/docs/content/flags.md
index c2908561a..aefd4dc64 100644
--- a/docs/content/flags.md
+++ b/docs/content/flags.md
@@ -27,6 +27,7 @@ Flags for anything which can copy a file.
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -115,7 +116,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
```
@@ -264,7 +265,7 @@ Flags to control the Remote Control API.
```
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -300,7 +301,7 @@ Flags to control the Remote Control API.
Flags to control the Metrics HTTP endpoint..
```
- --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
+ --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -340,6 +341,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
+ --azureblob-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -357,6 +359,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-tenant string ID of the service principal's tenant. Also called its directory ID
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
+ --azureblob-use-az Use Azure CLI tool az for authentication
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -406,6 +409,7 @@ Backend-only flags (these can be set in the config file also).
--box-auth-url string Auth server URL
--box-box-config-file string Box App config.json location
--box-box-sub-type string (default "user")
+ --box-client-credentials Use client credentials OAuth flow
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
@@ -444,6 +448,14 @@ Backend-only flags (these can be set in the config file also).
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
+ --cloudinary-api-key string Cloudinary API Key
+ --cloudinary-api-secret string Cloudinary API Secret
+ --cloudinary-cloud-name string Cloudinary Environment Name
+ --cloudinary-description string Description of the remote
+ --cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
+ --cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--combine-description string Description of the remote
--combine-upstreams SpaceSepList Upstreams for combining
--compress-description string Description of the remote
@@ -470,6 +482,7 @@ Backend-only flags (these can be set in the config file also).
--drive-auth-owner-only Only consider files owned by the authenticated user
--drive-auth-url string Auth server URL
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
+ --drive-client-credentials Use client credentials OAuth flow
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
@@ -520,6 +533,7 @@ Backend-only flags (these can be set in the config file also).
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
+ --dropbox-client-credentials Use client credentials OAuth flow
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
@@ -566,6 +580,7 @@ Backend-only flags (these can be set in the config file also).
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
+ --ftp-no-check-upload Don't check the upload is OK
--ftp-pass string FTP password (obscured)
--ftp-port int FTP port number (default 21)
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
@@ -574,10 +589,12 @@ Backend-only flags (these can be set in the config file also).
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username (default "$USER")
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
+ --gcs-access-token string Short-lived access token
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
+ --gcs-client-credentials Use client credentials OAuth flow
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
@@ -606,11 +623,13 @@ Backend-only flags (these can be set in the config file also).
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ --gphotos-client-credentials Use client credentials OAuth flow
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-description string Description of the remote
--gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
+ --gphotos-proxy string Use the gphotosdl proxy for downloading the full resolution images
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
@@ -629,6 +648,7 @@ Backend-only flags (these can be set in the config file also).
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
--hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi)
+ --hidrive-client-credentials Use client credentials OAuth flow
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-description string Description of the remote
@@ -648,6 +668,11 @@ Backend-only flags (these can be set in the config file also).
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
+ --iclouddrive-apple-id string Apple ID
+ --iclouddrive-client-id string Client id (default "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d")
+ --iclouddrive-description string Description of the remote
+ --iclouddrive-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --iclouddrive-password string Password (obscured)
--imagekit-description string Description of the remote
--imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
--imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
@@ -665,6 +690,7 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
+ --jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
--jottacloud-description string Description of the remote
@@ -686,11 +712,11 @@ Backend-only flags (these can be set in the config file also).
--koofr-user string Your user name
--linkbox-description string Description of the remote
--linkbox-token string Token from https://www.linkbox.to/admin/account
- -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-description string Description of the remote
--local-encoding Encoding The encoding for the backend (default Slash,Dot)
+ --local-links Translate symlinks to/from regular files with a '.rclonelink' extension for the local backend
--local-no-check-updated Don't check to see if the files change during upload
--local-no-clone Disable reflink cloning for server-side copies
--local-no-preallocate Disable preallocation of disk space for transferred files
@@ -702,6 +728,7 @@ Backend-only flags (these can be set in the config file also).
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-auth-url string Auth server URL
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
+ --mailru-client-credentials Use client credentials OAuth flow
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
--mailru-description string Description of the remote
@@ -732,6 +759,7 @@ Backend-only flags (these can be set in the config file also).
--onedrive-auth-url string Auth server URL
--onedrive-av-override Allows download of files the server thinks has a virus
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
+ --onedrive-client-credentials Use client credentials OAuth flow
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-delta If set rclone will use delta listing to implement recursive listings
@@ -751,11 +779,12 @@ Backend-only flags (these can be set in the config file also).
--onedrive-region string Choose national cloud region for OneDrive (default "global")
--onedrive-root-folder-id string ID of the root folder
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead
+ --onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
- --oos-compartment string Object storage compartment OCID
+ --oos-compartment string Specify compartment OCID, if you need to list buckets
--oos-config-file string Path to OCI config file (default "~/.oci/config")
--oos-config-profile string Profile name inside the oci config file (default "Default")
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
@@ -784,6 +813,7 @@ Backend-only flags (these can be set in the config file also).
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
+ --pcloud-client-credentials Use client credentials OAuth flow
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-description string Description of the remote
@@ -794,26 +824,25 @@ Backend-only flags (these can be set in the config file also).
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
- --pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
- --pikpak-client-id string OAuth Client Id
- --pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
+ --pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
+ --pikpak-no-media-link Use original file links instead of media links
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
- --pikpak-token string OAuth Access Token as a JSON blob
- --pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
+ --pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
--pixeldrain-description string Description of the remote
--pixeldrain-root-folder-id string Root of the filesystem to use (default "me")
--premiumizeme-auth-url string Auth server URL
+ --premiumizeme-client-credentials Use client credentials OAuth flow
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
--premiumizeme-description string Description of the remote
@@ -831,6 +860,7 @@ Backend-only flags (these can be set in the config file also).
--protondrive-replace-existing-draft Create a new revision when filename conflict is detected
--protondrive-username string The username of your proton account
--putio-auth-url string Auth server URL
+ --putio-client-credentials Use client credentials OAuth flow
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
--putio-description string Description of the remote
@@ -864,6 +894,7 @@ Backend-only flags (these can be set in the config file also).
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
--s3-description string Description of the remote
+ --s3-directory-bucket Set to use AWS Directory Buckets
--s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
@@ -945,6 +976,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH shell commands
--sftp-port int SSH port number (default 22)
+ --sftp-pubkey string SSH public certificate for public certificate based authentication
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
--sftp-set-env SpaceSepList Environment variables to pass to sftp and commands
@@ -960,6 +992,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-user string SSH username (default "$USER")
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
+ --sharefile-client-credentials Use client credentials OAuth flow
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
--sharefile-description string Description of the remote
@@ -1049,6 +1082,7 @@ Backend-only flags (these can be set in the config file also).
--uptobox-description string Description of the remote
--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
+ --webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-description string Description of the remote
@@ -1064,6 +1098,7 @@ Backend-only flags (these can be set in the config file also).
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-auth-url string Auth server URL
+ --yandex-client-credentials Use client credentials OAuth flow
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-description string Description of the remote
@@ -1073,6 +1108,7 @@ Backend-only flags (these can be set in the config file also).
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
+ --zoho-client-credentials Use client credentials OAuth flow
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-description string Description of the remote
@@ -1080,6 +1116,7 @@ Backend-only flags (these can be set in the config file also).
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
+ --zoho-upload-cutoff SizeSuffix Cutoff for switching to large file upload api (>= 10 MiB) (default 10Mi)
```
diff --git a/docs/content/ftp.md b/docs/content/ftp.md
index 68b3d54c2..443877441 100644
--- a/docs/content/ftp.md
+++ b/docs/content/ftp.md
@@ -419,12 +419,12 @@ Properties:
Socks 5 proxy host.
- Supports the format user:pass@host:port, user@host:port, host:port.
+Supports the format user:pass@host:port, user@host:port, host:port.
- Example:
-
- myUser:myPass@localhost:9005
+Example:
+
+    myUser:myPass@localhost:9005
+
Properties:
@@ -433,6 +433,28 @@ Properties:
- Type: string
- Required: false
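+
+For example, assuming the proxy above, an invocation might look like
+this (the credentials and remote name are illustrative):
+
+    rclone lsf ftp: --ftp-socks-proxy myUser:myPass@localhost:9005
+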
+#### --ftp-no-check-upload
+
+Don't check the upload is OK
+
+Normally rclone will try to check the upload exists after it has
+uploaded a file to make sure the size and modification time are as
+expected.
+
+This flag stops rclone doing these checks. This enables uploading to
+folders which are write-only.
+
+You will likely also need to use the --inplace flag if uploading to
+a write-only folder.
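+
+For example (the remote and paths are illustrative):
+
+    rclone copyto --inplace --ftp-no-check-upload report.pdf ftp:write-only-dir/report.pdf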
+
+
+Properties:
+
+- Config: no_check_upload
+- Env Var: RCLONE_FTP_NO_CHECK_UPLOAD
+- Type: bool
+- Default: false
+
#### --ftp-encoding
The encoding for the backend.
diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md
index 17fa908e1..c550ef5fd 100644
--- a/docs/content/googlecloudstorage.md
+++ b/docs/content/googlecloudstorage.md
@@ -412,20 +412,6 @@ Properties:
- Type: string
- Required: false
-#### --gcs-access-token
-
-Short-lived access token.
-
-Leave blank normally.
-Needed only if you want use short-lived access tokens instead of interactive login.
-
-Properties:
-
-- Config: access_token
-- Env Var: RCLONE_GCS_ACCESS_TOKEN
-- Type: string
-- Required: false
-
#### --gcs-anonymous
Access public buckets and objects without credentials.
@@ -687,6 +673,33 @@ Properties:
- Type: string
- Required: false
+#### --gcs-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_GCS_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
+#### --gcs-access-token
+
+Short-lived access token.
+
+Leave blank normally.
+Needed only if you want to use a short-lived access token instead of interactive login.
+
+Properties:
+
+- Config: access_token
+- Env Var: RCLONE_GCS_ACCESS_TOKEN
+- Type: string
+- Required: false
+
#### --gcs-directory-markers
Upload an empty object with a trailing slash when a new directory is created
diff --git a/docs/content/googlephotos.md b/docs/content/googlephotos.md
index 31154a92b..26faedcc6 100644
--- a/docs/content/googlephotos.md
+++ b/docs/content/googlephotos.md
@@ -313,6 +313,19 @@ Properties:
- Type: string
- Required: false
+#### --gphotos-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_GPHOTOS_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --gphotos-read-size
Set to read the size of media items.
@@ -364,6 +377,40 @@ Properties:
- Type: bool
- Default: false
+#### --gphotos-proxy
+
+Use the gphotosdl proxy for downloading the full resolution images
+
+The Google API will deliver images and video which aren't full
+resolution, and/or have EXIF data missing.
+
+However, if you use the gphotosdl proxy then you can download original,
+unchanged images.
+
+This runs a headless browser in the background.
+
+Download the software from [gphotosdl](https://github.com/rclone/gphotosdl)
+
+First run with
+
+ gphotosdl -login
+
+Then once you have logged into google photos close the browser window
+and run
+
+ gphotosdl
+
+Then supply the parameter `--gphotos-proxy "http://localhost:8282"` to make
+rclone use the proxy.
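+
+A full transfer might then look like this (the album name and local
+path are illustrative):
+
+    rclone copy --gphotos-proxy "http://localhost:8282" "gphotos:album/My Album" /tmp/my-album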
+
+
+Properties:
+
+- Config: proxy
+- Env Var: RCLONE_GPHOTOS_PROXY
+- Type: string
+- Required: false
+
#### --gphotos-encoding
The encoding for the backend.
diff --git a/docs/content/hidrive.md b/docs/content/hidrive.md
index 38c161409..d673ef8cb 100644
--- a/docs/content/hidrive.md
+++ b/docs/content/hidrive.md
@@ -282,6 +282,19 @@ Properties:
- Type: string
- Required: false
+#### --hidrive-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_HIDRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --hidrive-scope-role
User-level that rclone should use when requesting access from HiDrive.
diff --git a/docs/content/iclouddrive.md b/docs/content/iclouddrive.md
index 272797898..63023969b 100644
--- a/docs/content/iclouddrive.md
+++ b/docs/content/iclouddrive.md
@@ -109,7 +109,7 @@ Properties:
#### --iclouddrive-trust-token
-trust token (internal use)
+Trust token (internal use)
Properties:
@@ -133,6 +133,17 @@ Properties:
Here are the Advanced options specific to iclouddrive (iCloud Drive).
+#### --iclouddrive-client-id
+
+Client ID.
+
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_ICLOUDDRIVE_CLIENT_ID
+- Type: string
+- Default: "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d"
+
#### --iclouddrive-encoding
The encoding for the backend.
diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md
index f2e368e1e..0aec9e4d8 100644
--- a/docs/content/jottacloud.md
+++ b/docs/content/jottacloud.md
@@ -377,6 +377,19 @@ Properties:
- Type: string
- Required: false
+#### --jottacloud-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_JOTTACLOUD_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --jottacloud-md5-memory-limit
Files bigger than this will be cached on disk to calculate the MD5 if required.
diff --git a/docs/content/mailru.md b/docs/content/mailru.md
index 2ecbf152f..780278d8a 100644
--- a/docs/content/mailru.md
+++ b/docs/content/mailru.md
@@ -293,6 +293,19 @@ Properties:
- Type: string
- Required: false
+#### --mailru-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_MAILRU_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --mailru-speedup-file-patterns
Comma separated list of file name patterns eligible for speedup (put by hash).
diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md
index b2e599986..a14b7a79e 100644
--- a/docs/content/onedrive.md
+++ b/docs/content/onedrive.md
@@ -323,6 +323,21 @@ Properties:
- "cn"
- Azure and Office 365 operated by Vnet Group in China
+#### --onedrive-tenant
+
+ID of the service principal's tenant. Also called its directory ID.
+
+Set this if using
+- Client Credentials flow
+
+
+Properties:
+
+- Config: tenant
+- Env Var: RCLONE_ONEDRIVE_TENANT
+- Type: string
+- Required: false
+
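+For example, the client credentials flow needs the tenant together
+with a client ID and secret (a minimal sketch; the remote name and IDs
+are hypothetical):
+
+    rclone lsd onedrive: \
+        --onedrive-client-credentials \
+        --onedrive-tenant 00000000-0000-0000-0000-000000000000 \
+        --onedrive-client-id "my-app-id" \
+        --onedrive-client-secret "my-app-secret"
+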
### Advanced options
Here are the Advanced options specific to onedrive (Microsoft OneDrive).
@@ -364,6 +379,19 @@ Properties:
- Type: string
- Required: false
+#### --onedrive-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_ONEDRIVE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --onedrive-chunk-size
Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
diff --git a/docs/content/oracleobjectstorage/_index.md b/docs/content/oracleobjectstorage/_index.md
index 06022e480..39e8c4ad4 100644
--- a/docs/content/oracleobjectstorage/_index.md
+++ b/docs/content/oracleobjectstorage/_index.md
@@ -340,7 +340,9 @@ Properties:
#### --oos-compartment
-Object storage compartment OCID
+Specify the compartment OCID if you need to list buckets.
+
+Listing objects works without the compartment OCID.
Properties:
@@ -348,7 +350,7 @@ Properties:
- Env Var: RCLONE_OOS_COMPARTMENT
- Provider: !no_auth
- Type: string
-- Required: true
+- Required: false
#### --oos-region
diff --git a/docs/content/pcloud.md b/docs/content/pcloud.md
index adbc07ebb..9dbaf9b3e 100644
--- a/docs/content/pcloud.md
+++ b/docs/content/pcloud.md
@@ -224,6 +224,19 @@ Properties:
- Type: string
- Required: false
+#### --pcloud-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PCLOUD_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --pcloud-encoding
The encoding for the backend.
diff --git a/docs/content/pikpak.md b/docs/content/pikpak.md
index 6c2fab232..62e288dc8 100644
--- a/docs/content/pikpak.md
+++ b/docs/content/pikpak.md
@@ -111,68 +111,29 @@ Properties:
Here are the Advanced options specific to pikpak (PikPak).
-#### --pikpak-client-id
+#### --pikpak-device-id
-OAuth Client Id.
-
-Leave blank normally.
+Device ID used for authorization.
Properties:
-- Config: client_id
-- Env Var: RCLONE_PIKPAK_CLIENT_ID
+- Config: device_id
+- Env Var: RCLONE_PIKPAK_DEVICE_ID
- Type: string
- Required: false
-#### --pikpak-client-secret
+#### --pikpak-user-agent
-OAuth Client Secret.
+HTTP user agent for pikpak.
-Leave blank normally.
+Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on command line.
Properties:
-- Config: client_secret
-- Env Var: RCLONE_PIKPAK_CLIENT_SECRET
+- Config: user_agent
+- Env Var: RCLONE_PIKPAK_USER_AGENT
- Type: string
-- Required: false
-
-#### --pikpak-token
-
-OAuth Access Token as a JSON blob.
-
-Properties:
-
-- Config: token
-- Env Var: RCLONE_PIKPAK_TOKEN
-- Type: string
-- Required: false
-
-#### --pikpak-auth-url
-
-Auth server URL.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: auth_url
-- Env Var: RCLONE_PIKPAK_AUTH_URL
-- Type: string
-- Required: false
-
-#### --pikpak-token-url
-
-Token server url.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: token_url
-- Env Var: RCLONE_PIKPAK_TOKEN_URL
-- Type: string
-- Required: false
+- Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
#### --pikpak-root-folder-id
@@ -216,6 +177,19 @@ Properties:
- Type: bool
- Default: false
+#### --pikpak-no-media-link
+
+Use original file links instead of media links.
+
+This avoids issues caused by invalid media links, but may reduce download speeds.
+
+Properties:
+
+- Config: no_media_link
+- Env Var: RCLONE_PIKPAK_NO_MEDIA_LINK
+- Type: bool
+- Default: false
+
#### --pikpak-hash-memory-limit
Files bigger than this will be cached on disk to calculate hash if required.
diff --git a/docs/content/premiumizeme.md b/docs/content/premiumizeme.md
index 6ec591d2f..fc963f41f 100644
--- a/docs/content/premiumizeme.md
+++ b/docs/content/premiumizeme.md
@@ -189,6 +189,19 @@ Properties:
- Type: string
- Required: false
+#### --premiumizeme-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PREMIUMIZEME_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --premiumizeme-encoding
The encoding for the backend.
diff --git a/docs/content/putio.md b/docs/content/putio.md
index d30d7f8c6..37566c539 100644
--- a/docs/content/putio.md
+++ b/docs/content/putio.md
@@ -186,6 +186,19 @@ Properties:
- Type: string
- Required: false
+#### --putio-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_PUTIO_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --putio-encoding
The encoding for the backend.
diff --git a/docs/content/rc.md b/docs/content/rc.md
index e16055e32..541093bb9 100644
--- a/docs/content/rc.md
+++ b/docs/content/rc.md
@@ -2104,6 +2104,7 @@ This takes the following parameters
- `fs` - select the VFS in use (optional)
- `id` - a numeric ID as returned from `vfs/queue`
- `expiry` - a new expiry time as floating point seconds
+- `relative` - if set, expiry is to be treated as relative to the current expiry (optional, boolean)
This returns an empty result on success, or an error.
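+
+For example, to extend a queued item's expiry by 60 seconds relative to
+its current value (a sketch; the ID comes from `vfs/queue`):
+
+    rclone rc vfs/queue-set-expiry id=5 expiry=60 relative=true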
diff --git a/docs/content/s3.md b/docs/content/s3.md
index 7ef9d0638..280f5dd6d 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -789,7 +789,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-provider
@@ -842,6 +842,8 @@ Properties:
- Minio Object Storage
- "Netease"
- Netease Object Storage (NOS)
+ - "Outscale"
+ - OUTSCALE Object Storage (OOS)
- "Petabox"
- Petabox Object Storage
- "RackCorp"
@@ -852,6 +854,8 @@ Properties:
- Scaleway Object Storage
- "SeaweedFS"
- SeaweedFS S3
+ - "Selectel"
+ - Selectel Object Storage
- "StackPath"
- StackPath Object Storage
- "Storj"
@@ -1103,7 +1107,7 @@ Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
-- Provider: !Storj,Synology,Cloudflare
+- Provider: !Storj,Selectel,Synology,Cloudflare
- Type: string
- Required: false
- Examples:
@@ -1207,7 +1211,7 @@ Properties:
- "ONEZONE_IA"
- One Zone Infrequent Access storage class
- "GLACIER"
- - Glacier storage class
+ - Glacier Flexible Retrieval storage class
- "DEEP_ARCHIVE"
- Glacier Deep Archive storage class
- "INTELLIGENT_TIERING"
@@ -1217,7 +1221,7 @@ Properties:
### Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-bucket-acl
@@ -2051,6 +2055,41 @@ Properties:
- Type: Tristate
- Default: unset
+#### --s3-directory-bucket
+
+Set to use AWS Directory Buckets.
+
+If you are using an AWS Directory Bucket then set this flag.
+
+This will ensure no `Content-Md5` headers are sent and ensure `ETag`
+headers are not interpreted as MD5 sums. `X-Amz-Meta-Md5chksum` will
+be set on all objects whether single or multipart uploaded.
+
+This also sets `no_check_bucket = true`.
+
+Note that Directory Buckets do not support:
+
+- Versioning
+- `Content-Encoding: gzip`
+
+Rclone limitations with Directory Buckets:
+
+- rclone does not support creating Directory Buckets with `rclone mkdir`
+  or removing them with `rclone rmdir` yet.
+- Directory Buckets do not appear when doing `rclone lsf` at the top level.
+- Rclone can't remove auto-created directories yet. In theory this should
+  work with `directory_markers = true` but it doesn't.
+- Directories don't seem to appear in recursive (ListR) listings.
+
+
+Properties:
+
+- Config: directory_bucket
+- Env Var: RCLONE_S3_DIRECTORY_BUCKET
+- Provider: AWS
+- Type: bool
+- Default: false
+
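+For example, a remote for a Directory Bucket might be configured like
+this (a minimal sketch; the remote name and region are hypothetical):
+
+    [s3express]
+    type = s3
+    provider = AWS
+    region = us-east-1
+    directory_bucket = true
+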
#### --s3-sdk-log-mode
Set to debug the SDK
diff --git a/docs/content/sharefile.md b/docs/content/sharefile.md
index ee201f55e..e5fa1a227 100644
--- a/docs/content/sharefile.md
+++ b/docs/content/sharefile.md
@@ -246,6 +246,19 @@ Properties:
- Type: string
- Required: false
+#### --sharefile-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_SHAREFILE_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --sharefile-upload-cutoff
Cutoff for switching to multipart upload.
diff --git a/docs/content/webdav.md b/docs/content/webdav.md
index 548b3b9d4..357fea066 100644
--- a/docs/content/webdav.md
+++ b/docs/content/webdav.md
@@ -305,6 +305,29 @@ Properties:
- Type: string
- Required: false
+#### --webdav-auth-redirect
+
+Preserve authentication on redirect.
+
+If the server redirects rclone to a new domain when it is trying to
+read a file then normally rclone will drop the Authorization: header
+from the request.
+
+This is standard security practice to avoid sending your credentials
+to an unknown webserver.
+
+However, preserving the header is desirable in some circumstances. If
+you are getting an error like "401 Unauthorized" when rclone is
+attempting to read files from the webdav server then you can try this
+option.
+
+
+Properties:
+
+- Config: auth_redirect
+- Env Var: RCLONE_WEBDAV_AUTH_REDIRECT
+- Type: bool
+- Default: false
+
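+For example (a minimal sketch, assuming a WebDAV remote named `dav:`):
+
+    rclone copy --webdav-auth-redirect dav:path/to/dir /tmp/dir
+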
#### --webdav-description
Description of the remote.
diff --git a/docs/content/yandex.md b/docs/content/yandex.md
index 6c65b1adc..fd2951e97 100644
--- a/docs/content/yandex.md
+++ b/docs/content/yandex.md
@@ -186,6 +186,19 @@ Properties:
- Type: string
- Required: false
+#### --yandex-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_YANDEX_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --yandex-hard-delete
Delete files permanently rather than putting them into the trash.
diff --git a/docs/content/zoho.md b/docs/content/zoho.md
index e352f25d2..eebdb29d4 100644
--- a/docs/content/zoho.md
+++ b/docs/content/zoho.md
@@ -224,6 +224,19 @@ Properties:
- Type: string
- Required: false
+#### --zoho-client-credentials
+
+Use client credentials OAuth flow.
+
+This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.
+
+Properties:
+
+- Config: client_credentials
+- Env Var: RCLONE_ZOHO_CLIENT_CREDENTIALS
+- Type: bool
+- Default: false
+
#### --zoho-upload-cutoff
Cutoff for switching to large file upload api (>= 10 MiB).
diff --git a/rclone.1 b/rclone.1
index 12a2f7604..28e0c3304 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
-.TH "rclone" "1" "Sep 08, 2024" "User Manual" ""
+.TH "rclone" "1" "Jan 12, 2025" "User Manual" ""
.hy
.SH Rclone syncs your files to cloud storage
.PP
@@ -166,6 +166,8 @@ Citrix ShareFile
.IP \[bu] 2
Cloudflare R2
.IP \[bu] 2
+Cloudinary
+.IP \[bu] 2
DigitalOcean Spaces
.IP \[bu] 2
Digi Storage
@@ -198,6 +200,8 @@ HiDrive
.IP \[bu] 2
HTTP
.IP \[bu] 2
+iCloud Drive
+.IP \[bu] 2
ImageKit
.IP \[bu] 2
Internet Archive
@@ -252,6 +256,8 @@ Oracle Cloud Storage Swift
.IP \[bu] 2
Oracle Object Storage
.IP \[bu] 2
+Outscale
+.IP \[bu] 2
ownCloud
.IP \[bu] 2
pCloud
@@ -286,6 +292,8 @@ Seagate Lyve Cloud
.IP \[bu] 2
SeaweedFS
.IP \[bu] 2
+Selectel
+.IP \[bu] 2
SFTP
.IP \[bu] 2
Sia
@@ -1218,6 +1226,8 @@ Citrix ShareFile (https://rclone.org/sharefile/)
.IP \[bu] 2
Compress (https://rclone.org/compress/)
.IP \[bu] 2
+Cloudinary (https://rclone.org/cloudinary/)
+.IP \[bu] 2
Combine (https://rclone.org/combine/)
.IP \[bu] 2
Crypt (https://rclone.org/crypt/) - to encrypt other remotes
@@ -1253,6 +1263,8 @@ HiDrive (https://rclone.org/hidrive/)
.IP \[bu] 2
HTTP (https://rclone.org/http/)
.IP \[bu] 2
+iCloud Drive (https://rclone.org/iclouddrive/)
+.IP \[bu] 2
Internet Archive (https://rclone.org/internetarchive/)
.IP \[bu] 2
Jottacloud (https://rclone.org/jottacloud/)
@@ -1594,6 +1606,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don\[aq]t skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -1838,6 +1851,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don\[aq]t skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -2015,6 +2029,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don\[aq]t skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -3686,6 +3701,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don\[aq]t skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -5156,6 +5172,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don\[aq]t skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -6429,7 +6446,9 @@ manually:
\f[C]
# Linux
fusermount -u /path/to/local/mount
-# OS X
+# ... or on some systems
+fusermount3 -u /path/to/local/mount
+# OS X or Linux when using nfsmount
umount /path/to/local/mount
\f[R]
.fi
@@ -6859,9 +6878,11 @@ including \f[C]PATH\f[R] or \f[C]HOME\f[R].
This means that tilde (\f[C]\[ti]\f[R]) expansion will not work and you
should provide \f[C]--config\f[R] and \f[C]--cache-dir\f[R] explicitly
as absolute paths via rclone arguments.
-Since mounting requires the \f[C]fusermount\f[R] program, rclone will
-use the fallback PATH of \f[C]/bin:/usr/bin\f[R] in this scenario.
-Please ensure that \f[C]fusermount\f[R] is present on this PATH.
+Since mounting requires the \f[C]fusermount\f[R] or
+\f[C]fusermount3\f[R] program, rclone will use the fallback PATH of
+\f[C]/bin:/usr/bin\f[R] in this scenario.
+Please ensure that \f[C]fusermount\f[R]/\f[C]fusermount3\f[R] is present
+on this PATH.
.SS Rclone as Unix mount helper
.PP
The core Unix program \f[C]/bin/mount\f[R] normally takes the
@@ -7327,6 +7348,61 @@ adjust the number of parallel uploads of modified files from the cache
--transfers int Number of file transfers to run in parallel (default 4)
\f[R]
.fi
+.SS Symlinks
+.PP
+By default the VFS does not support symlinks.
+However this may be enabled with either of the following flags:
+.IP
+.nf
+\f[C]
+--links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension.
+--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+\f[R]
+.fi
+.PP
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension.
+So a file which appears as a symlink \f[C]link-to-file.txt\f[R] would be
+stored on cloud storage as \f[C]link-to-file.txt.rclonelink\f[R] and the
+contents would be the path to the symlink destination.
+.PP
+Note that \f[C]--links\f[R] enables symlink translation globally in
+rclone - this includes any backend which supports the concept (for
+example the local backend).
+\f[C]--vfs-links\f[R] just enables it for the VFS layer.
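+.PP
+For example, to mount a remote with symlink translation enabled in the
+VFS (a minimal sketch, assuming an existing remote named
+\f[C]remote:\f[R]):
+.IP
+.nf
+\f[C]
+rclone mount --vfs-links remote: /path/to/mountpoint
+\f[R]
+.fi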
+.PP
+This scheme is compatible with that used by the local backend with the
+--local-links flag (https://rclone.org/local/#symlinks-junction-points).
+.PP
+The \f[C]--vfs-links\f[R] flag has been designed for
+\f[C]rclone mount\f[R], \f[C]rclone nfsmount\f[R] and
+\f[C]rclone serve nfs\f[R].
+.PP
+It hasn\[aq]t been tested with the other \f[C]rclone serve\f[R] commands
+yet.
+.PP
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks.
+For example, given this directory tree:
+.IP
+.nf
+\f[C]
+\&.
+\[u251C]\[u2500]\[u2500] dir
+\[br]\ \ \[u2514]\[u2500]\[u2500] file.txt
+\[u2514]\[u2500]\[u2500] linked-dir -> dir
+\f[R]
+.fi
+.PP
+The VFS will correctly resolve \f[C]linked-dir\f[R] but not
+\f[C]linked-dir/file.txt\f[R].
+This is not a problem for the tested commands but may be for other
+commands.
+.PP
+\f[B]Note\f[R] that there is an outstanding issue with symlink support,
+issue #8245 (https://github.com/rclone/rclone/issues/8245), where
+duplicate files are created when symlinks are moved into directories
+where there is a file of the same name (or vice versa).
.SS VFS Case Sensitivity
.PP
Linux file systems are case-sensitive: two files can differ only by
@@ -7452,6 +7528,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for mount
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -7474,6 +7551,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -7605,6 +7683,7 @@ Flags for anything which can copy a file
-I, --ignore-times Don\[aq]t skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -7886,7 +7965,9 @@ manually:
\f[C]
# Linux
fusermount -u /path/to/local/mount
-# OS X
+# ... or on some systems
+fusermount3 -u /path/to/local/mount
+# OS X or Linux when using nfsmount
umount /path/to/local/mount
\f[R]
.fi
@@ -8317,9 +8398,11 @@ including \f[C]PATH\f[R] or \f[C]HOME\f[R].
This means that tilde (\f[C]\[ti]\f[R]) expansion will not work and you
should provide \f[C]--config\f[R] and \f[C]--cache-dir\f[R] explicitly
as absolute paths via rclone arguments.
-Since mounting requires the \f[C]fusermount\f[R] program, rclone will
-use the fallback PATH of \f[C]/bin:/usr/bin\f[R] in this scenario.
-Please ensure that \f[C]fusermount\f[R] is present on this PATH.
+Since mounting requires the \f[C]fusermount\f[R] or
+\f[C]fusermount3\f[R] program, rclone will use the fallback PATH of
+\f[C]/bin:/usr/bin\f[R] in this scenario.
+Please ensure that \f[C]fusermount\f[R]/\f[C]fusermount3\f[R] is present
+on this PATH.
.SS Rclone as Unix mount helper
.PP
The core Unix program \f[C]/bin/mount\f[R] normally takes the
@@ -8785,6 +8868,61 @@ adjust the number of parallel uploads of modified files from the cache
--transfers int Number of file transfers to run in parallel (default 4)
\f[R]
.fi
+.SS Symlinks
+.PP
+By default the VFS does not support symlinks.
+However this may be enabled with either of the following flags:
+.IP
+.nf
+\f[C]
+--links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension.
+--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+\f[R]
+.fi
+.PP
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension.
+So a file which appears as a symlink \f[C]link-to-file.txt\f[R] would be
+stored on cloud storage as \f[C]link-to-file.txt.rclonelink\f[R] and the
+contents would be the path to the symlink destination.
+.PP
+Note that \f[C]--links\f[R] enables symlink translation globally in
+rclone - this includes any backend which supports the concept (for
+example the local backend).
+\f[C]--vfs-links\f[R] just enables it for the VFS layer.
+.PP
+This scheme is compatible with that used by the local backend with the
+--local-links flag (https://rclone.org/local/#symlinks-junction-points).
+.PP
+The \f[C]--vfs-links\f[R] flag has been designed for
+\f[C]rclone mount\f[R], \f[C]rclone nfsmount\f[R] and
+\f[C]rclone serve nfs\f[R].
+.PP
+It hasn\[aq]t been tested with the other \f[C]rclone serve\f[R] commands
+yet.
+.PP
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks.
+For example, given this directory tree:
+.IP
+.nf
+\f[C]
+\&.
+\[u251C]\[u2500]\[u2500] dir
+\[br]\ \ \[u2514]\[u2500]\[u2500] file.txt
+\[u2514]\[u2500]\[u2500] linked-dir -> dir
+\f[R]
+.fi
+.PP
+The VFS will correctly resolve \f[C]linked-dir\f[R] but not
+\f[C]linked-dir/file.txt\f[R].
+This is not a problem for the tested commands but may be for other
+commands.
+.PP
+\f[B]Note\f[R] that there is an outstanding issue with symlink support,
+issue #8245 (https://github.com/rclone/rclone/issues/8245), where
+duplicate files are created when symlinks are moved into directories
+where there is a file of the same name (or vice versa).
.SS VFS Case Sensitivity
.PP
Linux file systems are case-sensitive: two files can differ only by
@@ -8911,6 +9049,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for nfsmount
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -8937,6 +9076,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -9268,8 +9408,6 @@ for info.
.PP
You can use a unix socket by setting the url to
\f[C]unix:///path/to/socket\f[R] or just by using an absolute path name.
-Note that unix sockets bypass the authentication - this is expected to
-be done with file system permissions.
.PP
\f[C]--rc-addr\f[R] may be repeated to listen on multiple
IPs/ports/sockets.
@@ -9302,11 +9440,13 @@ flags.
If you wish to do client side certificate validation then you will need
to supply \f[C]--rc-client-ca\f[R] also.
.PP
-\f[C]--rc-cert\f[R] should be a either a PEM encoded certificate or a
-concatenation of that with the CA certificate.
-\f[C]--krc-ey\f[R] should be the PEM encoded private key and
-\f[C]--rc-client-ca\f[R] should be the PEM encoded client certificate
-authority certificate.
+\f[C]--rc-cert\f[R] must be set to the path of a file containing either
+a PEM encoded certificate, or a concatenation of that with the CA
+certificate.
+\f[C]--rc-key\f[R] must be set to the path of a file with the PEM
+encoded private key.
+If setting \f[C]--rc-client-ca\f[R], it should be set to the path of a
+file with PEM encoded client certificate authority certificates.
.PP
\f[C]--rc-min-tls-version\f[R] is minimum TLS version that is
acceptable.
@@ -9316,7 +9456,7 @@ and \[dq]tls1.3\[dq] (default \[dq]tls1.0\[dq]).
.PP
Instead of the listening addresses specified above, rclone will listen
to all FDs passed by the service manager, if any (and ignore any
-arguments passed by --rc-addr\[ga]).
+arguments passed by \f[C]--rc-addr\f[R]).
.PP
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described
@@ -9527,7 +9667,7 @@ Flags to control the Remote Control API
.nf
\f[C]
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [\[dq]localhost:5572\[dq]])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -10177,6 +10317,61 @@ adjust the number of parallel uploads of modified files from the cache
--transfers int Number of file transfers to run in parallel (default 4)
\f[R]
.fi
+.SS Symlinks
+.PP
+By default the VFS does not support symlinks.
+However this may be enabled with either of the following flags:
+.IP
+.nf
+\f[C]
+--links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension.
+--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+\f[R]
+.fi
+.PP
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension.
+So a file which appears as a symlink \f[C]link-to-file.txt\f[R] would be
+stored on cloud storage as \f[C]link-to-file.txt.rclonelink\f[R] and the
+contents would be the path to the symlink destination.
+.PP
+Note that \f[C]--links\f[R] enables symlink translation globally in
+rclone - this includes any backend which supports the concept (for
+example the local backend).
+\f[C]--vfs-links\f[R] just enables it for the VFS layer.
+.PP
+This scheme is compatible with that used by the local backend with the
+--local-links flag (https://rclone.org/local/#symlinks-junction-points).
+.PP
+The \f[C]--vfs-links\f[R] flag has been designed for
+\f[C]rclone mount\f[R], \f[C]rclone nfsmount\f[R] and
+\f[C]rclone serve nfs\f[R].
+.PP
+It hasn\[aq]t been tested with the other \f[C]rclone serve\f[R] commands
+yet.
+.PP
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks.
+For example, given this directory tree:
+.IP
+.nf
+\f[C]
+\&.
+\[u251C]\[u2500]\[u2500] dir
+\[br]\ \ \[u2514]\[u2500]\[u2500] file.txt
+\[u2514]\[u2500]\[u2500] linked-dir -> dir
+\f[R]
+.fi
+.PP
+The VFS will correctly resolve \f[C]linked-dir\f[R] but not
+\f[C]linked-dir/file.txt\f[R].
+This is not a problem for the tested commands but may be for other
+commands.
+.PP
+\f[B]Note\f[R] that there is an outstanding issue with symlink support,
+issue #8245 (https://github.com/rclone/rclone/issues/8245), where
+duplicate files are created when symlinks are moved into directories
+where there is a file of the same name (or vice versa).
.SS VFS Case Sensitivity
.PP
Linux file systems are case-sensitive: two files can differ only by
@@ -10292,6 +10487,7 @@ rclone serve dlna remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
--interface stringArray The interface to use for SSDP (repeat as necessary)
+ --link-perms FileMode Link permissions (default 666)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don\[aq]t compare checksums on up/download
@@ -10310,6 +10506,7 @@ rclone serve dlna remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -10781,6 +10978,61 @@ adjust the number of parallel uploads of modified files from the cache
--transfers int Number of file transfers to run in parallel (default 4)
\f[R]
.fi
+.SS Symlinks
+.PP
+By default the VFS does not support symlinks.
+However this may be enabled with either of the following flags:
+.IP
+.nf
+\f[C]
+--links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension.
+--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+\f[R]
+.fi
+.PP
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension.
+So a file which appears as a symlink \f[C]link-to-file.txt\f[R] would be
+stored on cloud storage as \f[C]link-to-file.txt.rclonelink\f[R] and the
+contents would be the path to the symlink destination.
+.PP
+Note that \f[C]--links\f[R] enables symlink translation globally in
+rclone - this includes any backend which supports the concept (for
+example the local backend).
+\f[C]--vfs-links\f[R] just enables it for the VFS layer.
+.PP
+This scheme is compatible with that used by the local backend with the
+--local-links flag (https://rclone.org/local/#symlinks-junction-points).
+.PP
+The \f[C]--vfs-links\f[R] flag has been designed for
+\f[C]rclone mount\f[R], \f[C]rclone nfsmount\f[R] and
+\f[C]rclone serve nfs\f[R].
+.PP
+It hasn\[aq]t been tested with the other \f[C]rclone serve\f[R] commands
+yet.
+.PP
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks.
+For example, given this directory tree:
+.IP
+.nf
+\f[C]
+\&.
+\[u251C]\[u2500]\[u2500] dir
+\[br]\ \ \[u2514]\[u2500]\[u2500] file.txt
+\[u2514]\[u2500]\[u2500] linked-dir -> dir
+\f[R]
+.fi
+.PP
+The VFS will correctly resolve \f[C]linked-dir\f[R] but not
+\f[C]linked-dir/file.txt\f[R].
+This is not a problem for the tested commands but may be for other
+commands.
+.PP
+\f[B]Note\f[R] that there is an outstanding issue with symlink support,
+issue #8245 (https://github.com/rclone/rclone/issues/8245), where
+duplicate files are created when symlinks are moved into directories
+where there is a file of the same name (or vice versa).
.SS VFS Case Sensitivity
.PP
Linux file systems are case-sensitive: two files can differ only by
@@ -10908,6 +11160,7 @@ rclone serve docker [flags]
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for docker
+ --link-perms FileMode Link permissions (default 666)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -10933,6 +11186,7 @@ rclone serve docker [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -11376,6 +11630,61 @@ adjust the number of parallel uploads of modified files from the cache
--transfers int Number of file transfers to run in parallel (default 4)
\f[R]
.fi
+.SS Symlinks
+.PP
+By default the VFS does not support symlinks.
+However this may be enabled with either of the following flags:
+.IP
+.nf
+\f[C]
+--links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension.
+--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+\f[R]
+.fi
+.PP
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension.
+So a file which appears as a symlink \f[C]link-to-file.txt\f[R] would be
+stored on cloud storage as \f[C]link-to-file.txt.rclonelink\f[R] and the
+contents would be the path to the symlink destination.
+.PP
+Note that \f[C]--links\f[R] enables symlink translation globally in
+rclone - this includes any backend which supports the concept (for
+example the local backend).
+\f[C]--vfs-links\f[R] just enables it for the VFS layer.
+.PP
+This scheme is compatible with that used by the local backend with the
+--local-links flag (https://rclone.org/local/#symlinks-junction-points).
+.PP
+The \f[C]--vfs-links\f[R] flag has been designed for
+\f[C]rclone mount\f[R], \f[C]rclone nfsmount\f[R] and
+\f[C]rclone serve nfs\f[R].
+.PP
+It hasn\[aq]t been tested with the other \f[C]rclone serve\f[R] commands
+yet.
+.PP
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks.
+For example, given this directory tree:
+.IP
+.nf
+\f[C]
+\&.
+\[u251C]\[u2500]\[u2500] dir
+\[br]\ \ \[u2514]\[u2500]\[u2500] file.txt
+\[u2514]\[u2500]\[u2500] linked-dir -> dir
+\f[R]
+.fi
+.PP
+The VFS will correctly resolve \f[C]linked-dir\f[R] but not
+\f[C]linked-dir/file.txt\f[R].
+This is not a problem for the tested commands but may be for other
+commands.
+.PP
+\f[B]Note\f[R] that there is an outstanding issue with symlink support,
+issue #8245 (https://github.com/rclone/rclone/issues/8245), where
+duplicate files are created when symlinks are moved into directories
+where there is a file of the same name (or vice versa).
.SS VFS Case Sensitivity
.PP
Linux file systems are case-sensitive: two files can differ only by
@@ -11585,6 +11894,7 @@ rclone serve ftp remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for ftp
--key string TLS PEM Private key
+ --link-perms FileMode Link permissions (default 666)
--no-checksum Don\[aq]t compare checksums on up/download
--no-modtime Don\[aq]t read/write the modification time (can speed things up)
--no-seek Don\[aq]t allow seeking in files
@@ -11605,6 +11915,7 @@ rclone serve ftp remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -11685,8 +11996,6 @@ info.
.PP
You can use a unix socket by setting the url to
\f[C]unix:///path/to/socket\f[R] or just by using an absolute path name.
-Note that unix sockets bypass the authentication - this is expected to
-be done with file system permissions.
.PP
\f[C]--addr\f[R] may be repeated to listen on multiple
IPs/ports/sockets.
@@ -11717,11 +12026,13 @@ You will need to supply the \f[C]--cert\f[R] and \f[C]--key\f[R] flags.
If you wish to do client side certificate validation then you will need
to supply \f[C]--client-ca\f[R] also.
.PP
-\f[C]--cert\f[R] should be a either a PEM encoded certificate or a
-concatenation of that with the CA certificate.
-\f[C]--key\f[R] should be the PEM encoded private key and
-\f[C]--client-ca\f[R] should be the PEM encoded client certificate
-authority certificate.
+\f[C]--cert\f[R] must be set to the path of a file containing either a
+PEM encoded certificate, or a concatenation of that with the CA
+certificate.
+\f[C]--key\f[R] must be set to the path of a file with the PEM encoded
+private key.
+If setting \f[C]--client-ca\f[R], it should be set to the path of a file
+with PEM encoded client certificate authority certificates.
.PP
\f[C]--min-tls-version\f[R] is minimum TLS version that is acceptable.
Valid values are \[dq]tls1.0\[dq], \[dq]tls1.1\[dq], \[dq]tls1.2\[dq]
@@ -11730,7 +12041,7 @@ and \[dq]tls1.3\[dq] (default \[dq]tls1.0\[dq]).
.PP
Instead of the listening addresses specified above, rclone will listen
to all FDs passed by the service manager, if any (and ignore any
-arguments passed by --addr\[ga]).
+arguments passed by \f[C]--addr\f[R]).
.PP
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described
@@ -12284,6 +12595,61 @@ adjust the number of parallel uploads of modified files from the cache
--transfers int Number of file transfers to run in parallel (default 4)
\f[R]
.fi
+.SS Symlinks
+.PP
+By default the VFS does not support symlinks.
+However this may be enabled with either of the following flags:
+.IP
+.nf
+\f[C]
+--links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension.
+--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+\f[R]
+.fi
+.PP
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension.
+So a file which appears as a symlink \f[C]link-to-file.txt\f[R] would be
+stored on cloud storage as \f[C]link-to-file.txt.rclonelink\f[R] and the
+contents would be the path to the symlink destination.
+.PP
+Note that \f[C]--links\f[R] enables symlink translation globally in
+rclone - this includes any backend which supports the concept (for
+example the local backend).
+\f[C]--vfs-links\f[R] just enables it for the VFS layer.
+.PP
+This scheme is compatible with that used by the local backend with the
+--local-links flag (https://rclone.org/local/#symlinks-junction-points).
+.PP
+The \f[C]--vfs-links\f[R] flag has been designed for
+\f[C]rclone mount\f[R], \f[C]rclone nfsmount\f[R] and
+\f[C]rclone serve nfs\f[R].
+.PP
+It hasn\[aq]t been tested with the other \f[C]rclone serve\f[R] commands
+yet.
+.PP
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks.
+For example, given this directory tree:
+.IP
+.nf
+\f[C]
+\&.
+\[u251C]\[u2500]\[u2500] dir
+\[br]\ \ \[u2514]\[u2500]\[u2500] file.txt
+\[u2514]\[u2500]\[u2500] linked-dir -> dir
+\f[R]
+.fi
+.PP
+The VFS will correctly resolve \f[C]linked-dir\f[R] but not
+\f[C]linked-dir/file.txt\f[R].
+This is not a problem for the tested commands but may be for other
+commands.
+.PP
+\f[B]Note\f[R] that there is an outstanding issue with symlink support,
+issue #8245 (https://github.com/rclone/rclone/issues/8245), where
+duplicate files are created when symlinks are moved into directories
+where there is a file of the same name (or vice versa).
.SS VFS Case Sensitivity
.PP
Linux file systems are case-sensitive: two files can differ only by
@@ -12488,15 +12854,16 @@ rclone serve http remote:path [flags]
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
--no-checksum Don\[aq]t compare checksums on up/download
@@ -12522,6 +12889,7 @@ rclone serve http remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -12625,6 +12993,10 @@ which improves performance.
This sort of cache can\[aq]t be backed up and restored as the underlying
handles will change.
This is Linux only.
+It requires running rclone as root or with \f[C]CAP_DAC_READ_SEARCH\f[R].
+You can grant rclone this extra capability by running the following on
+the rclone binary:
+\f[C]sudo setcap cap_dac_read_search+ep /path/to/rclone\f[R].
.PP
\f[C]--nfs-cache-handle-limit\f[R] controls the maximum number of cached
NFS handles stored by the caching handler.
@@ -13024,6 +13396,61 @@ adjust the number of parallel uploads of modified files from the cache
--transfers int Number of file transfers to run in parallel (default 4)
\f[R]
.fi
+.SS Symlinks
+.PP
+By default the VFS does not support symlinks.
+However this may be enabled with either of the following flags:
+.IP
+.nf
+\f[C]
+--links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension.
+--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+\f[R]
+.fi
+.PP
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension.
+So a file which appears as a symlink \f[C]link-to-file.txt\f[R] would be
+stored on cloud storage as \f[C]link-to-file.txt.rclonelink\f[R] and the
+contents would be the path to the symlink destination.
+.PP
+Note that \f[C]--links\f[R] enables symlink translation globally in
+rclone - this includes any backend which supports the concept (for
+example the local backend).
+\f[C]--vfs-links\f[R] just enables it for the VFS layer.
+.PP
+This scheme is compatible with that used by the local backend with the
+--local-links flag (https://rclone.org/local/#symlinks-junction-points).
+.PP
+The \f[C]--vfs-links\f[R] flag has been designed for
+\f[C]rclone mount\f[R], \f[C]rclone nfsmount\f[R] and
+\f[C]rclone serve nfs\f[R].
+.PP
+It hasn\[aq]t been tested with the other \f[C]rclone serve\f[R] commands
+yet.
+.PP
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks.
+For example, given this directory tree:
+.IP
+.nf
+\f[C]
+\&.
+\[u251C]\[u2500]\[u2500] dir
+\[br]\ \ \[u2514]\[u2500]\[u2500] file.txt
+\[u2514]\[u2500]\[u2500] linked-dir -> dir
+\f[R]
+.fi
+.PP
+The VFS will correctly resolve \f[C]linked-dir\f[R] but not
+\f[C]linked-dir/file.txt\f[R].
+This is not a problem for the tested commands but may be for other
+commands.
+.PP
+\f[B]Note\f[R] that there is an outstanding issue with symlink support,
+issue #8245 (https://github.com/rclone/rclone/issues/8245), where
+duplicate files are created when symlinks are moved into directories
+where there is a file of the same name (or vice versa).
.SS VFS Case Sensitivity
.PP
Linux file systems are case-sensitive: two files can differ only by
@@ -13137,6 +13564,7 @@ rclone serve nfs remote:path [flags]
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for nfs
+ --link-perms FileMode Link permissions (default 666)
--nfs-cache-dir string The directory the NFS handle cache will use if set
--nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000)
--nfs-cache-type memory|disk|symlink Type of NFS handle cache to use (default memory)
@@ -13156,6 +13584,7 @@ rclone serve nfs remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -13318,8 +13747,6 @@ info.
.PP
You can use a unix socket by setting the url to
\f[C]unix:///path/to/socket\f[R] or just by using an absolute path name.
-Note that unix sockets bypass the authentication - this is expected to
-be done with file system permissions.
.PP
\f[C]--addr\f[R] may be repeated to listen on multiple
IPs/ports/sockets.
@@ -13350,11 +13777,13 @@ You will need to supply the \f[C]--cert\f[R] and \f[C]--key\f[R] flags.
If you wish to do client side certificate validation then you will need
to supply \f[C]--client-ca\f[R] also.
.PP
-\f[C]--cert\f[R] should be a either a PEM encoded certificate or a
-concatenation of that with the CA certificate.
-\f[C]--key\f[R] should be the PEM encoded private key and
-\f[C]--client-ca\f[R] should be the PEM encoded client certificate
-authority certificate.
+\f[C]--cert\f[R] must be set to the path of a file containing either a
+PEM encoded certificate, or a concatenation of that with the CA
+certificate.
+\f[C]--key\f[R] must be set to the path of a file with the PEM encoded
+private key.
+If setting \f[C]--client-ca\f[R], it should be set to the path of a file
+with PEM encoded client certificate authority certificates.
.PP
\f[C]--min-tls-version\f[R] is minimum TLS version that is acceptable.
Valid values are \[dq]tls1.0\[dq], \[dq]tls1.1\[dq], \[dq]tls1.2\[dq]
@@ -13363,7 +13792,7 @@ and \[dq]tls1.3\[dq] (default \[dq]tls1.0\[dq]).
.PP
Instead of the listening addresses specified above, rclone will listen
to all FDs passed by the service manager, if any (and ignore any
-arguments passed by --addr\[ga]).
+arguments passed by \f[C]--addr\f[R]).
.PP
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described
@@ -13430,11 +13859,11 @@ rclone serve restic remote:path [flags]
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
--pass string Password for authentication
@@ -13675,8 +14104,6 @@ info.
.PP
You can use a unix socket by setting the url to
\f[C]unix:///path/to/socket\f[R] or just by using an absolute path name.
-Note that unix sockets bypass the authentication - this is expected to
-be done with file system permissions.
.PP
\f[C]--addr\f[R] may be repeated to listen on multiple
IPs/ports/sockets.
@@ -13707,11 +14134,13 @@ You will need to supply the \f[C]--cert\f[R] and \f[C]--key\f[R] flags.
If you wish to do client side certificate validation then you will need
to supply \f[C]--client-ca\f[R] also.
.PP
-\f[C]--cert\f[R] should be a either a PEM encoded certificate or a
-concatenation of that with the CA certificate.
-\f[C]--key\f[R] should be the PEM encoded private key and
-\f[C]--client-ca\f[R] should be the PEM encoded client certificate
-authority certificate.
+\f[C]--cert\f[R] must be set to the path of a file containing either a
+PEM encoded certificate, or a concatenation of that with the CA
+certificate.
+\f[C]--key\f[R] must be set to the path of a file with the PEM encoded
+private key.
+If setting \f[C]--client-ca\f[R], it should be set to the path of a file
+with PEM encoded client certificate authority certificates.
.PP
\f[C]--min-tls-version\f[R] is minimum TLS version that is acceptable.
Valid values are \[dq]tls1.0\[dq], \[dq]tls1.1\[dq], \[dq]tls1.2\[dq]
@@ -13720,7 +14149,7 @@ and \[dq]tls1.3\[dq] (default \[dq]tls1.0\[dq]).
.PP
Instead of the listening addresses specified above, rclone will listen
to all FDs passed by the service manager, if any (and ignore any
-arguments passed by --addr\[ga]).
+arguments passed by \f[C]--addr\f[R]).
.PP
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described
@@ -14106,6 +14535,61 @@ adjust the number of parallel uploads of modified files from the cache
--transfers int Number of file transfers to run in parallel (default 4)
\f[R]
.fi
+.SS Symlinks
+.PP
+By default the VFS does not support symlinks.
+However this may be enabled with either of the following flags:
+.IP
+.nf
+\f[C]
+--links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension.
+--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+\f[R]
+.fi
+.PP
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension.
+So a file which appears as a symlink \f[C]link-to-file.txt\f[R] would be
+stored on cloud storage as \f[C]link-to-file.txt.rclonelink\f[R] and the
+contents would be the path to the symlink destination.
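+.PP
+For example, a sketch (the remote name is a placeholder):
+.IP
+.nf
+\f[C]
+ln -s file.txt link-to-file.txt      # create a symlink locally
+rclone copy --links . remote:backup  # stored as link-to-file.txt.rclonelink
+rclone cat remote:backup/link-to-file.txt.rclonelink  # prints: file.txt
+\f[R]
+.fi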
+.PP
+Note that \f[C]--links\f[R] enables symlink translation globally in
+rclone - this includes any backend which supports the concept (for
+example the local backend).
+\f[C]--vfs-links\f[R] just enables it for the VFS layer.
+.PP
+This scheme is compatible with that used by the local backend with the
+--local-links flag (https://rclone.org/local/#symlinks-junction-points).
+.PP
+The \f[C]--vfs-links\f[R] flag has been designed for
+\f[C]rclone mount\f[R], \f[C]rclone nfsmount\f[R] and
+\f[C]rclone serve nfs\f[R].
+.PP
+It hasn\[aq]t been tested with the other \f[C]rclone serve\f[R] commands
+yet.
+.PP
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks.
+For example, given this directory tree:
+.IP
+.nf
+\f[C]
+\&.
+\[u251C]\[u2500]\[u2500] dir
+\[br]\ \ \[u2514]\[u2500]\[u2500] file.txt
+\[u2514]\[u2500]\[u2500] linked-dir -> dir
+\f[R]
+.fi
+.PP
+The VFS will correctly resolve \f[C]linked-dir\f[R] but not
+\f[C]linked-dir/file.txt\f[R].
+This is not a problem for the tested commands but may be for other
+commands.
+.PP
+\f[B]Note\f[R] that there is an outstanding issue with symlink support,
+issue #8245 (https://github.com/rclone/rclone/issues/8245), which can
+cause duplicate files to be created when symlinks are moved into
+directories where there is a file of the same name (or vice versa).
.SS VFS Case Sensitivity
.PP
Linux file systems are case-sensitive: two files can differ only by
@@ -14218,8 +14702,8 @@ rclone serve s3 remote:path [flags]
--auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--etag-hash string Which hash to use for the ETag, or auto or blank for off (default \[dq]MD5\[dq])
@@ -14228,7 +14712,8 @@ rclone serve s3 remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for s3
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
--no-checksum Don\[aq]t compare checksums on up/download
@@ -14254,6 +14739,7 @@ rclone serve s3 remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -14755,6 +15241,61 @@ adjust the number of parallel uploads of modified files from the cache
--transfers int Number of file transfers to run in parallel (default 4)
\f[R]
.fi
+.SS Symlinks
+.PP
+By default the VFS does not support symlinks.
+However this may be enabled with either of the following flags:
+.IP
+.nf
+\f[C]
+--links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension.
+--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+\f[R]
+.fi
+.PP
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension.
+So a file which appears as a symlink \f[C]link-to-file.txt\f[R] would be
+stored on cloud storage as \f[C]link-to-file.txt.rclonelink\f[R] and the
+contents would be the path to the symlink destination.
+.PP
+Note that \f[C]--links\f[R] enables symlink translation globally in
+rclone - this includes any backend which supports the concept (for
+example the local backend).
+\f[C]--vfs-links\f[R] just enables it for the VFS layer.
+.PP
+This scheme is compatible with that used by the local backend with the
+--local-links flag (https://rclone.org/local/#symlinks-junction-points).
+.PP
+The \f[C]--vfs-links\f[R] flag has been designed for
+\f[C]rclone mount\f[R], \f[C]rclone nfsmount\f[R] and
+\f[C]rclone serve nfs\f[R].
+.PP
+It hasn\[aq]t been tested with the other \f[C]rclone serve\f[R] commands
+yet.
+.PP
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks.
+For example, given this directory tree:
+.IP
+.nf
+\f[C]
+\&.
+\[u251C]\[u2500]\[u2500] dir
+\[br]\ \ \[u2514]\[u2500]\[u2500] file.txt
+\[u2514]\[u2500]\[u2500] linked-dir -> dir
+\f[R]
+.fi
+.PP
+The VFS will correctly resolve \f[C]linked-dir\f[R] but not
+\f[C]linked-dir/file.txt\f[R].
+This is not a problem for the tested commands but may be for other
+commands.
+.PP
+\f[B]Note\f[R] that there is an outstanding issue with symlink support,
+issue #8245 (https://github.com/rclone/rclone/issues/8245), which can
+cause duplicate files to be created when symlinks are moved into
+directories where there is a file of the same name (or vice versa).
.SS VFS Case Sensitivity
.PP
Linux file systems are case-sensitive: two files can differ only by
@@ -14964,6 +15505,7 @@ rclone serve sftp remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for sftp
--key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
+ --link-perms FileMode Link permissions (default 666)
--no-auth Allow connections with no authentication if set
--no-checksum Don\[aq]t compare checksums on up/download
--no-modtime Don\[aq]t read/write the modification time (can speed things up)
@@ -14984,6 +15526,7 @@ rclone serve sftp remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -15117,8 +15660,6 @@ info.
.PP
You can use a unix socket by setting the url to
\f[C]unix:///path/to/socket\f[R] or just by using an absolute path name.
-Note that unix sockets bypass the authentication - this is expected to
-be done with file system permissions.
.PP
\f[C]--addr\f[R] may be repeated to listen on multiple
IPs/ports/sockets.
@@ -15149,11 +15690,13 @@ You will need to supply the \f[C]--cert\f[R] and \f[C]--key\f[R] flags.
If you wish to do client side certificate validation then you will need
to supply \f[C]--client-ca\f[R] also.
.PP
-\f[C]--cert\f[R] should be a either a PEM encoded certificate or a
-concatenation of that with the CA certificate.
-\f[C]--key\f[R] should be the PEM encoded private key and
-\f[C]--client-ca\f[R] should be the PEM encoded client certificate
-authority certificate.
+\f[C]--cert\f[R] must be set to the path of a file containing either a
+PEM encoded certificate, or a concatenation of that with the CA
+certificate.
+\f[C]--key\f[R] must be set to the path of a file with the PEM encoded
+private key.
+If setting \f[C]--client-ca\f[R], it should be set to the path of a file
+with PEM encoded client certificate authority certificates.
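+.PP
+For example, a sketch (substitute your serve command; the paths are
+illustrative):
+.IP
+.nf
+\f[C]
+rclone serve webdav remote:path --addr :8443 \[rs]
+    --cert /etc/ssl/rclone.crt --key /etc/ssl/rclone.key
+\f[R]
+.fi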
.PP
\f[C]--min-tls-version\f[R] is minimum TLS version that is acceptable.
Valid values are \[dq]tls1.0\[dq], \[dq]tls1.1\[dq], \[dq]tls1.2\[dq]
@@ -15162,7 +15705,7 @@ and \[dq]tls1.3\[dq] (default \[dq]tls1.0\[dq]).
.PP
Instead of the listening addresses specified above, rclone will listen
to all FDs passed by the service manager, if any (and ignore any
-arguments passed by --addr\[ga]).
+arguments passed by \f[C]--addr\f[R]).
.PP
This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described
@@ -15716,6 +16259,61 @@ adjust the number of parallel uploads of modified files from the cache
--transfers int Number of file transfers to run in parallel (default 4)
\f[R]
.fi
+.SS Symlinks
+.PP
+By default the VFS does not support symlinks.
+However this may be enabled with either of the following flags:
+.IP
+.nf
+\f[C]
+--links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension.
+--vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
+\f[R]
+.fi
+.PP
+As most cloud storage systems do not support symlinks directly, rclone
+stores the symlink as a normal file with a special extension.
+So a file which appears as a symlink \f[C]link-to-file.txt\f[R] would be
+stored on cloud storage as \f[C]link-to-file.txt.rclonelink\f[R] and the
+contents would be the path to the symlink destination.
+.PP
+Note that \f[C]--links\f[R] enables symlink translation globally in
+rclone - this includes any backend which supports the concept (for
+example the local backend).
+\f[C]--vfs-links\f[R] just enables it for the VFS layer.
+.PP
+This scheme is compatible with that used by the local backend with the
+--local-links flag (https://rclone.org/local/#symlinks-junction-points).
+.PP
+The \f[C]--vfs-links\f[R] flag has been designed for
+\f[C]rclone mount\f[R], \f[C]rclone nfsmount\f[R] and
+\f[C]rclone serve nfs\f[R].
+.PP
+It hasn\[aq]t been tested with the other \f[C]rclone serve\f[R] commands
+yet.
+.PP
+A limitation of the current implementation is that it expects the caller
+to resolve sub-symlinks.
+For example, given this directory tree:
+.IP
+.nf
+\f[C]
+\&.
+\[u251C]\[u2500]\[u2500] dir
+\[br]\ \ \[u2514]\[u2500]\[u2500] file.txt
+\[u2514]\[u2500]\[u2500] linked-dir -> dir
+\f[R]
+.fi
+.PP
+The VFS will correctly resolve \f[C]linked-dir\f[R] but not
+\f[C]linked-dir/file.txt\f[R].
+This is not a problem for the tested commands but may be for other
+commands.
+.PP
+\f[B]Note\f[R] that there is an outstanding issue with symlink support,
+issue #8245 (https://github.com/rclone/rclone/issues/8245), which can
+cause duplicate files to be created when symlinks are moved into
+directories where there is a file of the same name (or vice versa).
.SS VFS Case Sensitivity
.PP
Linux file systems are case-sensitive: two files can differ only by
@@ -15920,8 +16518,8 @@ rclone serve webdav remote:path [flags]
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
@@ -15930,7 +16528,8 @@ rclone serve webdav remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
+ --link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
--no-checksum Don\[aq]t compare checksums on up/download
@@ -15956,6 +16555,7 @@ rclone serve webdav remote:path [flags]
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -16262,6 +16862,7 @@ rclone test makefiles [flags]
--chargen Fill files with a ASCII chargen pattern
--files int Number of files to create (default 1000)
--files-per-directory int Average number of files per directory (default 10)
+ --flat If set create all files in the root directory
-h, --help help for makefiles
--max-depth int Maximum depth of directory hierarchy (default 10)
--max-file-size SizeSuffix Maximum size of files to create (default 100)
@@ -18180,6 +18781,22 @@ them.
.SS --leave-root
.PP
During rmdirs it will not remove root directory, even if it\[aq]s empty.
+.SS --links / -l
+.PP
+Normally rclone will ignore symlinks or junction points (which behave
+like symlinks under Windows).
+.PP
+If you supply this flag then rclone will copy symbolic links from any
+supported backend, and store them as text files, with a
+\f[C].rclonelink\f[R] suffix in the destination.
+.PP
+The text file will contain the target of the symbolic link.
+.PP
+The \f[C]--links\f[R] / \f[C]-l\f[R] flag enables this feature for all
+supported backends and the VFS.
+There are individual flags for just enabling it for the VFS
+\f[C]--vfs-links\f[R] and the local backend \f[C]--local-links\f[R] if
+required.
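+.PP
+For example, a sketch of round-tripping a symlink (the remote and paths
+are illustrative):
+.IP
+.nf
+\f[C]
+ln -s /shared/data data-link
+rclone copy -l . remote:dir              # stored as data-link.rclonelink
+rclone copy -l remote:dir /tmp/restore   # data-link is recreated as a symlink
+\f[R]
+.fi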
.SS --log-file=FILE
.PP
Log all of rclone\[aq]s output to FILE.
@@ -19781,11 +20398,11 @@ If rclone has done a retry it will log a high priority message if the
retry was successful.
.SS List of exit codes
.IP \[bu] 2
-\f[C]0\f[R] - success
+\f[C]0\f[R] - Success
.IP \[bu] 2
-\f[C]1\f[R] - Syntax or usage error
+\f[C]1\f[R] - Error not otherwise categorised
.IP \[bu] 2
-\f[C]2\f[R] - Error not otherwise categorised
+\f[C]2\f[R] - Syntax or usage error
.IP \[bu] 2
\f[C]3\f[R] - Directory not found
.IP \[bu] 2
@@ -19836,6 +20453,49 @@ they take exactly the same form.
The options set by environment variables can be seen with the
\f[C]-vv\f[R] flag, e.g.
\f[C]rclone version -vv\f[R].
+.PP
+Options that can appear multiple times (type \f[C]stringArray\f[R]) are
+treated slightly differently, as environment variables can only be
+defined once.
+In order to allow a simple mechanism for adding one or many items, the
+input is treated as a CSV encoded (https://godoc.org/encoding/csv)
+string.
+For example
+.PP
+.TS
+tab(@);
+lw(36.7n) lw(33.3n).
+T{
+Environment Variable
+T}@T{
+Equivalent options
+T}
+_
+T{
+\f[C]RCLONE_EXCLUDE=\[dq]*.jpg\[dq]\f[R]
+T}@T{
+\f[C]--exclude \[dq]*.jpg\[dq]\f[R]
+T}
+T{
+\f[C]RCLONE_EXCLUDE=\[dq]*.jpg,*.png\[dq]\f[R]
+T}@T{
+\f[C]--exclude \[dq]*.jpg\[dq]\f[R] \f[C]--exclude \[dq]*.png\[dq]\f[R]
+T}
+T{
+\f[C]RCLONE_EXCLUDE=\[aq]\[dq]*.jpg\[dq],\[dq]*.png\[dq]\[aq]\f[R]
+T}@T{
+\f[C]--exclude \[dq]*.jpg\[dq]\f[R] \f[C]--exclude \[dq]*.png\[dq]\f[R]
+T}
+T{
+\f[C]RCLONE_EXCLUDE=\[aq]\[dq]/directory with comma , in it /**\[dq]\[aq]\f[R]
+T}@T{
+\f[C]--exclude \[dq]/directory with comma , in it /**\[dq]\f[R]
+T}
+.TE
+.PP
+If \f[C]stringArray\f[R] options are defined as environment variables
+\f[B]and\f[R] options on the command line then all the values will be
+used.
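+.PP
+For example, a sketch combining both (the patterns are illustrative):
+.IP
+.nf
+\f[C]
+export RCLONE_EXCLUDE=\[dq]*.jpg,*.png\[dq]
+rclone copy --exclude \[dq]*.gif\[dq] /source remote:dest
+# files matching *.jpg, *.png and *.gif are all excluded
+\f[R]
+.fi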
.SS Config file
.PP
You can set defaults for values in the config file on an individual
@@ -20860,10 +21520,10 @@ flags with \f[C]--exclude\f[R], \f[C]--exclude-from\f[R],
rules for all the files you want in the include statement.
For more flexibility use the \f[C]--filter-from\f[R] flag.
.PP
-\f[C]--exclude-from\f[R] has no effect when combined with
+\f[C]--include-from\f[R] has no effect when combined with
\f[C]--files-from\f[R] or \f[C]--files-from-raw\f[R] flags.
.PP
-\f[C]--exclude-from\f[R] followed by \f[C]-\f[R] reads filter rules from
+\f[C]--include-from\f[R] followed by \f[C]-\f[R] reads filter rules from
standard input.
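+.PP
+For example, a sketch reading a single include rule from standard input
+(the paths are illustrative):
+.IP
+.nf
+\f[C]
+echo \[dq]*.jpg\[dq] | rclone copy --include-from - /source remote:dest
+\f[R]
+.fi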
.SS \f[C]--filter\f[R] - Add a file-filtering rule
.PP
@@ -20904,6 +21564,12 @@ See above for the order filter flags are processed in.
Arrange the order of filter rules with the most restrictive first and
work down.
.PP
+Lines starting with # or ; are ignored, and can be used to write
+comments.
+Inline comments are not supported.
+\f[I]Use \f[CI]-vv --dump filters\f[I] to see how they appear in the
+final regexp.\f[R]
+.PP
E.g.
for \f[C]filter-file.txt\f[R]:
.IP
@@ -20914,6 +21580,7 @@ for \f[C]filter-file.txt\f[R]:
+ *.jpg
+ *.png
+ file2.avi
+- /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
@@ -20977,6 +21644,9 @@ used.
Leading or trailing whitespace is stripped from the input lines.
Lines starting with \f[C]#\f[R] or \f[C];\f[R] are ignored.
.PP
+\f[C]--files-from\f[R] followed by \f[C]-\f[R] reads the list of files
+from standard input.
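+.PP
+For example, a sketch piping a file list from GNU \f[C]find\f[R] (the
+paths are illustrative):
+.IP
+.nf
+\f[C]
+find /home/user/docs -name \[aq]*.pdf\[aq] -printf \[aq]%P\[rs]n\[aq] | \[rs]
+    rclone copy --files-from - /home/user/docs remote:pdfs
+\f[R]
+.fi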
+.PP
Rclone commands with a \f[C]--files-from\f[R] flag traverse the remote,
treating the names in \f[C]--files-from\f[R] as a set of filters.
.PP
@@ -21444,26 +22114,26 @@ rcd (https://rclone.org/commands/rclone_rcd/) command.
.SS Supported parameters
.SS --rc
.PP
-Flag to start the http server listen on remote requests
+Flag to start the http server listen on remote requests.
.SS --rc-addr=IP
.PP
IPaddress:Port or :Port to bind server to.
-(default \[dq]localhost:5572\[dq])
+(default \[dq]localhost:5572\[dq]).
.SS --rc-cert=KEY
.PP
-SSL PEM key (concatenation of certificate and CA certificate)
+SSL PEM key (concatenation of certificate and CA certificate).
.SS --rc-client-ca=PATH
.PP
-Client certificate authority to verify clients with
+Client certificate authority to verify clients with.
.SS --rc-htpasswd=PATH
.PP
-htpasswd file - if not provided no authentication is done
+htpasswd file - if not provided no authentication is done.
.SS --rc-key=PATH
.PP
-SSL PEM Private key
+TLS PEM private key file.
.SS --rc-max-header-bytes=VALUE
.PP
-Maximum size of request header (default 4096)
+Maximum size of request header (default 4096).
.SS --rc-min-tls-version=VALUE
.PP
The minimum TLS version that is acceptable.
@@ -21477,13 +22147,13 @@ User name for authentication.
Password for authentication.
.SS --rc-realm=VALUE
.PP
-Realm for authentication (default \[dq]rclone\[dq])
+Realm for authentication (default \[dq]rclone\[dq]).
.SS --rc-server-read-timeout=DURATION
.PP
-Timeout for server reading data (default 1h0m0s)
+Timeout for server reading data (default 1h0m0s).
.SS --rc-server-write-timeout=DURATION
.PP
-Timeout for server writing data (default 1h0m0s)
+Timeout for server writing data (default 1h0m0s).
.SS --rc-serve
.PP
Enable the serving of remote objects via the HTTP interface.
@@ -21591,7 +22261,7 @@ User-specified template.
Rclone itself implements the remote control protocol in its
\f[C]rclone rc\f[R] command.
.PP
-You can use it like this
+You can use it like this:
.IP
.nf
\f[C]
@@ -21603,8 +22273,28 @@ $ rclone rc rc/noop param1=one param2=two
\f[R]
.fi
.PP
-Run \f[C]rclone rc\f[R] on its own to see the help for the installed
-remote control commands.
+If the remote is running on a different URL than the default
+\f[C]http://localhost:5572/\f[R], use the \f[C]--url\f[R] option to
+specify it:
+.IP
+.nf
+\f[C]
+$ rclone rc --url http://some.remote:1234/ rc/noop
+\f[R]
+.fi
+.PP
+Or, if the remote is listening on a Unix socket, use the
+\f[C]--unix-socket\f[R] option instead:
+.IP
+.nf
+\f[C]
+$ rclone rc --unix-socket /tmp/rclone.sock rc/noop
+\f[R]
+.fi
+.PP
+Run \f[C]rclone rc\f[R] on its own, without any commands, to see the
+help for the installed remote control commands.
+Note that this also needs to connect to the remote server.
.SS JSON input
.PP
\f[C]rclone rc\f[R] also supports a \f[C]--json\f[R] flag which can be
@@ -24082,6 +24772,9 @@ This takes the following parameters
\f[C]id\f[R] - a numeric ID as returned from \f[C]vfs/queue\f[R]
.IP \[bu] 2
\f[C]expiry\f[R] - a new expiry time as floating point seconds
+.IP \[bu] 2
+\f[C]relative\f[R] - if set, expiry is to be treated as relative to the
+current expiry (optional, boolean)
.PP
This returns an empty result on success, or an error.
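+.PP
+For example, a sketch (assuming this is the
+\f[C]vfs/queue-set-expiry\f[R] call; the id is illustrative) adding 60
+seconds to the current expiry of item 123:
+.IP
+.nf
+\f[C]
+rclone rc vfs/queue-set-expiry id=123 expiry=60 relative=true
+\f[R]
+.fi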
.PP
@@ -24550,6 +25243,21 @@ T}@T{
-
T}
T{
+Cloudinary
+T}@T{
+MD5
+T}@T{
+R
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+-
+T}@T{
+-
+T}
+T{
Dropbox
T}@T{
DBHASH \[S1]
@@ -24715,6 +25423,21 @@ T}@T{
-
T}
T{
+iCloud Drive
+T}@T{
+-
+T}@T{
+R
+T}@T{
+No
+T}@T{
+No
+T}@T{
+-
+T}@T{
+-
+T}
+T{
Internet Archive
T}@T{
MD5, SHA1, CRC32
@@ -24914,7 +25637,7 @@ pCloud
T}@T{
MD5, SHA1 \[u2077]
T}@T{
-R
+R/W
T}@T{
No
T}@T{
@@ -26384,6 +27107,31 @@ T}@T{
Yes
T}
T{
+Cloudinary
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}
+T{
Enterprise File Fabric
T}@T{
Yes
@@ -26496,7 +27244,7 @@ No
T}@T{
No
T}@T{
-Yes
+No
T}@T{
Yes
T}@T{
@@ -26634,6 +27382,31 @@ T}@T{
Yes
T}
T{
+iCloud Drive
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+Yes
+T}
+T{
ImageKit
T}@T{
Yes
@@ -26904,7 +27677,7 @@ No
T}@T{
No
T}@T{
-No
+Yes
T}@T{
Yes
T}
@@ -27580,6 +28353,7 @@ Flags for anything which can copy a file.
-I, --ignore-times Don\[aq]t skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -27668,7 +28442,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.68.0\[dq])
+ --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.69.0\[dq])
\f[R]
.fi
.SS Performance
@@ -27817,7 +28591,7 @@ Flags to control the Remote Control API.
.nf
\f[C]
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [\[dq]localhost:5572\[dq]])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -27853,7 +28627,7 @@ Flags to control the Metrics HTTP endpoint..
.IP
.nf
\f[C]
- --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [\[dq]\[dq]])
+ --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -27893,6 +28667,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don\[aq]t store MD5 checksum with object metadata
+ --azureblob-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -27910,6 +28685,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-tenant string ID of the service principal\[aq]s tenant. Also called its directory ID
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
+ --azureblob-use-az Use Azure CLI tool az for authentication
--azureblob-use-emulator Uses local storage emulator if provided as \[aq]true\[aq]
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -27959,6 +28735,7 @@ Backend-only flags (these can be set in the config file also).
--box-auth-url string Auth server URL
--box-box-config-file string Box App config.json location
--box-box-sub-type string (default \[dq]user\[dq])
+ --box-client-credentials Use client credentials OAuth flow
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
@@ -27997,6 +28774,14 @@ Backend-only flags (these can be set in the config file also).
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default \[dq]md5\[dq])
--chunker-remote string Remote to chunk/unchunk
+ --cloudinary-api-key string Cloudinary API Key
+ --cloudinary-api-secret string Cloudinary API Secret
+ --cloudinary-cloud-name string Cloudinary Environment Name
+ --cloudinary-description string Description of the remote
+ --cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
+ --cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--combine-description string Description of the remote
--combine-upstreams SpaceSepList Upstreams for combining
--compress-description string Description of the remote
@@ -28023,6 +28808,7 @@ Backend-only flags (these can be set in the config file also).
--drive-auth-owner-only Only consider files owned by the authenticated user
--drive-auth-url string Auth server URL
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
+ --drive-client-credentials Use client credentials OAuth flow
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
@@ -28073,6 +28859,7 @@ Backend-only flags (these can be set in the config file also).
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
+ --dropbox-client-credentials Use client credentials OAuth flow
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
@@ -28119,6 +28906,7 @@ Backend-only flags (these can be set in the config file also).
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
+ --ftp-no-check-upload Don\[aq]t check the upload is OK
--ftp-pass string FTP password (obscured)
--ftp-port int FTP port number (default 21)
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
@@ -28127,10 +28915,12 @@ Backend-only flags (these can be set in the config file also).
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username (default \[dq]$USER\[dq])
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
+ --gcs-access-token string Short-lived access token
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
+ --gcs-client-credentials Use client credentials OAuth flow
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
@@ -28159,11 +28949,13 @@ Backend-only flags (these can be set in the config file also).
--gphotos-batch-mode string Upload file batching sync|async|off (default \[dq]sync\[dq])
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ --gphotos-client-credentials Use client credentials OAuth flow
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-description string Description of the remote
--gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
+ --gphotos-proxy string Use the gphotosdl proxy for downloading the full resolution images
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
@@ -28182,6 +28974,7 @@ Backend-only flags (these can be set in the config file also).
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
--hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi)
+ --hidrive-client-credentials Use client credentials OAuth flow
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-description string Description of the remote
@@ -28201,6 +28994,11 @@ Backend-only flags (these can be set in the config file also).
--http-no-head Don\[aq]t use HEAD requests
--http-no-slash Set this if the site doesn\[aq]t end directories with /
--http-url string URL of HTTP host to connect to
+ --iclouddrive-apple-id string Apple ID
+ --iclouddrive-client-id string Client id (default \[dq]d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d\[dq])
+ --iclouddrive-description string Description of the remote
+ --iclouddrive-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --iclouddrive-password string Password (obscured)
--imagekit-description string Description of the remote
--imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
--imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
@@ -28218,6 +29016,7 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server\[aq]s processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
+ --jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
--jottacloud-description string Description of the remote
@@ -28239,11 +29038,11 @@ Backend-only flags (these can be set in the config file also).
--koofr-user string Your user name
--linkbox-description string Description of the remote
--linkbox-token string Token from https://www.linkbox.to/admin/account
- -l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-description string Description of the remote
--local-encoding Encoding The encoding for the backend (default Slash,Dot)
+ --local-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the local backend
--local-no-check-updated Don\[aq]t check to see if the files change during upload
--local-no-clone Disable reflink cloning for server-side copies
--local-no-preallocate Disable preallocation of disk space for transferred files
@@ -28255,6 +29054,7 @@ Backend-only flags (these can be set in the config file also).
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-auth-url string Auth server URL
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
+ --mailru-client-credentials Use client credentials OAuth flow
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
--mailru-description string Description of the remote
@@ -28285,6 +29085,7 @@ Backend-only flags (these can be set in the config file also).
--onedrive-auth-url string Auth server URL
--onedrive-av-override Allows download of files the server thinks has a virus
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
+ --onedrive-client-credentials Use client credentials OAuth flow
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-delta If set rclone will use delta listing to implement recursive listings
@@ -28304,11 +29105,12 @@ Backend-only flags (these can be set in the config file also).
--onedrive-region string Choose national cloud region for OneDrive (default \[dq]global\[dq])
--onedrive-root-folder-id string ID of the root folder
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead
+ --onedrive-tenant string ID of the service principal\[aq]s tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
- --oos-compartment string Object storage compartment OCID
+ --oos-compartment string Specify compartment OCID, if you need to list buckets
--oos-config-file string Path to OCI config file (default \[dq]\[ti]/.oci/config\[dq])
--oos-config-profile string Profile name inside the oci config file (default \[dq]Default\[dq])
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
@@ -28337,6 +29139,7 @@ Backend-only flags (these can be set in the config file also).
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
+ --pcloud-client-credentials Use client credentials OAuth flow
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-description string Description of the remote
@@ -28347,26 +29150,25 @@ Backend-only flags (these can be set in the config file also).
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
- --pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
- --pikpak-client-id string OAuth Client Id
- --pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
+ --pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
+ --pikpak-no-media-link Use original file links instead of media links
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
- --pikpak-token string OAuth Access Token as a JSON blob
- --pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
+ --pikpak-user-agent string HTTP user agent for pikpak (default \[dq]Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0\[dq])
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it\[aq]s fine to leave (default \[dq]https://pixeldrain.com/api\[dq])
--pixeldrain-description string Description of the remote
--pixeldrain-root-folder-id string Root of the filesystem to use (default \[dq]me\[dq])
--premiumizeme-auth-url string Auth server URL
+ --premiumizeme-client-credentials Use client credentials OAuth flow
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
--premiumizeme-description string Description of the remote
@@ -28384,6 +29186,7 @@ Backend-only flags (these can be set in the config file also).
--protondrive-replace-existing-draft Create a new revision when filename conflict is detected
--protondrive-username string The username of your proton account
--putio-auth-url string Auth server URL
+ --putio-client-credentials Use client credentials OAuth flow
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
--putio-description string Description of the remote
@@ -28417,6 +29220,7 @@ Backend-only flags (these can be set in the config file also).
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
--s3-description string Description of the remote
+ --s3-directory-bucket Set to use AWS Directory Buckets
--s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
--s3-disable-checksum Don\[aq]t store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
@@ -28498,6 +29302,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH shell commands
--sftp-port int SSH port number (default 22)
+ --sftp-pubkey string SSH public certificate for public certificate based authentication
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
--sftp-set-env SpaceSepList Environment variables to pass to sftp and commands
@@ -28513,6 +29318,7 @@ Backend-only flags (these can be set in the config file also).
--sftp-user string SSH username (default \[dq]$USER\[dq])
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
+ --sharefile-client-credentials Use client credentials OAuth flow
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
--sharefile-description string Description of the remote
@@ -28602,6 +29408,7 @@ Backend-only flags (these can be set in the config file also).
--uptobox-description string Description of the remote
--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
+ --webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-description string Description of the remote
@@ -28617,6 +29424,7 @@ Backend-only flags (these can be set in the config file also).
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-auth-url string Auth server URL
+ --yandex-client-credentials Use client credentials OAuth flow
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-description string Description of the remote
@@ -28626,6 +29434,7 @@ Backend-only flags (these can be set in the config file also).
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
+ --zoho-client-credentials Use client credentials OAuth flow
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-description string Description of the remote
@@ -28633,6 +29442,7 @@ Backend-only flags (these can be set in the config file also).
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
+ --zoho-upload-cutoff SizeSuffix Cutoff for switching to large file upload api (>= 10 MiB) (default 10Mi)
\f[R]
.fi
.SH Docker Volume Plugin
@@ -28904,9 +29714,8 @@ The \f[C]path\f[R] part is optional.
.PP
Mount and VFS
options (https://rclone.org/commands/rclone_serve_docker/#options) as
-well as backend parameters (https://rclone.org/flags/#backend-flags) are
-named like their twin command-line flags without the \f[C]--\f[R] CLI
-prefix.
+well as backend parameters (https://rclone.org/flags/#backend) are named
+like their twin command-line flags without the \f[C]--\f[R] CLI prefix.
Optionally you can use underscores instead of dashes in option names.
For example, \f[C]--vfs-cache-mode full\f[R] becomes
\f[C]-o vfs-cache-mode=full\f[R] or \f[C]-o vfs_cache_mode=full\f[R].
@@ -29328,6 +30137,22 @@ sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docke
.fi
.PP
though this is rarely needed.
+.PP
+If the plugin fails to work properly, and only as a last resort after
+you tried diagnosing with the above methods, you can try clearing the
+state of the plugin.
+\f[B]Note that all existing rclone docker volumes will probably have to
+be recreated.\f[R] This might be needed because a reinstall doesn\[aq]t
+clean up existing state files, so as to allow for easy restoration, as
+stated above.
+.IP
+.nf
+\f[C]
+docker plugin disable rclone # disable the plugin to ensure no interference
+sudo rm /var/lib/docker-plugins/rclone/cache/docker-plugin.state # remove the plugin state
+docker plugin enable rclone # re-enable the plugin afterward
+\f[R]
+.fi
.SS Caveats
.PP
Finally I\[aq]d like to mention a \f[I]caveat with updating volume
@@ -30670,12 +31495,16 @@ time, using the \f[C]--max-lock\f[R] flag.
very cautious\f[R] that there is no overlap in the trees being synched
between concurrent runs, lest there be replicated files, deleted files
and general mayhem.
-.SS Return codes
+.SS Exit codes
.PP
\f[C]rclone bisync\f[R] returns the following codes to calling program:
- \f[C]0\f[R] on a successful run, - \f[C]1\f[R] for a non-critical
-failing run (a rerun may be successful), - \f[C]2\f[R] for a critically
-aborted run (requires a \f[C]--resync\f[R] to recover).
+failing run (a rerun may be successful), - \f[C]2\f[R] on syntax or
+usage error, - \f[C]7\f[R] for a critically aborted run (requires a
+\f[C]--resync\f[R] to recover).
+.PP
+See also the section about exit
+codes (https://rclone.org/docs/#exit-code) in main docs.
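+.PP
+For example, a sketch reacting to the exit code in a shell script:
+.IP
+.nf
+\f[C]
+rclone bisync remote1:path remote2:path
+rc=$?
+if [ \[dq]$rc\[dq] -eq 7 ]; then
+    echo \[dq]critical abort - rerun with --resync to recover\[dq]
+fi
+\f[R]
+.fi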
.SS Graceful Shutdown
.PP
Bisync has a \[dq]Graceful Shutdown\[dq] mode which is activated by
@@ -32554,6 +33383,8 @@ Magalu Object Storage
.IP \[bu] 2
Minio
.IP \[bu] 2
+Outscale
+.IP \[bu] 2
Petabox
.IP \[bu] 2
Qiniu Cloud Object Storage (Kodo)
@@ -32568,6 +33399,8 @@ Seagate Lyve Cloud
.IP \[bu] 2
SeaweedFS
.IP \[bu] 2
+Selectel
+.IP \[bu] 2
StackPath
.IP \[bu] 2
Storj
@@ -32802,7 +33635,7 @@ Choose a number from below, or type in your own value
\[rs] \[dq]STANDARD_IA\[dq]
5 / One Zone Infrequent Access storage class
\[rs] \[dq]ONEZONE_IA\[dq]
- 6 / Glacier storage class
+ 6 / Glacier Flexible Retrieval storage class
\[rs] \[dq]GLACIER\[dq]
7 / Glacier Deep Archive storage class
\[rs] \[dq]DEEP_ARCHIVE\[dq]
@@ -33001,6 +33834,127 @@ You can disable this with the --s3-no-head option - see there for more
details.
.PP
Setting this flag increases the chance for undetected upload failures.
+.SS Increasing performance
+.SS Using server-side copy
+.PP
+If you are copying objects between S3 buckets in the same region, you
+should use server-side copy.
+This is much faster than downloading and re-uploading the objects, as no
+data is transferred.
+.PP
+For rclone to use server-side copy, you must use the same remote for the
+source and destination.
+.IP
+.nf
+\f[C]
+rclone copy s3:source-bucket s3:destination-bucket
+\f[R]
+.fi
+.PP
+When using server-side copy, the performance is limited by the rate at
+which rclone issues API requests to S3.
+See below for how to increase the number of API requests rclone makes.
+.SS Increasing the rate of API requests
+.PP
+You can increase the rate of API requests to S3 by increasing the
+parallelism using \f[C]--transfers\f[R] and \f[C]--checkers\f[R]
+options.
+.PP
+Rclone uses very conservative defaults for these settings, as not all
+providers support high rates of requests.
+Depending on your provider, you can significantly increase the number
+of transfers and checkers.
+.PP
+For example, with AWS S3, you can increase the number of checkers to
+values like 200.
+If you are doing a server-side copy, you can also increase the number of
+transfers to 200.
+.IP
+.nf
+\f[C]
+rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
+\f[R]
+.fi
+.PP
+You will need to experiment with these values to find the optimal
+settings for your setup.
+.SS Data integrity
+.PP
+Rclone does its best to verify every part of an upload or download to
+the s3 provider using various hashes.
+.PP
+Every HTTP transaction to/from the provider has a
+\f[C]X-Amz-Content-Sha256\f[R] or a \f[C]Content-Md5\f[R] header to
+guard against corruption of the HTTP body.
+The HTTP header is protected by the signature passed in the
+\f[C]Authorization\f[R] header.
+.PP
+All communication with the provider is done over HTTPS for encryption
+and additional error protection.
+.SS Single part uploads
+.IP \[bu] 2
+Rclone sends single part uploads with a \f[C]Content-Md5\f[R] header
+using the MD5 hash read from the source.
+The provider checks this is correct on receipt of the data.
+.IP \[bu] 2
+Rclone then does a HEAD request (disable with \f[C]--s3-no-head\f[R]) to
+read back the \f[C]ETag\f[R], which is the MD5 of the file, and checks
+it against what it sent.
+.PP
+Note that if the source does not have an MD5 then the single part
+uploads will not have hash protection.
+In this case it is recommended to use \f[C]--s3-upload-cutoff 0\f[R] so
+all files are uploaded as multipart uploads.
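+.PP
+For example, a sketch forcing all uploads to be multipart:
+.IP
+.nf
+\f[C]
+rclone copy --s3-upload-cutoff 0 /path/to/files s3:bucket
+\f[R]
+.fi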
+.SS Multipart uploads
+.PP
+For files above \f[C]--s3-upload-cutoff\f[R] rclone splits the file into
+multiple parts for upload.
+.IP \[bu] 2
+Each part is protected with both an \f[C]X-Amz-Content-Sha256\f[R] and a
+\f[C]Content-Md5\f[R]
+.PP
+When rclone has finished the upload of all the parts it then completes
+the upload by sending:
+.IP \[bu] 2
+The MD5 hash of each part
+.IP \[bu] 2
+The number of parts
+.IP \[bu] 2
+This info is all protected with a \f[C]X-Amz-Content-Sha256\f[R]
+.PP
+The provider checks the MD5 for all the parts it has received against
+what rclone sends and if it is good it returns OK.
+.PP
+Rclone then does a HEAD request (disable with \f[C]--s3-no-head\f[R])
+and checks that the \f[C]ETag\f[R] is what it expects (in this case it
+should be the MD5 sum of the MD5 sums of all the parts, with the number
+of parts on the end).
+.PP
+If the source has an MD5 sum then rclone will attach it as
+\f[C]X-Amz-Meta-Md5chksum\f[R] metadata, since the \f[C]ETag\f[R] of a
+multipart upload can\[aq]t easily be checked against the file (the
+chunk size must be known in order to calculate it).
+.SS Downloads
+.PP
+Rclone checks the MD5 hash of the data downloaded against either the
+ETag or the \f[C]X-Amz-Meta-Md5chksum\f[R] metadata (if present) which
+rclone uploads with multipart uploads.
+.SS Further checking
+.PP
+At each stage rclone and the provider are sending and checking hashes of
+\f[B]everything\f[R].
+Rclone deliberately HEADs each object after upload to check it arrived
+safely for extra security.
+(You can disable this with \f[C]--s3-no-head\f[R]).
+.PP
+If you require further assurance that your data is intact you can use
+\f[C]rclone check\f[R] to check the hashes locally vs the remote.
+.PP
+And if you are feeling ultimately paranoid, use
+\f[C]rclone check --download\f[R], which will download the files and
+check them against the local copies.
+(Note that this doesn\[aq]t use disk to do this - the files are
+streamed in memory.)
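+.PP
+For example (the bucket name is illustrative):
+.IP
+.nf
+\f[C]
+rclone check /path/to/files s3:bucket              # compare hashes
+rclone check --download /path/to/files s3:bucket   # compare full contents
+\f[R]
+.fi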
.SS Versions
.PP
When bucket versioning is enabled (this can be done with rclone with the
@@ -33395,8 +34349,8 @@ Here are the Standard options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease,
-Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj,
-Synology, TencentCOS, Wasabi, Qiniu and others).
+Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel,
+StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
.SS --s3-provider
.PP
Choose your S3 provider.
@@ -33534,6 +34488,12 @@ Minio Object Storage
Netease Object Storage (NOS)
.RE
.IP \[bu] 2
+\[dq]Outscale\[dq]
+.RS 2
+.IP \[bu] 2
+OUTSCALE Object Storage (OOS)
+.RE
+.IP \[bu] 2
\[dq]Petabox\[dq]
.RS 2
.IP \[bu] 2
@@ -33564,6 +34524,12 @@ Scaleway Object Storage
SeaweedFS S3
.RE
.IP \[bu] 2
+\[dq]Selectel\[dq]
+.RS 2
+.IP \[bu] 2
+Selectel Object Storage
+.RE
+.IP \[bu] 2
\[dq]StackPath\[dq]
.RS 2
.IP \[bu] 2
@@ -34113,7 +35079,7 @@ Config: acl
.IP \[bu] 2
Env Var: RCLONE_S3_ACL
.IP \[bu] 2
-Provider: !Storj,Synology,Cloudflare
+Provider: !Storj,Selectel,Synology,Cloudflare
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -34350,7 +35316,7 @@ One Zone Infrequent Access storage class
\[dq]GLACIER\[dq]
.RS 2
.IP \[bu] 2
-Glacier storage class
+Glacier Flexible Retrieval storage class
.RE
.IP \[bu] 2
\[dq]DEEP_ARCHIVE\[dq]
@@ -34377,8 +35343,8 @@ Here are the Advanced options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease,
-Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj,
-Synology, TencentCOS, Wasabi, Qiniu and others).
+Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel,
+StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
.SS --s3-bucket-acl
.PP
Canned ACL used when creating buckets.
@@ -35392,6 +36358,53 @@ Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS
Type: Tristate
.IP \[bu] 2
Default: unset
+.SS --s3-directory-bucket
+.PP
+Set to use AWS Directory Buckets
+.PP
+If you are using an AWS Directory Bucket then set this flag.
+.PP
+This will ensure no \f[C]Content-Md5\f[R] headers are sent and ensure
+\f[C]ETag\f[R] headers are not interpreted as MD5 sums.
+\f[C]X-Amz-Meta-Md5chksum\f[R] will be set on all objects whether single
+or multipart uploaded.
+.PP
+This also sets \f[C]no_check_bucket = true\f[R].
+.PP
+Note that Directory Buckets do not support:
+.IP \[bu] 2
+Versioning
+.IP \[bu] 2
+\f[C]Content-Encoding: gzip\f[R]
+.PP
+Rclone limitations with Directory Buckets:
+.IP \[bu] 2
+rclone does not support creating Directory Buckets with
+\f[C]rclone mkdir\f[R] or removing them with \f[C]rclone rmdir\f[R]
+yet
+.IP \[bu] 2
+Directory Buckets do not appear when doing \f[C]rclone lsf\f[R] at the
+top level.
+.IP \[bu] 2
+Rclone can\[aq]t remove auto created directories yet.
+In theory this should work with \f[C]directory_markers = true\f[R] but
+it doesn\[aq]t.
+.IP \[bu] 2
+Directories don\[aq]t seem to appear in recursive (ListR) listings.
+.PP
+Properties:
+.IP \[bu] 2
+Config: directory_bucket
+.IP \[bu] 2
+Env Var: RCLONE_S3_DIRECTORY_BUCKET
+.IP \[bu] 2
+Provider: AWS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --s3-sdk-log-mode
.PP
Set to debug the SDK
@@ -35898,6 +36911,22 @@ rclone lsd :s3,provider=AWS:1000genomes
.PP
This is the provider used as main example and described in the
configuration section above.
+.SS AWS Directory Buckets
+.PP
+From rclone v1.69 Directory
+Buckets (https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html)
+are supported.
+.PP
+You will need to set the \f[C]directory_bucket = true\f[R] config
+parameter or use \f[C]--s3-directory-bucket\f[R].
+.PP
+Note that rclone cannot yet:
+.IP \[bu] 2
+Create directory buckets
+.IP \[bu] 2
+List directory buckets
+.PP
+See the --s3-directory-bucket flag for more info.
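+.PP
+As a rough sketch, a remote for Directory Buckets could look like this
+in the config file (the remote, bucket and Availability Zone names
+below are illustrative only):
+.IP
+.nf
+\f[C]
+[awsdir]
+type = s3
+provider = AWS
+region = us-east-1
+directory_bucket = true
+\f[R]
+.fi
+.PP
+which could then be used with an existing bucket like
+\f[C]rclone copy /path/to/files awsdir:mybucket--use1-az4--x-s3/path\f[R].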
.SS AWS Snowball Edge
.PP
AWS Snowball (https://aws.amazon.com/snowball/) is a hardware appliance
@@ -36107,6 +37136,9 @@ Note that Cloudflare decompresses files uploaded with
what AWS does.
If this is causing a problem then upload the files with
\f[C]--header-upload \[dq]Cache-Control: no-transform\[dq]\f[R]
+.PP
+A consequence of this is that \f[C]Content-Encoding: gzip\f[R] will
+never appear in the metadata on Cloudflare.
.SS Dreamhost
.PP
Dreamhost DreamObjects (https://www.dreamhost.com/cloud/storage/) is an
@@ -36991,6 +38023,200 @@ So once set up, for example, to copy files into a bucket
rclone copy /path/to/files minio:bucket
\f[R]
.fi
+.SS Outscale
+.PP
+OUTSCALE Object Storage
+(OOS) (https://en.outscale.com/storage/outscale-object-storage/) is an
+enterprise-grade, S3-compatible storage service provided by OUTSCALE, a
+brand of Dassault Syst\[`e]mes.
+For more information about OOS, see the official
+documentation (https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html).
+.PP
+Here is an example of an OOS configuration that you can paste into your
+rclone configuration file:
+.IP
+.nf
+\f[C]
+[outscale]
+type = s3
+provider = Outscale
+env_auth = false
+access_key_id = ABCDEFGHIJ0123456789
+secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+region = eu-west-2
+endpoint = oos.eu-west-2.outscale.com
+acl = private
+\f[R]
+.fi
+.PP
+You can also run \f[C]rclone config\f[R] to go through the interactive
+setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Enter name for new remote.
+name> outscale
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+ X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others
+ \[rs] (s3)
+[snip]
+Storage> s3
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / OUTSCALE Object Storage (OOS)
+ \[rs] (Outscale)
+[snip]
+provider> Outscale
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \[rs] (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \[rs] (true)
+env_auth>
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ABCDEFGHIJ0123456789
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Option region.
+Region where your bucket will be created and your data stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Paris, France
+ \[rs] (eu-west-2)
+ 2 / New Jersey, USA
+ \[rs] (us-east-2)
+ 3 / California, USA
+ \[rs] (us-west-1)
+ 4 / SecNumCloud, Paris, France
+ \[rs] (cloudgouv-eu-west-1)
+ 5 / Tokyo, Japan
+ \[rs] (ap-northeast-1)
+region> 1
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Option endpoint.
+Endpoint for S3 API.
+Required when using an S3 clone.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Outscale EU West 2 (Paris)
+ \[rs] (oos.eu-west-2.outscale.com)
+ 2 / Outscale US east 2 (New Jersey)
+ \[rs] (oos.us-east-2.outscale.com)
+ 3 / Outscale EU West 1 (California)
+ \[rs] (oos.us-west-1.outscale.com)
+ 4 / Outscale SecNumCloud (Paris)
+ \[rs] (oos.cloudgouv-eu-west-1.outscale.com)
+ 5 / Outscale AP Northeast 1 (Japan)
+ \[rs] (oos.ap-northeast-1.outscale.com)
+endpoint> 1
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn\[aq]t set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn\[aq]t copy the ACL from the source but rather writes a fresh one.
+If the acl is an empty string then no X-Amz-Acl: header is added and
+the default (private) will be used.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \[rs] (private)
+[snip]
+acl> 1
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Configuration complete.
+Options:
+- type: s3
+- provider: Outscale
+- access_key_id: ABCDEFGHIJ0123456789
+- secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+- endpoint: oos.eu-west-2.outscale.com
+Keep this \[dq]outscale\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
.SS Qiniu Cloud Object Storage (Kodo)
.PP
Qiniu Cloud Object Storage
@@ -37301,11 +38527,12 @@ copy_cutoff = 5M
\f[R]
.fi
.PP
-C14 Cold Storage (https://www.online.net/en/storage/c14-cold-storage) is
+Scaleway Glacier (https://www.scaleway.com/en/glacier-cold-storage/) is
the low-cost S3 Glacier alternative from Scaleway and it works the same
way as on S3 by accepting the \[dq]GLACIER\[dq] \f[C]storage_class\f[R].
So you can configure your remote with the
-\f[C]storage_class = GLACIER\f[R] option to upload directly to C14.
+\f[C]storage_class = GLACIER\f[R] option to upload directly to Scaleway
+Glacier.
Don\[aq]t forget that in this state you can\[aq]t read files back after,
you will need to restore them to \[dq]STANDARD\[dq] storage_class first
before being able to read them (see \[dq]restore\[dq] section above)
@@ -37536,6 +38763,128 @@ So once set up, for example to copy files into a bucket
rclone copy /path/to/files seaweedfs_s3:foo
\f[R]
.fi
+.SS Selectel
+.PP
+Selectel Cloud Storage (https://selectel.ru/services/cloud/storage/) is
+an S3 compatible storage system which features triple redundancy
+storage, automatic scaling, high availability and a comprehensive IAM
+system.
+.PP
+Selectel have a section on their website for configuring
+rclone (https://docs.selectel.ru/en/cloud/object-storage/tools/rclone/)
+which shows how to make the right API keys.
+.PP
+From rclone v1.69 Selectel is a supported operator - please choose the
+\f[C]Selectel\f[R] provider type.
+.PP
+Note that you should use \[dq]vHosted\[dq] access for the buckets (which
+is the recommended default), not \[dq]path style\[dq].
+.PP
+You can use \f[C]rclone config\f[R] to make a new provider like this
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> selectel
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ...
+ \[rs] (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / Selectel Object Storage
+ \[rs] (Selectel)
+[snip]
+provider> Selectel
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \[rs] (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \[rs] (true)
+env_auth> 1
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option region.
+Region where your data stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / St. Petersburg
+ \[rs] (ru-1)
+region> 1
+
+Option endpoint.
+Endpoint for Selectel Object Storage.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Saint Petersburg
+ \[rs] (s3.ru-1.storage.selcloud.ru)
+endpoint> 1
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: Selectel
+- access_key_id: ACCESS_KEY
+- secret_access_key: SECRET_ACCESS_KEY
+- region: ru-1
+- endpoint: s3.ru-1.storage.selcloud.ru
+Keep this \[dq]selectel\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+And your config should end up looking like this:
+.IP
+.nf
+\f[C]
+[selectel]
+type = s3
+provider = Selectel
+access_key_id = ACCESS_KEY
+secret_access_key = SECRET_ACCESS_KEY
+region = ru-1
+endpoint = s3.ru-1.storage.selcloud.ru
+\f[R]
+.fi
.SS Wasabi
.PP
Wasabi (https://wasabi.com) is a cloud-based object storage service for
@@ -40096,6 +41445,7 @@ This will dump something like this showing the lifecycle rules.
{
\[dq]daysFromHidingToDeleting\[dq]: 1,
\[dq]daysFromUploadingToHiding\[dq]: null,
+ \[dq]daysFromStartingToCancelingUnfinishedLargeFiles\[dq]: null,
\[dq]fileNamePrefix\[dq]: \[dq]\[dq]
}
]
@@ -40139,6 +41489,9 @@ Options:
this many days it is deleted.
0 is off.
.IP \[bu] 2
+\[dq]daysFromStartingToCancelingUnfinishedLargeFiles\[dq]: Cancels any
+unfinished large file versions after this many days
+.IP \[bu] 2
\[dq]daysFromUploadingToHiding\[dq]: This many days after uploading a
file is hidden
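+.PP
+For example, to cancel any unfinished large file versions after 7 days
+(the bucket name is illustrative):
+.IP
+.nf
+\f[C]
+rclone backend lifecycle b2:bucket -o daysFromStartingToCancelingUnfinishedLargeFiles=7
+\f[R]
+.fi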
.SS cleanup
@@ -40266,7 +41619,7 @@ If not sure try Y. If Y failed, try N.
y) Yes
n) No
y/n> y
-If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth
+If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXXXXXXXXXXXXXXXXXXXXX
Log in and authorize rclone for access
Waiting for code...
Got code
@@ -40659,6 +42012,22 @@ Env Var: RCLONE_BOX_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --box-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_BOX_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --box-root-folder-id
.PP
Fill in for rclone to use a non root folder as its starting point.
@@ -42302,6 +43671,243 @@ Env Var: RCLONE_CHUNKER_DESCRIPTION
Type: string
.IP \[bu] 2
Required: false
+.SH Cloudinary
+.PP
+This is a backend for the Cloudinary (https://cloudinary.com/) platform
+.SS About Cloudinary
+.PP
+Cloudinary (https://cloudinary.com/) is an image and video API platform.
+It is trusted by 1.5 million developers and 10,000 enterprise and
+hyper-growth companies as a critical part of their tech stack for
+delivering visually engaging experiences.
+.SS Accounts & Pricing
+.PP
+To use this backend, you need to create a free
+account (https://cloudinary.com/users/register_free) on Cloudinary.
+Start with a free plan with generous usage limits.
+Then, as your requirements grow, upgrade to a plan that best fits your
+needs.
+See the pricing details (https://cloudinary.com/pricing).
+.SS Securing Your Credentials
+.PP
+Please refer to the
+docs (https://rclone.org/docs/#configuration-encryption-cheatsheet)
+.SS Configuration
+.PP
+Here is an example of making a Cloudinary configuration.
+.PP
+First, create a
+cloudinary.com (https://cloudinary.com/users/register_free) account and
+choose a plan.
+.PP
+You will need to log in and get the \f[C]API Key\f[R] and
+\f[C]API Secret\f[R] for your account from the developer section.
+.PP
+Now run
+.PP
+\f[C]rclone config\f[R]
+.PP
+Follow the interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter the name for the new remote.
+name> cloudinary-media-library
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / cloudinary.com
+\[rs] (cloudinary)
+[snip]
+Storage> cloudinary
+
+Option cloud_name.
+You can find your cloudinary.com cloud_name in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+Enter a value.
+cloud_name> ****************************
+
+Option api_key.
+You can find your cloudinary.com api key in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+Enter a value.
+api_key> ****************************
+
+Option api_secret.
+You can find your cloudinary.com api secret in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+This value must be a single character, one of the following: y, g.
+y/g> y
+Enter a value.
+api_secret> ****************************
+
+Option upload_prefix.
+[Upload prefix](https://cloudinary.com/documentation/cloudinary_sdks#configuration_parameters) to specify alternative data center
+Enter a value.
+upload_prefix>
+
+Option upload_preset.
+[Upload presets](https://cloudinary.com/documentation/upload_presets) can be defined for different upload profiles
+Enter a value.
+upload_preset>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: cloudinary
+- api_key: ****************************
+- api_secret: ****************************
+- cloud_name: ****************************
+- upload_prefix:
+- upload_preset:
+
+Keep this \[dq]cloudinary-media-library\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+List directories in the top level of your Media Library
+.PP
+\f[C]rclone lsd cloudinary-media-library:\f[R]
+.PP
+Make a new directory.
+.PP
+\f[C]rclone mkdir cloudinary-media-library:directory\f[R]
+.PP
+List the contents of a directory.
+.PP
+\f[C]rclone ls cloudinary-media-library:directory\f[R]
+.SS Modified time and hashes
+.PP
+Cloudinary automatically stores an MD5 hash and a timestamp for any
+successful Put; these are read-only.
+.SS Standard options
+.PP
+Here are the Standard options specific to cloudinary (Cloudinary).
+.SS --cloudinary-cloud-name
+.PP
+Cloudinary Environment Name
+.PP
+Properties:
+.IP \[bu] 2
+Config: cloud_name
+.IP \[bu] 2
+Env Var: RCLONE_CLOUDINARY_CLOUD_NAME
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --cloudinary-api-key
+.PP
+Cloudinary API Key
+.PP
+Properties:
+.IP \[bu] 2
+Config: api_key
+.IP \[bu] 2
+Env Var: RCLONE_CLOUDINARY_API_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --cloudinary-api-secret
+.PP
+Cloudinary API Secret
+.PP
+Properties:
+.IP \[bu] 2
+Config: api_secret
+.IP \[bu] 2
+Env Var: RCLONE_CLOUDINARY_API_SECRET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --cloudinary-upload-prefix
+.PP
+Specify the API endpoint for environments out of the US
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_prefix
+.IP \[bu] 2
+Env Var: RCLONE_CLOUDINARY_UPLOAD_PREFIX
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --cloudinary-upload-preset
+.PP
+Upload Preset to select asset manipulation on upload
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_preset
+.IP \[bu] 2
+Env Var: RCLONE_CLOUDINARY_UPLOAD_PRESET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Advanced options
+.PP
+Here are the Advanced options specific to cloudinary (Cloudinary).
+.SS --cloudinary-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_CLOUDINARY_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default:
+Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
+.SS --cloudinary-eventually-consistent-delay
+.PP
+Wait N seconds for eventual consistency of the databases that support
+the backend operation.
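+.PP
+For example, to wait 5 seconds after each mutating operation (the
+remote name and paths are illustrative):
+.IP
+.nf
+\f[C]
+rclone copy /path/to/files cloudinary-media-library:backup --cloudinary-eventually-consistent-delay 5s
+\f[R]
+.fi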
+.PP
+Properties:
+.IP \[bu] 2
+Config: eventually_consistent_delay
+.IP \[bu] 2
+Env Var: RCLONE_CLOUDINARY_EVENTUALLY_CONSISTENT_DELAY
+.IP \[bu] 2
+Type: Duration
+.IP \[bu] 2
+Default: 0s
+.SS --cloudinary-description
+.PP
+Description of the remote.
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_CLOUDINARY_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SH Citrix ShareFile
.PP
Citrix ShareFile (https://sharefile.com) is a secure file sharing and
@@ -42678,6 +44284,22 @@ Env Var: RCLONE_SHAREFILE_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --sharefile-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_SHAREFILE_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --sharefile-upload-cutoff
.PP
Cutoff for switching to multipart upload.
@@ -44494,6 +46116,22 @@ Env Var: RCLONE_DROPBOX_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --dropbox-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_DROPBOX_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --dropbox-chunk-size
.PP
Upload chunk size (< 150Mi).
@@ -45885,15 +47523,15 @@ Default: false
.SS --ftp-socks-proxy
.PP
Socks 5 proxy host.
+.PP
+Supports the format user:pass\[at]host:port, user\[at]host:port,
+host:port.
+.PP
+Example:
.IP
.nf
\f[C]
- Supports the format user:pass\[at]host:port, user\[at]host:port, host:port.
-
- Example:
-
- myUser:myPass\[at]localhost:9005
-
+myUser:myPass\[at]localhost:9005
\f[R]
.fi
.PP
@@ -45906,6 +47544,29 @@ Env Var: RCLONE_FTP_SOCKS_PROXY
Type: string
.IP \[bu] 2
Required: false
+.SS --ftp-no-check-upload
+.PP
+Don\[aq]t check the upload is OK
+.PP
+Normally rclone will try to check the upload exists after it has
+uploaded a file to make sure the size and modification time are as
+expected.
+.PP
+This flag stops rclone doing these checks.
+This enables uploading to folders which are write-only.
+.PP
+You will likely also need to use the --inplace flag if uploading to a
+write-only folder.
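+.PP
+For example, a sketch of uploading a single file to a write-only
+directory (the remote and paths are illustrative):
+.IP
+.nf
+\f[C]
+rclone copyto --inplace --ftp-no-check-upload local.txt ftp:incoming/remote.txt
+\f[R]
+.fi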
+.PP
+Properties:
+.IP \[bu] 2
+Config: no_check_upload
+.IP \[bu] 2
+Env Var: RCLONE_FTP_NO_CHECK_UPLOAD
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --ftp-encoding
.PP
The encoding for the backend.
@@ -46608,6 +48269,82 @@ If you\[aq]d rather stuff the contents of the credentials file into the
rclone config file, you can set \f[C]service_account_credentials\f[R]
with the actual contents of the file instead, or set the equivalent
environment variable.
+.SS Service Account Authentication with Access Tokens
+.PP
+Another option for service account authentication is to use access
+tokens via \f[I]gcloud impersonate-service-account\f[R].
+Access tokens improve security by avoiding the use of the JSON key
+file, which can be leaked.
+They also bypass the OAuth login flow, which is simpler on remote VMs
+that lack a web browser.
+.PP
+If you already have a working service account, skip to step 3.
+.SS 1. Create a service account using gcloud
+.IP
+.nf
+\f[C]
+gcloud iam service-accounts create gcs-read-only
+\f[R]
+.fi
+.PP
+You can re-use an existing service account as well (like the one
+created above).
+.SS 2. Attach a Viewer (read-only) or User (read-write) role to the service account
+.IP
+.nf
+\f[C]
+$ PROJECT_ID=my-project
+$ gcloud --verbose iam service-accounts add-iam-policy-binding \[rs]
+    gcs-read-only\[at]${PROJECT_ID}.iam.gserviceaccount.com \[rs]
+    --member=serviceAccount:gcs-read-only\[at]${PROJECT_ID}.iam.gserviceaccount.com \[rs]
+    --role=roles/storage.objectViewer
+\f[R]
+.fi
+.PP
+Use the Google Cloud console to identify a limited role.
+Some relevant pre-defined roles:
+.IP \[bu] 2
+\f[I]roles/storage.objectUser\f[R] -- read-write access but no admin
+privileges
+.IP \[bu] 2
+\f[I]roles/storage.objectViewer\f[R] -- read-only access to objects
+.IP \[bu] 2
+\f[I]roles/storage.admin\f[R] -- create buckets & administrative roles
+.SS 3. Get a temporary access key for the service account
+.IP
+.nf
+\f[C]
+$ gcloud auth application-default print-access-token \[rs]
+ --impersonate-service-account \[rs]
+ gcs-read-only\[at]${PROJECT_ID}.iam.gserviceaccount.com
+
+ya29.c.c0ASRK0GbAFEewXD [truncated]
+\f[R]
+.fi
+.SS 4. Update \f[C]access_token\f[R] setting
+.PP
+Hit \f[C]CTRL-C\f[R] when you see \f[I]waiting for code\f[R].
+This will save the config without doing the OAuth flow.
+.IP
+.nf
+\f[C]
+rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx
+\f[R]
+.fi
+.SS 5. Run rclone as usual
+.IP
+.nf
+\f[C]
+rclone ls dev-gcs:${MY_BUCKET}/
+\f[R]
+.fi
+.SS More Info on Service Accounts
+.IP \[bu] 2
+Official GCS
+Docs (https://cloud.google.com/compute/docs/access/service-accounts)
+.IP \[bu] 2
+Guide on Service Accounts using Key Files (less secure, but similar
+concepts) (https://forum.rclone.org/t/access-using-google-service-account/24822/2)
.SS Anonymous Access
.PP
For downloads of objects that permit public access you can configure
@@ -47364,6 +49101,39 @@ Env Var: RCLONE_GCS_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --gcs-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_GCS_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --gcs-access-token
+.PP
+Short-lived access token.
+.PP
+Leave blank normally.
+Needed only if you want to use a short-lived access token instead of
+interactive login.
+.PP
+Properties:
+.IP \[bu] 2
+Config: access_token
+.IP \[bu] 2
+Env Var: RCLONE_GCS_ACCESS_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS --gcs-directory-markers
.PP
Upload an empty object with a trailing slash when a new directory is
@@ -48247,6 +50017,13 @@ T}@T{
JSON Text Format for Google Apps scripts
T}
T{
+md
+T}@T{
+text/markdown
+T}@T{
+Markdown Text Format
+T}
+T{
odp
T}@T{
application/vnd.oasis.opendocument.presentation
@@ -48576,6 +50353,22 @@ Env Var: RCLONE_DRIVE_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --drive-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --drive-root-folder-id
.PP
ID of the root folder.
@@ -49974,6 +51767,61 @@ The result is a JSON array of matches, for example:
]
\f[R]
.fi
+.SS rescue
+.PP
+Rescue or delete any orphaned files
+.IP
+.nf
+\f[C]
+rclone backend rescue remote: [options] [+]
+\f[R]
+.fi
+.PP
+This command rescues or deletes any orphaned files or directories.
+.PP
+Sometimes files can get orphaned in Google Drive.
+This means that they are no longer in any folder in Google Drive.
+.PP
+This command finds those files and either rescues them to a directory
+you specify or deletes them.
+.PP
+Usage:
+.PP
+This can be used in 3 ways.
+.PP
+First, list all orphaned files
+.IP
+.nf
+\f[C]
+rclone backend rescue drive:
+\f[R]
+.fi
+.PP
+Second rescue all orphaned files to the directory indicated
+.IP
+.nf
+\f[C]
+rclone backend rescue drive: \[dq]relative/path/to/rescue/directory\[dq]
+\f[R]
+.fi
+.PP
+e.g.
+To rescue all orphans to a directory called \[dq]Orphans\[dq] in the top
+level
+.IP
+.nf
+\f[C]
+rclone backend rescue drive: Orphans
+\f[R]
+.fi
+.PP
+Third delete all orphaned files to the trash
+.IP
+.nf
+\f[C]
+rclone backend rescue drive: -o delete
+\f[R]
+.fi
.SS Limitations
.PP
Drive has quite a lot of rate limiting.
@@ -50124,9 +51972,9 @@ It will show you a client ID and client secret.
Make a note of these.
.RS 4
.PP
-(If you selected \[dq]External\[dq] at Step 5 continue to Step 9.
+(If you selected \[dq]External\[dq] at Step 5 continue to Step 10.
If you chose \[dq]Internal\[dq] you don\[aq]t need to publish and can
-skip straight to Step 10 but your destination drive must be part of the
+skip straight to Step 11 but your destination drive must be part of the
same Google Workspace.)
.RE
.IP "10." 4
@@ -50526,6 +52374,22 @@ Env Var: RCLONE_GPHOTOS_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --gphotos-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_GPHOTOS_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --gphotos-read-size
.PP
Set to read the size of media items.
@@ -50586,6 +52450,51 @@ Env Var: RCLONE_GPHOTOS_INCLUDE_ARCHIVED
Type: bool
.IP \[bu] 2
Default: false
+.SS --gphotos-proxy
+.PP
+Use the gphotosdl proxy for downloading the full resolution images
+.PP
+The Google API will deliver images and video which aren\[aq]t full
+resolution, and/or have EXIF data missing.
+.PP
+However if you ue the gphotosdl proxy tnen you can download original,
+unchanged images.
+.PP
+This runs a headless browser in the background.
+.PP
+Download the software from
+gphotosdl (https://github.com/rclone/gphotosdl)
+.PP
+First run with
+.IP
+.nf
+\f[C]
+gphotosdl -login
+\f[R]
+.fi
+.PP
+Then once you have logged into google photos close the browser window
+and run
+.IP
+.nf
+\f[C]
+gphotosdl
+\f[R]
+.fi
+.PP
+Then supply the parameter
+\f[C]--gphotos-proxy \[dq]http://localhost:8282\[dq]\f[R] to make rclone
+use the proxy.
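+.PP
+For example, a sketch of a full-resolution download via the proxy (the
+local path is illustrative):
+.IP
+.nf
+\f[C]
+rclone copy --gphotos-proxy \[dq]http://localhost:8282\[dq] gphotos:media/by-year/2024 /path/to/backup
+\f[R]
+.fi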
+.PP
+Properties:
+.IP \[bu] 2
+Config: proxy
+.IP \[bu] 2
+Env Var: RCLONE_GPHOTOS_PROXY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS --gphotos-encoding
.PP
The encoding for the backend.
@@ -50745,6 +52654,9 @@ relying on \[dq]Google Photos\[dq] as a backup of your photos. You will
not be able to use rclone to redownload original images. You could use
\[aq]google takeout\[aq] to recover the original photos as a last
resort\f[R]
+.PP
+\f[B]NB\f[R] you \f[B]can\f[R] use the --gphotos-proxy flag to use a
+headless browser to download images in full resolution.
.SS Downloading Videos
.PP
When videos are downloaded they are downloaded in a really compressed
@@ -50752,6 +52664,9 @@ version of the video compared to downloading it via the Google Photos
web interface.
This is covered by bug
#113672044 (https://issuetracker.google.com/issues/113672044).
+.PP
+\f[B]NB\f[R] you \f[B]can\f[R] use the --gphotos-proxy flag to use a
+headless browser to download videos in full resolution.
.SS Duplicates
.PP
If a file name is duplicated in a directory then rclone will add the
@@ -51885,6 +53800,22 @@ Env Var: RCLONE_HIDRIVE_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --hidrive-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_HIDRIVE_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --hidrive-scope-role
.PP
User-level that rclone should use when requesting access from HiDrive.
@@ -52839,6 +54770,194 @@ T}
.TE
.PP
See the metadata (https://rclone.org/docs/#metadata) docs for more info.
+.SH iCloud Drive
+.SS Configuration
+.PP
+The initial setup for an iCloud Drive backend involves getting a trust
+token/session.
+This can be done by simply using the regular iCloud password, and
+accepting the code prompt on another iCloud-connected device.
+.PP
+\f[B]IMPORTANT\f[R]: At the moment an app-specific password won\[aq]t
+be accepted.
+Only use your regular password and 2FA.
+.PP
+\f[C]rclone config\f[R] walks you through the token creation.
+The trust token is valid for 30 days, after which you will have to
+reauthenticate with \f[C]rclone reconnect\f[R] or
+\f[C]rclone config\f[R].
+.PP
+Here is an example of how to make a remote called \f[C]iclouddrive\f[R].
+First run:
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> iclouddrive
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / iCloud Drive
+ \[rs] (iclouddrive)
+[snip]
+Storage> iclouddrive
+Option apple_id.
+Apple ID.
+Enter a value.
+apple_id> APPLEID
+Option password.
+Password.
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Option config_2fa.
+Two-factor authentication: please enter your 2FA code
+Enter a value.
+config_2fa> 2FACODE
+Remote config
+--------------------
+[iclouddrive]
+- type: iclouddrive
+- apple_id: APPLEID
+- password: *** ENCRYPTED ***
+- cookies: ****************************
+- trust_token: ****************************
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
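+.PP
+Once configured, the remote can be used like any other, e.g. (the paths
+are illustrative):
+.IP
+.nf
+\f[C]
+rclone lsd iclouddrive:
+rclone copy /path/to/files iclouddrive:backup
+\f[R]
+.fi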
+.SS Advanced Data Protection
+.PP
+ADP is currently unsupported and needs to be disabled.
+.SS Standard options
+.PP
+Here are the Standard options specific to iclouddrive (iCloud Drive).
+.SS --iclouddrive-apple-id
+.PP
+Apple ID.
+.PP
+Properties:
+.IP \[bu] 2
+Config: apple_id
+.IP \[bu] 2
+Env Var: RCLONE_ICLOUDDRIVE_APPLE_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --iclouddrive-password
+.PP
+Password.
+.PP
+\f[B]NB\f[R] Input to this must be obscured - see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.PP
+Properties:
+.IP \[bu] 2
+Config: password
+.IP \[bu] 2
+Env Var: RCLONE_ICLOUDDRIVE_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --iclouddrive-trust-token
+.PP
+Trust token (internal use)
+.PP
+Properties:
+.IP \[bu] 2
+Config: trust_token
+.IP \[bu] 2
+Env Var: RCLONE_ICLOUDDRIVE_TRUST_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --iclouddrive-cookies
+.PP
+cookies (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: cookies
+.IP \[bu] 2
+Env Var: RCLONE_ICLOUDDRIVE_COOKIES
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Advanced options
+.PP
+Here are the Advanced options specific to iclouddrive (iCloud Drive).
+.SS --iclouddrive-client-id
+.PP
+Client id
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_id
+.IP \[bu] 2
+Env Var: RCLONE_ICLOUDDRIVE_CLIENT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default:
+\[dq]d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d\[dq]
+.SS --iclouddrive-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_ICLOUDDRIVE_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+.SS --iclouddrive-description
+.PP
+Description of the remote.
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_ICLOUDDRIVE_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SH Internet Archive
.PP
The Internet Archive backend utilizes Items on
@@ -53829,6 +55948,22 @@ Env Var: RCLONE_JOTTACLOUD_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --jottacloud-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_JOTTACLOUD_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --jottacloud-md5-memory-limit
.PP
Files bigger than this will be cached on disk to calculate the MD5 if
@@ -54987,6 +57122,22 @@ Env Var: RCLONE_MAILRU_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --mailru-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_MAILRU_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --mailru-speedup-file-patterns
.PP
Comma separated list of file name patterns eligible for speedup (put by
@@ -56363,6 +58514,16 @@ identity, the user-assigned identity will be used by default.
If the resource has multiple user-assigned identities you will need to
unset \f[C]env_auth\f[R] and set \f[C]use_msi\f[R] instead.
See the \f[C]use_msi\f[R] section.
+.PP
+If you are operating in disconnected clouds, or private clouds such as
+Azure Stack you may want to set
+\f[C]disable_instance_discovery = true\f[R].
+This determines whether rclone requests Microsoft Entra instance
+metadata from \f[C]https://login.microsoft.com/\f[R] before
+authenticating.
+Setting this to \f[C]true\f[R] will skip this request, making you
+responsible for ensuring the configured authority is valid and
+trustworthy.
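+.PP
+A minimal sketch of such a config section (the account name is
+illustrative, and a private cloud will usually need its endpoint set
+too):
+.IP
+.nf
+\f[C]
+[azstack]
+type = azureblob
+account = myaccount
+env_auth = true
+disable_instance_discovery = true
+\f[R]
+.fi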
.SS Env Auth: 3. Azure CLI credentials (as used by the az tool)
.PP
Credentials created with the \f[C]az\f[R] tool can be picked up using
@@ -56526,6 +58687,16 @@ be explicitly specified using exactly one of the
If none of \f[C]msi_object_id\f[R], \f[C]msi_client_id\f[R], or
\f[C]msi_mi_res_id\f[R] is set, this is is equivalent to using
\f[C]env_auth\f[R].
+.SS Azure CLI tool \f[C]az\f[R]
+.PP
+Set to use the Azure CLI tool
+\f[C]az\f[R] (https://learn.microsoft.com/en-us/cli/azure/) as the sole
+means of authentication.
+.PP
+Setting this can be useful if you wish to use the \f[C]az\f[R] CLI on a
+host with a System Managed Identity that you do not want to use.
+.PP
+Don\[aq]t set \f[C]env_auth\f[R] at the same time.
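+.PP
+For example, a sketch of a config section using this (the account name
+is illustrative):
+.IP
+.nf
+\f[C]
+[azcli]
+type = azureblob
+account = myaccount
+use_az = true
+\f[R]
+.fi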
.SS Anonymous
.PP
If you want to access resources with public anonymous access then set
@@ -56782,6 +58953,28 @@ Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE
Type: string
.IP \[bu] 2
Required: false
+.SS --azureblob-disable-instance-discovery
+.PP
+Skip requesting Microsoft Entra instance metadata
+.PP
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+.PP
+It determines whether rclone requests Microsoft Entra instance metadata
+from \f[C]https://login.microsoft.com/\f[R] before authenticating.
+.PP
+Setting this to true will skip this request, making you responsible for
+ensuring the configured authority is valid and trustworthy.
+.PP
+Properties:
+.IP \[bu] 2
+Config: disable_instance_discovery
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_DISABLE_INSTANCE_DISCOVERY
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --azureblob-use-msi
.PP
Use a managed service identity to authenticate (only works in Azure).
@@ -56867,6 +59060,28 @@ Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
Type: bool
.IP \[bu] 2
Default: false
+.SS --azureblob-use-az
+.PP
+Use Azure CLI tool az for authentication
+.PP
+Set to use the Azure CLI tool
+az (https://learn.microsoft.com/en-us/cli/azure/) as the sole means of
+authentication.
+.PP
+Setting this can be useful if you wish to use the az CLI on a host with
+a System Managed Identity that you do not want to use.
+.PP
+Don\[aq]t set env_auth at the same time.
+.PP
+Properties:
+.IP \[bu] 2
+Config: use_az
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_USE_AZ
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --azureblob-endpoint
.PP
Endpoint for the service.
@@ -58440,6 +60655,37 @@ Note: If you have a special region, you may need a different host in
step 4 and 5.
Here are some
hints (https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
+.SS Using OAuth Client Credential flow
+.PP
+OAuth Client Credential flow will allow rclone to use permissions
+directly associated with the Azure AD Enterprise application, rather
+than adopting the context of an Azure AD user account.
+.PP
+This flow can be enabled by following the steps below:
+.IP "1." 3
+Create the Enterprise App registration in the Azure AD portal and obtain
+a Client ID and Client Secret as described above.
+.IP "2." 3
+Ensure that the application has the appropriate permissions and they are
+assigned as \f[I]Application Permissions\f[R].
+.IP "3." 3
+Configure the remote, ensuring that \f[I]Client ID\f[R] and \f[I]Client
+Secret\f[R] are entered correctly.
+.IP "4." 3
+In the \f[I]Advanced Config\f[R] section, enter \f[C]true\f[R] for
+\f[C]client_credentials\f[R] and in the \f[C]tenant\f[R] section enter
+the tenant ID.
+.PP
+When it comes to choosing the type of connection, note that not all
+types work with the client credentials flow.
+In particular the \[dq]onedrive\[dq] option does not work.
+You can use the \[dq]sharepoint\[dq] option, or if that does not find
+the correct drive ID, type it in manually with the \[dq]driveid\[dq]
+option.
+.PP
+\f[B]NOTE\f[R] Assigning permissions directly to the application means
+that anyone with the \f[I]Client ID\f[R] and \f[I]Client Secret\f[R] can
+access your OneDrive files.
+Take care to safeguard these credentials.
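+.PP
+A minimal sketch of the resulting config section (all the values below
+are placeholders):
+.IP
+.nf
+\f[C]
+[business365]
+type = onedrive
+client_id = YOUR_CLIENT_ID
+client_secret = YOUR_CLIENT_SECRET
+tenant = YOUR_TENANT_ID
+client_credentials = true
+drive_id = YOUR_DRIVE_ID
+drive_type = business
+\f[R]
+.fi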
.SS Modification times and hashes
.PP
OneDrive allows modification times to be set on objects accurate to 1
@@ -58709,6 +60955,22 @@ Microsoft Cloud Germany
Azure and Office 365 operated by Vnet Group in China
.RE
.RE
+.SS --onedrive-tenant
+.PP
+ID of the service principal\[aq]s tenant.
+Also called its directory ID.
+.PP
+Set this if using the Client Credential flow.
+.PP
+Properties:
+.IP \[bu] 2
+Config: tenant
+.IP \[bu] 2
+Env Var: RCLONE_ONEDRIVE_TENANT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS Advanced options
.PP
Here are the Advanced options specific to onedrive (Microsoft OneDrive).
@@ -58755,6 +61017,22 @@ Env Var: RCLONE_ONEDRIVE_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --onedrive-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_ONEDRIVE_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --onedrive-chunk-size
.PP
Chunk size to upload files with - must be multiple of 320k (327,680
@@ -60770,7 +63048,9 @@ Type: string
Required: true
.SS --oos-compartment
.PP
-Object storage compartment OCID
+Specify the compartment OCID if you need to list buckets.
+.PP
+Listing objects works without a compartment OCID.
.PP
Properties:
.IP \[bu] 2
@@ -60782,7 +63062,7 @@ Provider: !no_auth
.IP \[bu] 2
Type: string
.IP \[bu] 2
-Required: true
+Required: false
.SS --oos-region
.PP
Object storage Region
@@ -63403,6 +65683,10 @@ Pcloud App Client Id - leave blank normally.
client_id>
Pcloud App Client Secret - leave blank normally.
client_secret>
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
Remote config
Use web browser to automatically authenticate rclone with remote?
* Say Y if the machine running rclone has a web browser you can use
@@ -63432,6 +65716,10 @@ y/e/d> y
See the remote setup docs (https://rclone.org/remote_setup/) for how to
set it up on a machine with no Internet browser available.
.PP
+Note that if you are using remote config with rclone authorize while
+your pCloud server is in the EU region, you will need to set the
+hostname in \[aq]Edit advanced config\[aq], otherwise you might get a
+token error.
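+.PP
+For example, the EU hostname could be set like this (a sketch with an
+illustrative remote name):
+.IP
+.nf
+\f[C]
+rclone config update mypcloud hostname eapi.pcloud.com
+\f[R]
+.fi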
+.PP
Note that rclone runs a webserver on your local machine to collect the
token as returned from pCloud.
This only runs from the moment it opens your browser to the moment you
@@ -63617,6 +65905,22 @@ Env Var: RCLONE_PCLOUD_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --pcloud-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_PCLOUD_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --pcloud-encoding
.PP
The encoding for the backend.
@@ -63841,79 +66145,37 @@ Required: true
.SS Advanced options
.PP
Here are the Advanced options specific to pikpak (PikPak).
-.SS --pikpak-client-id
+.SS --pikpak-device-id
.PP
-OAuth Client Id.
-.PP
-Leave blank normally.
+Device ID used for authorization.
.PP
Properties:
.IP \[bu] 2
-Config: client_id
+Config: device_id
.IP \[bu] 2
-Env Var: RCLONE_PIKPAK_CLIENT_ID
+Env Var: RCLONE_PIKPAK_DEVICE_ID
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
-.SS --pikpak-client-secret
+.SS --pikpak-user-agent
.PP
-OAuth Client Secret.
+HTTP user agent for pikpak.
.PP
-Leave blank normally.
+Defaults to \[dq]Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
+Gecko/20100101 Firefox/129.0\[dq] unless overridden with
+\[dq]--pikpak-user-agent\[dq] on the command line.
.PP
Properties:
.IP \[bu] 2
-Config: client_secret
+Config: user_agent
.IP \[bu] 2
-Env Var: RCLONE_PIKPAK_CLIENT_SECRET
+Env Var: RCLONE_PIKPAK_USER_AGENT
.IP \[bu] 2
Type: string
.IP \[bu] 2
-Required: false
-.SS --pikpak-token
-.PP
-OAuth Access Token as a JSON blob.
-.PP
-Properties:
-.IP \[bu] 2
-Config: token
-.IP \[bu] 2
-Env Var: RCLONE_PIKPAK_TOKEN
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --pikpak-auth-url
-.PP
-Auth server URL.
-.PP
-Leave blank to use the provider defaults.
-.PP
-Properties:
-.IP \[bu] 2
-Config: auth_url
-.IP \[bu] 2
-Env Var: RCLONE_PIKPAK_AUTH_URL
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --pikpak-token-url
-.PP
-Token server url.
-.PP
-Leave blank to use the provider defaults.
-.PP
-Properties:
-.IP \[bu] 2
-Config: token_url
-.IP \[bu] 2
-Env Var: RCLONE_PIKPAK_TOKEN_URL
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
+Default: \[dq]Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
+Gecko/20100101 Firefox/129.0\[dq]
.SS --pikpak-root-folder-id
.PP
ID of the root folder.
@@ -63962,6 +66224,22 @@ Env Var: RCLONE_PIKPAK_TRASHED_ONLY
Type: bool
.IP \[bu] 2
Default: false
+.SS --pikpak-no-media-link
+.PP
+Use original file links instead of media links.
+.PP
+This avoids issues caused by invalid media links, but may reduce
+download speeds.
+.PP
+Properties:
+.IP \[bu] 2
+Config: no_media_link
+.IP \[bu] 2
+Env Var: RCLONE_PIKPAK_NO_MEDIA_LINK
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --pikpak-hash-memory-limit
.PP
Files bigger than this will be cached on disk to calculate hash if
@@ -64615,6 +66893,22 @@ Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --premiumizeme-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_PREMIUMIZEME_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --premiumizeme-encoding
.PP
The encoding for the backend.
@@ -65294,6 +67588,22 @@ Env Var: RCLONE_PUTIO_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --putio-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_PUTIO_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --putio-encoding
.PP
The encoding for the backend.
@@ -66458,7 +68768,7 @@ If you have a certificate you may use it to sign your public key,
creating a separate SSH user certificate that should be used instead of
the plain public key extracted from the private key.
Then you must provide the path to the user certificate public key file
-in \f[C]pubkey_file\f[R].
+in \f[C]pubkey_file\f[R] or the content of the file in \f[C]pubkey\f[R].
.PP
Note: This is not the traditional public key paired with your private
key, typically saved as \f[C]/home/$USER/.ssh/id_rsa.pub\f[R].
@@ -66880,6 +69190,22 @@ Env Var: RCLONE_SFTP_KEY_FILE_PASS
Type: string
.IP \[bu] 2
Required: false
+.SS --sftp-pubkey
+.PP
+SSH public certificate for public certificate based authentication.
+Set this if you have a signed certificate you want to use for
+authentication.
+If specified, this will override pubkey_file.
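+.PP
+For example, the certificate contents could be loaded from a file like
+this (a sketch; the remote name and path are illustrative):
+.IP
+.nf
+\f[C]
+rclone config update mysftp pubkey \[dq]$(cat \[ti]/.ssh/id_rsa-cert.pub)\[dq]
+\f[R]
+.fi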
+.PP
+Properties:
+.IP \[bu] 2
+Config: pubkey
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_PUBKEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS --sftp-pubkey-file
.PP
Optional path to public key file.
@@ -67696,9 +70022,8 @@ details (https://docs.hetzner.com/robot/storage-box/access/access-ssh-rsync-borg
SMB is a communication protocol to share files over
network (https://en.wikipedia.org/wiki/Server_Message_Block).
.PP
-This relies on go-smb2
-library (https://github.com/hirochachacha/go-smb2/) for communication
-with SMB protocol.
+This relies on the go-smb2 library (https://github.com/CloudSoda/go-smb2/)
+for communication with the SMB protocol.
.PP
Paths are specified as \f[C]remote:sharename\f[R] (or \f[C]remote:\f[R]
for the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
@@ -70390,6 +72715,31 @@ Env Var: RCLONE_WEBDAV_UNIX_SOCKET
Type: string
.IP \[bu] 2
Required: false
+.SS --webdav-auth-redirect
+.PP
+Preserve authentication on redirect.
+.PP
+If the server redirects rclone to a new domain when it is trying to read
+a file then normally rclone will drop the Authorization: header from the
+request.
+.PP
+This is standard security practice to avoid sending your credentials to
+an unknown webserver.
+.PP
+However preserving the header is desirable in some circumstances.
+If you are getting an error like \[dq]401 Unauthorized\[dq] when rclone
+is attempting to read files from the webdav server then you can try
+this option.
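+.PP
+For example (the remote name and path are illustrative):
+.IP
+.nf
+\f[C]
+rclone lsl --webdav-auth-redirect mywebdav:path
+\f[R]
+.fi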
+.PP
+Properties:
+.IP \[bu] 2
+Config: auth_redirect
+.IP \[bu] 2
+Env Var: RCLONE_WEBDAV_AUTH_REDIRECT
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --webdav-description
.PP
Description of the remote.
@@ -70836,6 +73186,22 @@ Env Var: RCLONE_YANDEX_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --yandex-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_YANDEX_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --yandex-hard-delete
.PP
Delete files permanently rather than putting them into the trash.
@@ -71196,6 +73562,35 @@ Env Var: RCLONE_ZOHO_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --zoho-client-credentials
+.PP
+Use client credentials OAuth flow.
+.PP
+This will use the OAUTH2 client Credentials Flow as described in RFC
+6749.
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_credentials
+.IP \[bu] 2
+Env Var: RCLONE_ZOHO_CLIENT_CREDENTIALS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --zoho-upload-cutoff
+.PP
+Cutoff for switching to large file upload api (>= 10 MiB).
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_ZOHO_UPLOAD_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 10Mi
.SS --zoho-encoding
.PP
The encoding for the backend.
@@ -71779,14 +74174,14 @@ $ rclone -L ls /tmp/a
6 b/one
\f[R]
.fi
-.SS --links, -l
+.SS --local-links, --links, -l
.PP
Normally rclone will ignore symlinks or junction points (which behave
like symlinks under Windows).
.PP
If you supply this flag then rclone will copy symbolic links from the
local storage, and store them as text files, with a
-\[aq].rclonelink\[aq] suffix in the remote storage.
+\f[C].rclonelink\f[R] suffix in the remote storage.
.PP
The text file will contain the target of the symbolic link (see
example).
@@ -71812,7 +74207,7 @@ $ rclone copy -l /tmp/a/ remote:/tmp/a/
\f[R]
.fi
.PP
-The remote files are created with a \[aq].rclonelink\[aq] suffix
+The remote files are created with a \f[C].rclonelink\f[R] suffix
.IP
.nf
\f[C]
@@ -71873,6 +74268,11 @@ $ tree /tmp/c
\f[R]
.fi
.PP
+Note that \f[C]--local-links\f[R] just enables this feature for the
+local backend.
+\f[C]--links\f[R] and \f[C]-l\f[R] enable the feature for all supported
+backends and the VFS.
+.PP
Note that this flag is incompatible with \f[C]--copy-links\f[R] /
\f[C]-L\f[R].
.SS Restricting filesystems with --one-file-system
@@ -71965,10 +74365,10 @@ Env Var: RCLONE_LOCAL_COPY_LINKS
Type: bool
.IP \[bu] 2
Default: false
-.SS --links / -l
+.SS --local-links
.PP
Translate symlinks to/from regular files with a \[aq].rclonelink\[aq]
-extension.
+extension for the local backend.
.PP
Properties:
.IP \[bu] 2
@@ -72472,6 +74872,509 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
+.SS v1.69.0 - 2025-01-12
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)
+.IP \[bu] 2
+New backends
+.RS 2
+.IP \[bu] 2
+iCloud Drive (https://rclone.org/iclouddrive/) (lostb1t)
+.IP \[bu] 2
+Cloudinary (https://rclone.org/cloudinary/) (yuval-cloudinary)
+.IP \[bu] 2
+New S3 providers:
+.RS 2
+.IP \[bu] 2
+Outscale (https://rclone.org/s3/#outscale) (Matthias Gatto)
+.IP \[bu] 2
+Selectel (https://rclone.org/s3/#selectel) (Nick Craig-Wood)
+.RE
+.RE
+.IP \[bu] 2
+Security fixes
+.RS 2
+.IP \[bu] 2
+serve sftp: Resolve CVE-2024-45337 - Misuse of
+ServerConfig.PublicKeyCallback may cause authorization bypass
+(dependabot)
+.RS 2
+.IP \[bu] 2
+Rclone was \f[B]not\f[R] vulnerable to this.
+.IP \[bu] 2
+See https://github.com/advisories/GHSA-v778-237x-gjrc
+.RE
+.IP \[bu] 2
+build: Update golang.org/x/net to v0.33.0 to fix CVE-2024-45338 -
+Non-linear parsing of case-insensitive content (Nick Craig-Wood)
+.RS 2
+.IP \[bu] 2
+Rclone was \f[B]not\f[R] vulnerable to this.
+.IP \[bu] 2
+See https://github.com/advisories/GHSA-w32m-9786-jp63
+.RE
+.RE
+.IP \[bu] 2
+New Features
+.RS 2
+.IP \[bu] 2
+accounting: Write the current bwlimit to the log on SIGUSR2 (Nick
+Craig-Wood)
+.IP \[bu] 2
+bisync: Change exit code from 2 to 7 for critically aborted run
+(albertony)
+.IP \[bu] 2
+build
+.RS 2
+.IP \[bu] 2
+Update all dependencies (Nick Craig-Wood)
+.IP \[bu] 2
+Replace Windows-specific \f[C]NewLazyDLL\f[R] with
+\f[C]NewLazySystemDLL\f[R] (albertony)
+.RE
+.IP \[bu] 2
+cmd: Change exit code from 1 to 2 for syntax and usage errors
+(albertony)
+.IP \[bu] 2
+docker serve: make sure all mount and VFS options are parsed (Nick
+Craig-Wood)
+.IP \[bu] 2
+doc fixes (albertony, Alexandre Hamez, Anthony Metzidis, buengese, Dan
+McArdle, David Seifert, Francesco Frassinelli, Michael R.
+Davis, Nick Craig-Wood, Pawel Palucha, Randy Bush, remygrandin, Sam
+Harrison, shenpengfeng, tgfisher, Thomas ten Cate, ToM, Tony Metzidis,
+vintagefuture, Yxxx)
+.IP \[bu] 2
+fs: Make \f[C]--links\f[R] flag global and add new
+\f[C]--local-links\f[R] and \f[C]--vfs-links\f[R] flags (Nick
+Craig-Wood)
+.IP \[bu] 2
+http servers: Disable automatic authentication skipping for unix sockets
+in http servers (Moises Lima)
+.RS 2
+.IP \[bu] 2
+This was making it impossible to use unix sockets with a proxy
+.IP \[bu] 2
+This might now cause rclone to need authentication where it didn\[aq]t
+before
+.RE
+.IP \[bu] 2
+oauthutil: add support for OAuth client credential flow (Martin Hassack,
+Nick Craig-Wood)
+.IP \[bu] 2
+operations: make log messages consistent for mkdir/rmdir at INFO level
+(Nick Craig-Wood)
+.IP \[bu] 2
+rc: Add \f[C]relative\f[R] to
+vfs/queue-set-expiry (https://rclone.org/rc/#vfs-queue-set-expiry) (Nick
+Craig-Wood)
+.IP \[bu] 2
+serve dlna: Sort the directory entries by directories first then
+alphabetically by name (Nick Craig-Wood)
+.IP \[bu] 2
+serve nfs
+.RS 2
+.IP \[bu] 2
+Introduce symlink support (Nick Craig-Wood)
+.IP \[bu] 2
+Implement \f[C]--nfs-cache-type\f[R] symlink (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+size: Make output compatible with \f[C]-P\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+test makefiles: Add \f[C]--flat\f[R] flag for making directories with
+many entries (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+accounting
+.RS 2
+.IP \[bu] 2
+Fix global error accounting (Benjamin Legrand)
+.IP \[bu] 2
+Fix debug printing when debug wasn\[aq]t set (Nick Craig-Wood)
+.IP \[bu] 2
+Fix race stopping/starting the stats counter (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+rc/job: Use mutex for adding listeners thread safety (hayden.pan)
+.IP \[bu] 2
+serve docker: Fix incorrect GID assignment (TAKEI Yuya)
+.IP \[bu] 2
+serve nfs: Fix missing inode numbers which was messing up
+\f[C]ls -laR\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+serve s3: Fix \f[C]Last-Modified\f[R] timestamp (Nick Craig-Wood)
+.IP \[bu] 2
+serve sftp: Fix loading of authorized keys file with comment on last
+line (albertony)
+.RE
+.IP \[bu] 2
+Mount
+.RS 2
+.IP \[bu] 2
+Introduce symlink support (Filipe Azevedo, Nick Craig-Wood)
+.IP \[bu] 2
+Better snap mount error message (divinity76)
+.IP \[bu] 2
+mount2: Fix missing \f[C].\f[R] and \f[C]..\f[R] entries (Filipe
+Azevedo)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+With \f[C]--vfs-used-is-size\f[R] value is calculated and then thrown
+away (Ilias Ozgur Can Leonard)
+.IP \[bu] 2
+Add symlink support to VFS (Filipe Azevedo, Nick Craig-Wood)
+.RS 2
+.IP \[bu] 2
+This can be enabled with the specific \f[C]--vfs-links\f[R] flag or the
+global \f[C]--links\f[R] flag
+.RE
+.IP \[bu] 2
+Fix open files disappearing from directory listings (Nick Craig-Wood)
+.IP \[bu] 2
+Add remote name to vfs cache log messages (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Cache
+.RS 2
+.IP \[bu] 2
+Fix parent not getting pinned when remote is a file (nielash)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Add \f[C]--azureblob-disable-instance-discovery\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Add \f[C]--azureblob-use-az\f[R] to force the use of the Azure CLI for
+auth (Nick Craig-Wood)
+.IP \[bu] 2
+Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Azurefiles
+.RS 2
+.IP \[bu] 2
+Fix missing x-ms-file-request-intent header (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Add \f[C]daysFromStartingToCancelingUnfinishedLargeFiles\f[R] to
+\f[C]backend lifecycle\f[R] command (Louis Laureys)
+.RE
+.IP \[bu] 2
+Box
+.RS 2
+.IP \[bu] 2
+Fix server-side copying a file over existing dst (nielash)
+.IP \[bu] 2
+Fix panic when decoding corrupted PEM from JWT file (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Add support for markdown format (Noam Ross)
+.IP \[bu] 2
+Implement \f[C]rclone backend rescue\f[R] to rescue orphaned files (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Dropbox
+.RS 2
+.IP \[bu] 2
+Fix server side copying over existing object (Nick Craig-Wood)
+.IP \[bu] 2
+Fix return status when full to be fatal error (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Implement \f[C]--ftp-no-check-upload\f[R] to allow upload to write only
+dirs (Nick Craig-Wood)
+.IP \[bu] 2
+Fix ls commands returning empty on \[dq]Microsoft FTP Service\[dq]
+servers (Francesco Frassinelli)
+.RE
+.IP \[bu] 2
+Gofile
+.RS 2
+.IP \[bu] 2
+Fix server side copying over existing object (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Google Cloud Storage
+.RS 2
+.IP \[bu] 2
+Add access token auth with \f[C]--gcs-access-token\f[R] (Leandro
+Piccilli)
+.IP \[bu] 2
+Update docs on service account access tokens (Anthony Metzidis)
+.RE
+.IP \[bu] 2
+Googlephotos
+.RS 2
+.IP \[bu] 2
+Implement \f[C]--gphotos-proxy\f[R] to allow download of full resolution
+media (Nick Craig-Wood)
+.IP \[bu] 2
+Fix nil pointer crash on upload (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+HTTP
+.RS 2
+.IP \[bu] 2
+Fix incorrect URLs with initial slash (Oleg Kunitsyn)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Add support for OAuth client credential flow (Martin Hassack, Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix time precision for OneDrive personal (Nick Craig-Wood)
+.IP \[bu] 2
+Fix server side copying over existing object (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Opendrive
+.RS 2
+.IP \[bu] 2
+Add \f[C]rclone about\f[R] support to the backend (quiescens)
+.RE
+.IP \[bu] 2
+Oracle Object Storage
+.RS 2
+.IP \[bu] 2
+Make specifying \f[C]compartmentid\f[R] optional (Manoj Ghosh)
+.IP \[bu] 2
+Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Pikpak
+.RS 2
+.IP \[bu] 2
+Add option to use original file links (wiserain)
+.RE
+.IP \[bu] 2
+Protondrive
+.RS 2
+.IP \[bu] 2
+Improve performance of Proton Drive backend (Lawrence Murray)
+.RE
+.IP \[bu] 2
+Putio
+.RS 2
+.IP \[bu] 2
+Fix server side copying over existing object (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Add initial \f[C]--s3-directory-bucket\f[R] to support AWS Directory
+Buckets (Nick Craig-Wood); see the example below
+.IP \[bu] 2
+Add Wasabi \f[C]eu-south-1\f[R] region (Diego Monti)
+.IP \[bu] 2
+Fix download of compressed files from Cloudflare R2 (Nick Craig-Wood)
+.IP \[bu] 2
+Rename glacier storage class to flexible retrieval (Henry Lee)
+.IP \[bu] 2
+Quit multipart uploads if the context is cancelled (Nick Craig-Wood)
+.RE
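+.PP
+A sketch of the directory bucket flag (the bucket name and path are
+placeholders; AWS directory buckets follow their own naming scheme):
+.IP
+.nf
+\f[C]
+# Tell rclone the target is an AWS S3 directory bucket
+rclone lsf --s3-directory-bucket s3:my-directory-bucket/path
+\f[R]
+.fi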
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Allow inline ssh public certificate for sftp (Dimitar Ivanov)
+.IP \[bu] 2
+Fix nil check when using auth proxy (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Smb
+.RS 2
+.IP \[bu] 2
+Add initial support for Kerberos authentication (more work needed)
+(Francesco Frassinelli)
+.IP \[bu] 2
+Fix panic if stat fails (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Sugarsync
+.RS 2
+.IP \[bu] 2
+Fix server side copying over existing object (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+WebDAV
+.RS 2
+.IP \[bu] 2
+Nextcloud: implement backoff and retry for 423 LOCKED errors (Nick
+Craig-Wood)
+.IP \[bu] 2
+Add \f[C]--webdav-auth-redirect\f[R] to fix 401 unauthorized on
+redirect (Nick Craig-Wood)
+.RE
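+.PP
+A sketch of the new flag (the remote name is a placeholder):
+.IP
+.nf
+\f[C]
+# Preserve the authentication header when the server redirects
+rclone lsd --webdav-auth-redirect webdav:
+\f[R]
+.fi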
+.IP \[bu] 2
+Yandex
+.RS 2
+.IP \[bu] 2
+Fix server side copying over existing object (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Zoho
+.RS 2
+.IP \[bu] 2
+Use download server to accelerate downloads (buengese)
+.IP \[bu] 2
+Switch to the large file upload API for larger files and fix missing
+URL encoding of filenames in the upload API (buengese)
+.IP \[bu] 2
+Print clear error message when missing oauth scope (buengese)
+.IP \[bu] 2
+Try to handle rate limits a bit better (buengese)
+.IP \[bu] 2
+Add support for private spaces (buengese)
+.IP \[bu] 2
+Make upload cutoff configurable (buengese)
+.RE
+.SS v1.68.2 - 2024-11-15
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
+.IP \[bu] 2
+Security fixes
+.RS 2
+.IP \[bu] 2
+local backend: CVE-2024-52522: fix permission and ownership on symlinks
+with \f[C]--links\f[R] and \f[C]--metadata\f[R] (Nick Craig-Wood)
+.RS 2
+.IP \[bu] 2
+Only affects users using \f[C]--metadata\f[R] and \f[C]--links\f[R] and
+copying files to the local backend
+.IP \[bu] 2
+See
+https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+.RE
+.IP \[bu] 2
+build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
+(dependabot)
+.RS 2
+.IP \[bu] 2
+This is an issue in a dependency which is used for JWT certificates
+.IP \[bu] 2
+See
+https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+.RE
+.RE
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick
+Craig-Wood)
+.IP \[bu] 2
+bisync: Fix output capture restoring the wrong output for logrus
+(Dimitrios Slamaris)
+.IP \[bu] 2
+dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
+.IP \[bu] 2
+serve s3: Fix excess locking which was making serve s3 single threaded
+(Nick Craig-Wood)
+.IP \[bu] 2
+doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
+.RE
+.IP \[bu] 2
+Local
+.RS 2
+.IP \[bu] 2
+Fix permission and ownership on symlinks with \f[C]--links\f[R] and
+\f[C]--metadata\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Fix \f[C]--copy-links\f[R] on macOS when cloning (nielash)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Fix Retry-After handling to also look at 503 errors (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Pikpak
+.RS 2
+.IP \[bu] 2
+Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+.IP \[bu] 2
+Fix fatal crash on startup with token that can\[aq]t be refreshed (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Fix crash when using \f[C]--s3-download-url\f[R] after migration to
+SDKv2 (Nick Craig-Wood)
+.IP \[bu] 2
+Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan
+Raev)
+.IP \[bu] 2
+Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+.RE
+.SS v1.68.1 - 2024-09-24
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+build: Fix docker release build (ttionya)
+.IP \[bu] 2
+doc fixes (Nick Craig-Wood, Pawel Palucha)
+.IP \[bu] 2
+fs
+.RS 2
+.IP \[bu] 2
+Fix \f[C]--dump filters\f[R] not always appearing (Nick Craig-Wood)
+.IP \[bu] 2
+Fix setting \f[C]stringArray\f[R] config values from environment
+variables (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+rc: Fix default value of \f[C]--metrics-addr\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+serve docker: Add missing \f[C]vfs-read-chunk-streams\f[R] option in
+docker volume driver (Divyam)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Fix spurious \[dq]Couldn\[aq]t decode error response: EOF\[dq] DEBUG
+message (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Pikpak
+.RS 2
+.IP \[bu] 2
+Fix login issue where token retrieval fails (wiserain)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Fix rclone ignoring static credentials when \f[C]env_auth=true\f[R]
+(Nick Craig-Wood)
+.RE
.SS v1.68.0 - 2024-09-08
.PP
See commits (https://github.com/rclone/rclone/compare/v1.67.0...v1.68.0)
@@ -72738,6 +75641,8 @@ Implement \f[C]SetModTime\f[R] (Georg Welzel)
.IP \[bu] 2
Implement \f[C]OpenWriterAt\f[R] feature to enable multipart uploads
(Georg Welzel)
+.IP \[bu] 2
+Fix failing large file uploads (Georg Welzel)
.RE
.IP \[bu] 2
Pikpak
@@ -89013,6 +91918,80 @@ fsantagostinobietti
<6057026+fsantagostinobietti@users.noreply.github.com>
.IP \[bu] 2
Oleg Kunitsyn <114359669+hiddenmarten@users.noreply.github.com>
+.IP \[bu] 2
+Divyam <47589864+divyam234@users.noreply.github.com>
+.IP \[bu] 2
+ttionya
+.IP \[bu] 2
+quiescens
+.IP \[bu] 2
+rishi.sridhar
+.IP \[bu] 2
+Lawrence Murray
+.IP \[bu] 2
+Leandro Piccilli
+.IP \[bu] 2
+Benjamin Legrand
+.IP \[bu] 2
+Noam Ross
+.IP \[bu] 2
+lostb1t
+.IP \[bu] 2
+Matthias Gatto
+.IP \[bu] 2
+Andr\['e] Tran
+.IP \[bu] 2
+Simon Bos
+.IP \[bu] 2
+Alexandre Hamez <199517+ahamez@users.noreply.github.com>
+.IP \[bu] 2
+Randy Bush
+.IP \[bu] 2
+Diego Monti
+.IP \[bu] 2
+tgfisher
+.IP \[bu] 2
+Moises Lima
+.IP \[bu] 2
+Dimitar Ivanov
+.IP \[bu] 2
+shenpengfeng
+.IP \[bu] 2
+Dimitrios Slamaris
+.IP \[bu] 2
+vintagefuture <39503528+vintagefuture@users.noreply.github.com>
+.IP \[bu] 2
+David Seifert
+.IP \[bu] 2
+Michael R. Davis
+.IP \[bu] 2
+remygrandin
+.IP \[bu] 2
+Ilias Ozgur Can Leonard
+.IP \[bu] 2
+divinity76
+.IP \[bu] 2
+Martin Hassack
+.IP \[bu] 2
+Filipe Azevedo
+.IP \[bu] 2
+hayden.pan
+.IP \[bu] 2
+Yxxx <45665172+marsjane@users.noreply.github.com>
+.IP \[bu] 2
+Thomas ten Cate
+.IP \[bu] 2
+Louis Laureys
+.IP \[bu] 2
+Henry Lee
+.IP \[bu] 2
+ToM
+.IP \[bu] 2
+TAKEI Yuya <853320+takei-yuya@users.noreply.github.com>
+.IP \[bu] 2
+Francesco Frassinelli
+
.SH Contact the rclone project
.SS Forum
.PP