diff --git a/MANUAL.html b/MANUAL.html
index d4b683fec..29c5bfac1 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -81,7 +81,7 @@

rclone(1) User Manual

Nick Craig-Wood

-

Sep 08, 2024

+

Jan 12, 2025

Rclone syncs your files to cloud storage

rclone logo

@@ -148,6 +148,7 @@
  • Arvan Cloud Object Storage (AOS)
  • Citrix ShareFile
  • Cloudflare R2
+
  • Cloudinary
  • DigitalOcean Spaces
  • Digi Storage
  • Dreamhost
@@ -164,6 +165,7 @@
  • Hetzner Storage Box
  • HiDrive
  • HTTP
+
  • iCloud Drive
  • ImageKit
  • Internet Archive
  • Jottacloud
@@ -191,6 +193,7 @@
  • OpenStack Swift
  • Oracle Cloud Storage Swift
  • Oracle Object Storage
+
  • Outscale
  • ownCloud
  • pCloud
  • Petabox
@@ -208,6 +211,7 @@
  • Seafile
  • Seagate Lyve Cloud
  • SeaweedFS
+
  • Selectel
  • SFTP
  • Sia
  • SMB / CIFS
@@ -508,6 +512,7 @@ go build
  • Chunker - transparently splits large files for other remotes
  • Citrix ShareFile
  • Compress
+
  • Cloudinary
  • Combine
  • Crypt - to encrypt other remotes
  • DigitalOcean Spaces
@@ -525,6 +530,7 @@ go build
  • Hetzner Storage Box
  • HiDrive
  • HTTP
+
  • iCloud Drive
  • Internet Archive
  • Jottacloud
  • Koofr
@@ -650,6 +656,7 @@ destpath/sourcepath/two.txt
  -I, --ignore-times              Don't skip items that match size and time - transfer all unconditionally
      --immutable                 Do not modify files, fail if existing files have been modified
      --inplace                   Download directly to destination file instead of atomic download to temp/rename
+ -l, --links                     Translate symlinks to/from regular files with a '.rclonelink' extension
      --max-backlog int           Maximum number of objects in sync or check backlog (default 10000)
      --max-duration Duration     Maximum duration rclone will transfer data for (default 0s)
      --max-transfer SizeSuffix   Maximum size of data to transfer (default off)
@@ -776,6 +783,7 @@ destpath/sourcepath/two.txt
  -I, --ignore-times              Don't skip items that match size and time - transfer all unconditionally
      --immutable                 Do not modify files, fail if existing files have been modified
      --inplace                   Download directly to destination file instead of atomic download to temp/rename
+ -l, --links                     Translate symlinks to/from regular files with a '.rclonelink' extension
      --max-backlog int           Maximum number of objects in sync or check backlog (default 10000)
      --max-duration Duration     Maximum duration rclone will transfer data for (default 0s)
      --max-transfer SizeSuffix   Maximum size of data to transfer (default off)
@@ -880,6 +888,7 @@ destpath/sourcepath/two.txt
  -I, --ignore-times              Don't skip items that match size and time - transfer all unconditionally
      --immutable                 Do not modify files, fail if existing files have been modified
      --inplace                   Download directly to destination file instead of atomic download to temp/rename
+ -l, --links                     Translate symlinks to/from regular files with a '.rclonelink' extension
      --max-backlog int           Maximum number of objects in sync or check backlog (default 10000)
      --max-duration Duration     Maximum duration rclone will transfer data for (default 0s)
      --max-transfer SizeSuffix   Maximum size of data to transfer (default off)
@@ -1708,6 +1717,7 @@ rclone backend help <backendname>
  -I, --ignore-times              Don't skip items that match size and time - transfer all unconditionally
      --immutable                 Do not modify files, fail if existing files have been modified
      --inplace                   Download directly to destination file instead of atomic download to temp/rename
+ -l, --links                     Translate symlinks to/from regular files with a '.rclonelink' extension
      --max-backlog int           Maximum number of objects in sync or check backlog (default 10000)
      --max-duration Duration     Maximum duration rclone will transfer data for (default 0s)
      --max-transfer SizeSuffix   Maximum size of data to transfer (default off)
@@ -2363,6 +2373,7 @@ if src is directory
  -I, --ignore-times              Don't skip items that match size and time - transfer all unconditionally
      --immutable                 Do not modify files, fail if existing files have been modified
      --inplace                   Download directly to destination file instead of atomic download to temp/rename
+ -l, --links                     Translate symlinks to/from regular files with a '.rclonelink' extension
      --max-backlog int           Maximum number of objects in sync or check backlog (default 10000)
      --max-duration Duration     Maximum duration rclone will transfer data for (default 0s)
      --max-transfer SizeSuffix   Maximum size of data to transfer (default off)
@@ -2964,7 +2975,9 @@ rclone mount remote:path/to/files \\cloud\remote

    When running in background mode the user will have to stop the mount manually:

    # Linux
     fusermount -u /path/to/local/mount
    -# OS X
    +#... or on some systems
    +fusermount3 -u /path/to/local/mount
    +# OS X or Linux when using nfsmount
     umount /path/to/local/mount

    The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.

    The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support the about feature at all, then 1 PiB is set as both the total and the free size.

    @@ -3048,7 +3061,7 @@ sudo ln -s /opt/local/lib/libfuse.2.dylib

    Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

    systemd

    When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.

    -

    Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount is present on this PATH.

    +

    Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount or fusermount3 program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount/fusermount3 is present on this PATH.
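    For example, a mount could be run from a unit along these lines. This is a minimal sketch, not taken from this manual: the remote name, paths and unit layout are hypothetical and should be adapted.

    [Unit]
    Description=rclone mount of remote: (hypothetical example)
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=notify
    # Absolute --config and --cache-dir paths, as recommended above.
    ExecStart=/usr/bin/rclone mount remote: /mnt/rclone --config /etc/rclone/rclone.conf --cache-dir /var/cache/rclone
    ExecStop=/bin/fusermount -u /mnt/rclone

    [Install]
    WantedBy=multi-user.target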

    Rclone as Unix mount helper

    The core Unix program /bin/mount normally takes the -t FSTYPE argument then runs the /sbin/mount.FSTYPE helper program passing it mount options as -o key=val,... or --opt=.... Automount (classic or systemd) behaves in a similar way.

    rclone by default expects GNU-style flags --key val. To run it as a mount helper you should symlink the rclone binary to /sbin/mount.rclone and optionally /usr/bin/rclonefs, e.g. ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and translate command-line arguments appropriately.
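    For example, something along these lines should work; a sketch, assuming a configured remote named remote: and an existing mount point /mnt/data (note that option names are written with underscores in this mode):

    # symlink rclone so /bin/mount can find it as a helper (hypothetical paths)
    sudo ln -s /usr/bin/rclone /sbin/mount.rclone
    mount -t rclone remote:path/to/files /mnt/data -o vfs_cache_mode=writes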

@@ -3197,6 +3210,22 @@ WantedBy=multi-user.target
      --vfs-write-wait duration   Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)
    + +

    By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:

    +
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +--vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
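    For illustration, a local symlink might round-trip through a remote like this; a sketch, assuming a configured remote named remote: and --links enabled:

    # create a symlink next to the file it points at
    ln -s file.txt link-to-file.txt
    rclone copy --links . remote:backup
    rclone lsf remote:backup
    # expected listing: file.txt and link-to-file.txt.rclonelink
    rclone copy --links remote:backup ./restored
    # ./restored/link-to-file.txt should be a symlink again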

    +

    Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.

    +

    This scheme is compatible with that used by the local backend with the --local-links flag.

    +

    The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.

    +

    It hasn't been tested with the other rclone serve commands yet.

    +

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:

    +
    .
    +├── dir
    +│   └── file.txt
    +└── linked-dir -> dir
    +

    The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

    +

    Note that there is an outstanding issue with symlink support (issue #8245): duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

@@ -3233,6 +3262,7 @@ WantedBy=multi-user.target
      --fuse-flag stringArray             Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
      --gid uint32                        Override the gid field set by the filesystem (not supported on Windows) (default 1000)
  -h, --help                              help for mount
+     --link-perms FileMode               Link permissions (default 666)
      --max-read-ahead SizeSuffix         The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
      --mount-case-insensitive Tristate   Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
      --network-mode                      Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -3255,6 +3285,7 @@ WantedBy=multi-user.target
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
+     --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -3330,6 +3361,7 @@ if src is directory
  -I, --ignore-times              Don't skip items that match size and time - transfer all unconditionally
      --immutable                 Do not modify files, fail if existing files have been modified
      --inplace                   Download directly to destination file instead of atomic download to temp/rename
+ -l, --links                     Translate symlinks to/from regular files with a '.rclonelink' extension
      --max-backlog int           Maximum number of objects in sync or check backlog (default 10000)
      --max-duration Duration     Maximum duration rclone will transfer data for (default 0s)
      --max-transfer SizeSuffix   Maximum size of data to transfer (default off)
@@ -3482,7 +3514,9 @@ rclone nfsmount remote:path/to/files \\cloud\remote

    When running in background mode the user will have to stop the mount manually:

    # Linux
     fusermount -u /path/to/local/mount
    -# OS X
    +#... or on some systems
    +fusermount3 -u /path/to/local/mount
    +# OS X or Linux when using nfsmount
     umount /path/to/local/mount

    The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.

    The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support the about feature at all, then 1 PiB is set as both the total and the free size.

    @@ -3566,7 +3600,7 @@ sudo ln -s /opt/local/lib/libfuse.2.dylib

    Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

    systemd

    When running rclone nfsmount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone nfsmount service specified as a requirement will see all files and folders immediately in this mode.

    -

    Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount is present on this PATH.

    +

    Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount or fusermount3 program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount/fusermount3 is present on this PATH.

    Rclone as Unix mount helper

    The core Unix program /bin/mount normally takes the -t FSTYPE argument then runs the /sbin/mount.FSTYPE helper program passing it mount options as -o key=val,... or --opt=.... Automount (classic or systemd) behaves in a similar way.

    rclone by default expects GNU-style flags --key val. To run it as a mount helper you should symlink the rclone binary to /sbin/mount.rclone and optionally /usr/bin/rclonefs, e.g. ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and translate command-line arguments appropriately.

@@ -3715,6 +3749,22 @@ WantedBy=multi-user.target
      --vfs-write-wait duration   Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)
    + +

    By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:

    +
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +--vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.

    +

    Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.

    +

    This scheme is compatible with that used by the local backend with the --local-links flag.

    +

    The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.

    +

    It hasn't been tested with the other rclone serve commands yet.

    +

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:

    +
    .
    +├── dir
    +│   └── file.txt
    +└── linked-dir -> dir
    +

    The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

    +

    Note that there is an outstanding issue with symlink support (issue #8245): duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

@@ -3752,6 +3802,7 @@ WantedBy=multi-user.target
      --fuse-flag stringArray             Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
      --gid uint32                        Override the gid field set by the filesystem (not supported on Windows) (default 1000)
  -h, --help                              help for nfsmount
+     --link-perms FileMode               Link permissions (default 666)
      --max-read-ahead SizeSuffix         The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
      --mount-case-insensitive Tristate   Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
      --network-mode                      Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -3778,6 +3829,7 @@ WantedBy=multi-user.target
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
+     --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -3912,17 +3964,17 @@ ffmpeg - | rclone rcat remote:path/to/file

    Server options

    Use --rc-addr to specify which IP address and port the server should listen on, eg --rc-addr 1.2.3.4:8000 or --rc-addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    If you set --rc-addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    -

    You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.

    +

    You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name.
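    For example; a sketch, with an arbitrary socket path:

    # serve the API on a unix socket (path chosen for illustration)
    rclone rcd --rc-addr unix:///run/user/1000/rclone.sock
    # query it with curl; the hostname is ignored for unix sockets
    curl -X POST --unix-socket /run/user/1000/rclone.sock http://localhost/rc/noop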

    --rc-addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.

    --rc-server-read-timeout and --rc-server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    --rc-max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    --rc-baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --rc-baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --rc-baseurl, so --rc-baseurl "rclone", --rc-baseurl "/rclone" and --rc-baseurl "/rclone/" are all treated identically.
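    For example; a sketch, using the default localhost:5572 address:

    rclone rcd --rc-baseurl /rclone
    # the API is now reachable under the /rclone/ prefix
    curl -X POST http://localhost:5572/rclone/rc/noop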

    TLS (SSL)

    By default this will serve over http. If you want you can serve over https. You will need to supply the --rc-cert and --rc-key flags. If you wish to do client side certificate validation then you will need to supply --rc-client-ca also.

    -

    --rc-cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --krc-ey should be the PEM encoded private key and --rc-client-ca should be the PEM encoded client certificate authority certificate.

    +

    --rc-cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --rc-key must be set to the path of a file with the PEM encoded private key. If setting --rc-client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.
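    For testing, a throwaway self-signed certificate could be generated and used along these lines; a sketch, with arbitrary file names:

    # self-signed certificate for local testing only
    openssl req -x509 -newkey rsa:4096 -nodes -days 365 -subj "/CN=localhost" -keyout key.pem -out cert.pem
    rclone rcd --rc-cert cert.pem --rc-key key.pem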

    --rc-min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").

    Socket activation

    -

    Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --rc-addr`).

    +

    Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --rc-addr).

    This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

    Socket activation can be tested ad-hoc with the systemd-socket-activate command

       systemd-socket-activate -l 8000 -- rclone serve
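    The same can be made permanent with a pair of systemd units; a minimal sketch, with hypothetical unit names (the .socket and .service files must share a base name):

    # rclone-rc.socket
    [Socket]
    ListenStream=5572

    [Install]
    WantedBy=sockets.target

    # rclone-rc.service
    [Service]
    ExecStart=/usr/bin/rclone rcd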
    @@ -4056,7 +4108,7 @@ htpasswd -B htpasswd anotherUser

    RC Options

    Flags to control the Remote Control API

          --rc                                 Enable the remote control server
    -      --rc-addr stringArray                IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
    +      --rc-addr stringArray                IPaddress:Port or :Port to bind server to (default localhost:5572)
           --rc-allow-origin string             Origin which cross-domain request (CORS) can be executed from
           --rc-baseurl string                  Prefix for URLs - leave blank for root
           --rc-cert string                     TLS PEM key (concatenation of certificate and CA certificate)
    @@ -4281,6 +4333,22 @@ htpasswd -B htpasswd anotherUser
    --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)
    + +

    By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:

    +
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +--vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.

    +

    Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.

    +

    This scheme is compatible with that used by the local backend with the --local-links flag.

    +

    The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.

    +

    It hasn't been tested with the other rclone serve commands yet.

    +

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:

    +
    .
    +├── dir
    +│   └── file.txt
    +└── linked-dir -> dir
    +

    The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

    +

    Note that there is an outstanding issue with symlink support (issue #8245): duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

@@ -4307,6 +4375,7 @@ htpasswd -B htpasswd anotherUser
      --gid uint32              Override the gid field set by the filesystem (not supported on Windows) (default 1000)
  -h, --help                    help for dlna
      --interface stringArray   The interface to use for SSDP (repeat as necessary)
+     --link-perms FileMode     Link permissions (default 666)
      --log-trace               Enable trace logging of SOAP traffic
      --name string             Name of DLNA server
      --no-checksum             Don't compare checksums on up/download
@@ -4325,6 +4394,7 @@ htpasswd -B htpasswd anotherUser
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
+     --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -4486,6 +4556,22 @@ htpasswd -B htpasswd anotherUser
      --vfs-write-wait duration   Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)
    + +

    By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:

    +
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +--vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.

    +

    Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.

    +

    This scheme is compatible with that used by the local backend with the --local-links flag.

    +

    The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.

    +

    It hasn't been tested with the other rclone serve commands yet.

    +

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:

    +
    .
    +├── dir
    +│   └── file.txt
    +└── linked-dir -> dir
    +

    The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

    +

    Note that there is an outstanding issue with symlink support (issue #8245): duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

@@ -4524,6 +4610,7 @@ htpasswd -B htpasswd anotherUser
      --fuse-flag stringArray             Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
      --gid uint32                        Override the gid field set by the filesystem (not supported on Windows) (default 1000)
  -h, --help                              help for docker
+     --link-perms FileMode               Link permissions (default 666)
      --max-read-ahead SizeSuffix         The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
      --mount-case-insensitive Tristate   Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
      --network-mode                      Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
@@ -4549,6 +4636,7 @@ htpasswd -B htpasswd anotherUser
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
+     --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -4712,6 +4800,22 @@ htpasswd -B htpasswd anotherUser
      --vfs-write-wait duration   Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)
    + +

    By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:

    +
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +--vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.

    +

    Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.

    +

    This scheme is compatible with that used by the local backend with the --local-links flag.

    +

    The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.

    +

    It hasn't been tested with the other rclone serve commands yet.

    +

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:

    +
    .
    +├── dir
    +│   └── file.txt
    +└── linked-dir -> dir
    +

    The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

    +

    Note that there is an outstanding issue with symlink support (issue #8245): duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

@@ -4769,6 +4873,7 @@ htpasswd -B htpasswd anotherUser
      --gid uint32            Override the gid field set by the filesystem (not supported on Windows) (default 1000)
  -h, --help                  help for ftp
      --key string            TLS PEM Private key
+     --link-perms FileMode   Link permissions (default 666)
      --no-checksum           Don't compare checksums on up/download
      --no-modtime            Don't read/write the modification time (can speed things up)
      --no-seek               Don't allow seeking in files
@@ -4789,6 +4894,7 @@ htpasswd -B htpasswd anotherUser
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
+     --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -4837,17 +4943,17 @@ htpasswd -B htpasswd anotherUser

    Server options

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    -

    You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.

    +

    You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name.

    --addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    TLS (SSL)

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    -

    --cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    +

    --cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key must be set to the path of a file with the PEM encoded private key. If setting --client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.

    --min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").

    Socket activation

    -

    Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).

    +

    Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr).

    This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

    Socket activation can be tested ad-hoc with the systemd-socket-activate command

       systemd-socket-activate -l 8000 -- rclone serve
@@ -5087,6 +5193,22 @@ htpasswd -B htpasswd anotherUser
      --vfs-write-wait duration   Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)
    + +

    By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:

    +
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +--vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.

    +

    Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.

    +

    This scheme is compatible with that used by the local backend with the --local-links flag.

    +

    The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.

    +

    It hasn't been tested with the other rclone serve commands yet.

    +

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:

    +
    .
    +├── dir
    +│   └── file.txt
    +└── linked-dir -> dir
    +

    The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

    +

    Note that there is an outstanding issue with symlink support (issue #8245): duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

@@ -5139,15 +5261,16 @@ htpasswd -B htpasswd anotherUser
      --allow-origin string       Origin which cross-domain request (CORS) can be executed from
      --auth-proxy string         A program to use to create the backend from the auth
      --baseurl string            Prefix for URLs - leave blank for root
-     --cert string               TLS PEM key (concatenation of certificate and CA certificate)
-     --client-ca string          Client certificate authority to verify clients with
+     --cert string               Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+     --client-ca string          Path to TLS PEM CA file with certificate authorities to verify clients with
      --dir-cache-time Duration   Time to cache directory entries for (default 5m0s)
      --dir-perms FileMode        Directory permissions (default 777)
      --file-perms FileMode       File permissions (default 666)
      --gid uint32                Override the gid field set by the filesystem (not supported on Windows) (default 1000)
  -h, --help                      help for http
      --htpasswd string           A htpasswd file - if not provided no authentication is done
-     --key string                TLS PEM Private key
+     --key string                Path to TLS PEM private key file
+     --link-perms FileMode       Link permissions (default 666)
      --max-header-bytes int      Maximum size of request header (default 4096)
      --min-tls-version string    Minimum TLS version that is acceptable (default "tls1.0")
      --no-checksum               Don't compare checksums on up/download
@@ -5173,6 +5296,7 @@ htpasswd -B htpasswd anotherUser
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
+     --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5222,7 +5346,7 @@ htpasswd -B htpasswd anotherUser

    Modifying files through the NFS protocol requires VFS caching. Usually you will need to specify --vfs-cache-mode in order to be able to write to the mountpoint (full is recommended). If you don't specify VFS cache mode, the mount will be read-only.

    --nfs-cache-type controls the type of the NFS handle cache. By default this is memory where new handles will be randomly allocated when needed. These are stored in memory. If the server is restarted the handle cache will be lost and connected NFS clients will get stale handle errors.

    --nfs-cache-type disk uses an on disk NFS handle cache. Rclone hashes the path of the object and stores it in a file named after the hash. These hashes are stored on disk in the directory controlled by --cache-dir, or the exact directory may be specified with --nfs-cache-dir. Using this means that the NFS server can be restarted at will without affecting the connected clients.

    -

    --nfs-cache-type symlink is similar to --nfs-cache-type disk in that it uses an on disk cache, but the cache entries are held as symlinks. Rclone will use the handle of the underlying file as the NFS handle which improves performance. This sort of cache can't be backed up and restored as the underlying handles will change. This is Linux only.

    +

    --nfs-cache-type symlink is similar to --nfs-cache-type disk in that it uses an on disk cache, but the cache entries are held as symlinks. Rclone will use the handle of the underlying file as the NFS handle which improves performance. This sort of cache can't be backed up and restored as the underlying handles will change. This is Linux only. It requires running rclone as root or with CAP_DAC_READ_SEARCH. You can give rclone this extra permission by applying it to the binary: sudo setcap cap_dac_read_search+ep /path/to/rclone.

    --nfs-cache-handle-limit controls the maximum number of cached NFS handles stored by the caching handler. This should not be set too low or you may experience errors when trying to access files. The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems. This is only used by the memory type cache.

    To serve NFS over the network use the following command:

    rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
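    On an NFS client the export can then be mounted along these lines; a sketch, where $HOSTNAME and the mount point are placeholders:

    # mount the rclone NFS export (placeholders: $PORT, $HOSTNAME, /mnt/rclone-nfs)
    mount -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ /mnt/rclone-nfs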
@@ -5343,6 +5467,22 @@ htpasswd -B htpasswd anotherUser
      --vfs-write-wait duration   Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)
    + +

    By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:

    +
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +--vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.

    +

    Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.

    +

    This scheme is compatible with that used by the local backend with the --local-links flag.

    +

    The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.

    +

    It hasn't been tested with the other rclone serve commands yet.

    +

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:

    +
    .
    +├── dir
    +│   └── file.txt
    +└── linked-dir -> dir
    +

    The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

    +

    Note that there is an outstanding issue with symlink support (issue #8245): duplicate files can be created when symlinks are moved into directories where there is a file of the same name (or vice versa).

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

@@ -5367,6 +5507,7 @@ htpasswd -B htpasswd anotherUser
      --file-perms FileMode                  File permissions (default 666)
      --gid uint32                           Override the gid field set by the filesystem (not supported on Windows) (default 1000)
  -h, --help                                 help for nfs
+     --link-perms FileMode                  Link permissions (default 666)
      --nfs-cache-dir string                 The directory the NFS handle cache will use if set
      --nfs-cache-handle-limit int           max file handles cached simultaneously (min 5) (default 1000000)
      --nfs-cache-type memory|disk|symlink   Type of NFS handle cache to use (default memory)
@@ -5386,6 +5527,7 @@ htpasswd -B htpasswd anotherUser
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
+     --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5470,17 +5612,17 @@ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/

    Server options

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    -

    You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.

    +

    You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name.

    --addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    TLS (SSL)

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    -

    --cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    +

    --cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key must be set to the path of a file with the PEM encoded private key. If setting --client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.

    --min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").

    Socket activation

    -

    Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).

    +

    Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr).

    This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

    Socket activation can be tested ad-hoc with the systemd-socket-activate command

       systemd-socket-activate -l 8000 -- rclone serve
@@ -5503,11 +5645,11 @@ htpasswd -B htpasswd anotherUser
      --append-only              Disallow deletion of repository data
      --baseurl string           Prefix for URLs - leave blank for root
      --cache-objects            Cache listed objects (default true)
-     --cert string              TLS PEM key (concatenation of certificate and CA certificate)
-     --client-ca string         Client certificate authority to verify clients with
+     --cert string              Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+     --client-ca string         Path to TLS PEM CA file with certificate authorities to verify clients with
  -h, --help                     help for restic
      --htpasswd string          A htpasswd file - if not provided no authentication is done
-     --key string               TLS PEM Private key
+     --key string               Path to TLS PEM private key file
      --max-header-bytes int     Maximum size of request header (default 4096)
      --min-tls-version string   Minimum TLS version that is acceptable (default "tls1.0")
      --pass string              Password for authentication
@@ -5602,17 +5744,17 @@ htpasswd -B htpasswd anotherUser

    Server options

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    -

    You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.

    +

    You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name.

    --addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    TLS (SSL)

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    -

    --cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    +

    --cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key must be set to the path of a file with the PEM encoded private key. If setting --client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.

    --min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").

    Socket activation

    -

    Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).

    +

    Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr).

    This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

Socket activation can be tested ad-hoc with the systemd-socket-activate command

       systemd-socket-activate -l 8000 -- rclone serve
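A minimal sketch of matching unit files, with illustrative names and a placeholder serve command, might look like this:

   # rclone-serve.socket
   [Socket]
   ListenStream=8000

   [Install]
   WantedBy=sockets.target

   # rclone-serve.service
   [Service]
   ExecStart=/usr/bin/rclone serve webdav remote:path

systemd then starts the service on the first connection and passes the listening socket to rclone.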
    @@ -5729,6 +5871,22 @@ htpasswd -B htpasswd anotherUser --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)
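For instance, a hypothetical mount that raises the upload parallelism from the cache:

   rclone mount remote:path /mnt/rclone --vfs-cache-mode writes --transfers 8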
    + +

By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:

    +
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +--vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.

    +

    Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.

    +

    This scheme is compatible with that used by the local backend with the --local-links flag.

    +

    The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.

    +

    It hasn't been tested with the other rclone serve commands yet.
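For example, to enable symlink translation just for a mount (the mount point is illustrative):

   rclone mount --vfs-links remote:path /mnt/rclone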

    +

A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:

    +
    .
    +├── dir
    +│   └── file.txt
    +└── linked-dir -> dir
    +

    The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

    +

Note that there is an outstanding issue with symlink support (issue #8245) where duplicate files are created when symlinks are moved into directories containing a file of the same name (or vice versa).

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

    @@ -5752,8 +5910,8 @@ htpasswd -B htpasswd anotherUser --auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key --auth-proxy string A program to use to create the backend from the auth --baseurl string Prefix for URLs - leave blank for root - --cert string TLS PEM key (concatenation of certificate and CA certificate) - --client-ca string Client certificate authority to verify clients with + --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates) + --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with --dir-cache-time Duration Time to cache directory entries for (default 5m0s) --dir-perms FileMode Directory permissions (default 777) --etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5") @@ -5762,7 +5920,8 @@ htpasswd -B htpasswd anotherUser --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) -h, --help help for s3 --htpasswd string A htpasswd file - if not provided no authentication is done - --key string TLS PEM Private key + --key string Path to TLS PEM private key file + --link-perms FileMode Link permissions (default 666) --max-header-bytes int Maximum size of request header (default 4096) --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0") --no-checksum Don't compare checksums on up/download @@ -5788,6 +5947,7 @@ htpasswd -B htpasswd anotherUser --vfs-case-insensitive If a file name not found, find a case insensitive match --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection + --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -5960,6 +6120,22 @@ htpasswd -B htpasswd anotherUser --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)
    + +

By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:

    +
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +--vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.

    +

    Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.

    +

    This scheme is compatible with that used by the local backend with the --local-links flag.

    +

    The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.

    +

    It hasn't been tested with the other rclone serve commands yet.

    +

A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:

    +
    .
    +├── dir
    +│   └── file.txt
    +└── linked-dir -> dir
    +

    The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

    +

Note that there is an outstanding issue with symlink support (issue #8245) where duplicate files are created when symlinks are moved into directories containing a file of the same name (or vice versa).

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

    @@ -6017,6 +6193,7 @@ htpasswd -B htpasswd anotherUser --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) -h, --help help for sftp --key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate) + --link-perms FileMode Link permissions (default 666) --no-auth Allow connections with no authentication if set --no-checksum Don't compare checksums on up/download --no-modtime Don't read/write the modification time (can speed things up) @@ -6037,6 +6214,7 @@ htpasswd -B htpasswd anotherUser --vfs-case-insensitive If a file name not found, find a case insensitive match --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection + --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -6097,17 +6275,17 @@ htpasswd -B htpasswd anotherUser

    Server options

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    -

    You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.

    +

    You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name.

    --addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    TLS (SSL)

By default this will serve over http. If you want, you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client-side certificate validation then you will need to supply --client-ca also.

    -

    --cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    +

    --cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key must be set to the path of a file with the PEM encoded private key. If setting --client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.

--min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").

    Socket activation

    -

    Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr`).

    +

    Instead of the listening addresses specified above, rclone will listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr).

    This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

Socket activation can be tested ad-hoc with the systemd-socket-activate command

       systemd-socket-activate -l 8000 -- rclone serve
    @@ -6347,6 +6525,22 @@ htpasswd -B htpasswd anotherUser --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)
    + +

By default the VFS does not support symlinks. However, this may be enabled with either of the following flags:

    +
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +--vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.

    +

    Note that --links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.

    +

    This scheme is compatible with that used by the local backend with the --local-links flag.

    +

    The --vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.

    +

    It hasn't been tested with the other rclone serve commands yet.

    +

A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example, given this directory tree:

    +
    .
    +├── dir
    +│   └── file.txt
    +└── linked-dir -> dir
    +

    The VFS will correctly resolve linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

    +

Note that there is an outstanding issue with symlink support (issue #8245) where duplicate files are created when symlinks are moved into directories containing a file of the same name (or vice versa).

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

    @@ -6399,8 +6593,8 @@ htpasswd -B htpasswd anotherUser --allow-origin string Origin which cross-domain request (CORS) can be executed from --auth-proxy string A program to use to create the backend from the auth --baseurl string Prefix for URLs - leave blank for root - --cert string TLS PEM key (concatenation of certificate and CA certificate) - --client-ca string Client certificate authority to verify clients with + --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates) + --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with --dir-cache-time Duration Time to cache directory entries for (default 5m0s) --dir-perms FileMode Directory permissions (default 777) --disable-dir-list Disable HTML directory list on GET request for a directory @@ -6409,7 +6603,8 @@ htpasswd -B htpasswd anotherUser --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) -h, --help help for webdav --htpasswd string A htpasswd file - if not provided no authentication is done - --key string TLS PEM Private key + --key string Path to TLS PEM private key file + --link-perms FileMode Link permissions (default 666) --max-header-bytes int Maximum size of request header (default 4096) --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0") --no-checksum Don't compare checksums on up/download @@ -6435,6 +6630,7 @@ htpasswd -B htpasswd anotherUser --vfs-case-insensitive If a file name not found, find a case insensitive match --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection + --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -6583,6 +6779,7 @@ htpasswd -B htpasswd anotherUser --chargen Fill files with a ASCII chargen pattern --files int Number of files to create (default 1000) --files-per-directory int Average number of files per directory (default 10) + --flat If set create all files in the root directory -h, --help help for makefiles --max-depth int Maximum depth of directory hierarchy (default 10) --max-file-size SizeSuffix Maximum size of files to create (default 100) @@ -7257,6 +7454,11 @@ y/n/s/!/q> n

    --leave-root

    During rmdirs it will not remove root directory, even if it's empty.

+

--links / -l

    Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).

    +

If you supply this flag then rclone will copy symbolic links from any supported backend, and store them as text files, with a .rclonelink suffix in the destination.

    +

    The text file will contain the target of the symbolic link.

    +

The --links / -l flag enables this feature for all supported backends and the VFS. There are individual flags for enabling it just for the VFS (--vfs-links) and the local backend (--local-links) if required.
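A quick illustration of the round trip, with hypothetical names (the listed size is just the length of the symlink target path):

   $ mkdir demo && ln -s /tmp/file.txt demo/link-to-file.txt
   $ rclone copy -l demo remote:dir
   $ rclone ls remote:dir
          13 link-to-file.txt.rclonelink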

    --log-file=FILE

    Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.

    If FILE exists then rclone will append to it.
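For example (the log path and remotes are placeholders):

   rclone sync -v --log-file=/path/to/rclone.log source:path dest:path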

    @@ -7328,55 +7530,55 @@ y/n/s/!/q> n
  • ID is the source ID of the object if known.
  • Metadata is the backend specific metadata as described in the backend docs.
  • -
    {
    -    "SrcFs": "gdrive:",
    -    "SrcFsType": "drive",
    -    "DstFs": "newdrive:user",
    -    "DstFsType": "onedrive",
    -    "Remote": "test.txt",
    -    "Size": 6,
    -    "MimeType": "text/plain; charset=utf-8",
    -    "ModTime": "2022-10-11T17:53:10.286745272+01:00",
    -    "IsDir": false,
    -    "ID": "xyz",
    -    "Metadata": {
    -        "btime": "2022-10-11T16:53:11Z",
    -        "content-type": "text/plain; charset=utf-8",
    -        "mtime": "2022-10-11T17:53:10.286745272+01:00",
    -        "owner": "user1@domain1.com",
    -        "permissions": "...",
    -        "description": "my nice file",
    -        "starred": "false"
    -    }
    -}
    +
    {
    +    "SrcFs": "gdrive:",
    +    "SrcFsType": "drive",
    +    "DstFs": "newdrive:user",
    +    "DstFsType": "onedrive",
    +    "Remote": "test.txt",
    +    "Size": 6,
    +    "MimeType": "text/plain; charset=utf-8",
    +    "ModTime": "2022-10-11T17:53:10.286745272+01:00",
    +    "IsDir": false,
    +    "ID": "xyz",
    +    "Metadata": {
    +        "btime": "2022-10-11T16:53:11Z",
    +        "content-type": "text/plain; charset=utf-8",
    +        "mtime": "2022-10-11T17:53:10.286745272+01:00",
    +        "owner": "user1@domain1.com",
    +        "permissions": "...",
    +        "description": "my nice file",
    +        "starred": "false"
    +    }
    +}

    The program should then modify the input as desired and send it to STDOUT. The returned Metadata field will be used in its entirety for the destination object. Any other fields will be ignored. Note in this example we translate user names and permissions and add something to the description:

    -
    {
    -    "Metadata": {
    -        "btime": "2022-10-11T16:53:11Z",
    -        "content-type": "text/plain; charset=utf-8",
    -        "mtime": "2022-10-11T17:53:10.286745272+01:00",
    -        "owner": "user1@domain2.com",
    -        "permissions": "...",
    -        "description": "my nice file [migrated from domain1]",
    -        "starred": "false"
    -    }
    -}
    +
    {
    +    "Metadata": {
    +        "btime": "2022-10-11T16:53:11Z",
    +        "content-type": "text/plain; charset=utf-8",
    +        "mtime": "2022-10-11T17:53:10.286745272+01:00",
    +        "owner": "user1@domain2.com",
    +        "permissions": "...",
    +        "description": "my nice file [migrated from domain1]",
    +        "starred": "false"
    +    }
    +}

    Metadata can be removed here too.

An example Python program implementing the above transformations might look something like this.

    -
    import sys, json
    -
    -i = json.load(sys.stdin)
    -metadata = i["Metadata"]
    -# Add tag to description
    -if "description" in metadata:
    -    metadata["description"] += " [migrated from domain1]"
    -else:
    -    metadata["description"] = "[migrated from domain1]"
    -# Modify owner
    -if "owner" in metadata:
    -    metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
    -o = { "Metadata": metadata }
    -json.dump(o, sys.stdout, indent="\t")
    +
    import sys, json
    +
    +i = json.load(sys.stdin)
    +metadata = i["Metadata"]
    +# Add tag to description
    +if "description" in metadata:
    +    metadata["description"] += " [migrated from domain1]"
    +else:
    +    metadata["description"] = "[migrated from domain1]"
    +# Modify owner
    +if "owner" in metadata:
    +    metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
    +o = { "Metadata": metadata }
    +json.dump(o, sys.stdout, indent="\t")

    You can find this example (slightly expanded) in the rclone source code at bin/test_metadata_mapper.py.
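Invoking the mapper might look like this, reusing the remotes from the sample above:

   rclone copy -M --metadata-mapper bin/test_metadata_mapper.py gdrive: newdrive:user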

    If you want to see the input to the metadata mapper and the output returned from it in the log you can use -vv --dump mapper.

    See the metadata section for more info.

    @@ -7827,9 +8029,9 @@ export RCLONE_CONFIG_PASS

    When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.

    List of exit codes

    This returns an empty result on success, or an error.

    This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.

    @@ -10164,6 +10406,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - +Cloudinary +MD5 +R +No +Yes +- +- + + Dropbox DBHASH ¹ R @@ -10172,7 +10423,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Enterprise File Fabric - R/W @@ -10181,7 +10432,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W - - + Files.com MD5, CRC32 DR/W @@ -10190,7 +10441,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R - - + FTP - R/W ¹⁰ @@ -10199,7 +10450,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Gofile MD5 DR/W @@ -10208,7 +10459,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R - - + Google Cloud Storage MD5 R/W @@ -10217,7 +10468,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W - - + Google Drive MD5, SHA1, SHA256 DR/W @@ -10226,7 +10477,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W DRWU - + Google Photos - - @@ -10235,7 +10486,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R - - + HDFS - R/W @@ -10244,7 +10495,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + HiDrive HiDrive ¹² R/W @@ -10253,7 +10504,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + HTTP - R @@ -10262,6 +10513,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R - + +iCloud Drive +- +R +No +No +- +- + Internet Archive MD5, SHA1, CRC32 @@ -10382,7 +10642,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total pCloud MD5, SHA1 ⁷ -R +R/W No No W @@ -11215,6 +11475,20 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total Yes +Cloudinary +No +No +No +No +No +No +Yes +No +No +No +No + + Enterprise File Fabric Yes Yes @@ -11228,7 +11502,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No Yes - + Files.com Yes Yes @@ -11242,7 +11516,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No Yes - + FTP No No @@ -11256,7 +11530,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No Yes - + Gofile Yes Yes @@ -11270,21 +11544,21 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total Yes Yes - + Google Cloud Storage Yes Yes No No No -Yes +No Yes No No No No - + Google Drive Yes Yes @@ -11298,7 +11572,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total Yes Yes - + Google Photos No No @@ -11312,7 +11586,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No No - + HDFS Yes No @@ -11326,7 +11600,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total Yes Yes - + HiDrive Yes Yes @@ -11340,7 +11614,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No Yes - + HTTP No No @@ -11354,6 +11628,20 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No Yes + +iCloud Drive +Yes +Yes +Yes +Yes +No +No +No +No +No +No +Yes + ImageKit Yes @@ -11505,7 +11793,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No No No -No +Yes Yes @@ -11868,6 +12156,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension --max-backlog int Maximum number of objects in 
sync or check backlog (default 10000) --max-duration Duration Maximum duration rclone will transfer data for (default 0s) --max-transfer SizeSuffix Maximum size of data to transfer (default off) @@ -11932,7 +12221,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
    + --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")

    Performance

    Flags helpful for increasing performance.

          --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer (default 16Mi)
    @@ -12033,7 +12322,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
     

    RC

    Flags to control the Remote Control API.

          --rc                                 Enable the remote control server
    -      --rc-addr stringArray                IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
    +      --rc-addr stringArray                IPaddress:Port or :Port to bind server to (default localhost:5572)
           --rc-allow-origin string             Origin which cross-domain request (CORS) can be executed from
           --rc-baseurl string                  Prefix for URLs - leave blank for root
           --rc-cert string                     TLS PEM key (concatenation of certificate and CA certificate)
    @@ -12063,7 +12352,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --rc-web-gui-update                  Check and update to latest version of web gui

    Metrics

Flags to control the Metrics HTTP endpoint.

    -
          --metrics-addr stringArray                IPaddress:Port or :Port to bind metrics server to (default [""])
    +
          --metrics-addr stringArray                IPaddress:Port or :Port to bind metrics server to
           --metrics-allow-origin string             Origin which cross-domain request (CORS) can be executed from
           --metrics-baseurl string                  Prefix for URLs - leave blank for root
           --metrics-cert string                     TLS PEM key (concatenation of certificate and CA certificate)
    @@ -12097,6 +12386,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --azureblob-description string                        Description of the remote
           --azureblob-directory-markers                         Upload an empty object with a trailing slash when a new directory is created
           --azureblob-disable-checksum                          Don't store MD5 checksum with object metadata
    +      --azureblob-disable-instance-discovery                Skip requesting Microsoft Entra instance metadata
           --azureblob-encoding Encoding                         The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
           --azureblob-endpoint string                           Endpoint for the service
           --azureblob-env-auth                                  Read credentials from runtime (environment variables, CLI or MSI)
    @@ -12114,6 +12404,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --azureblob-tenant string                             ID of the service principal's tenant. Also called its directory ID
           --azureblob-upload-concurrency int                    Concurrency for multipart uploads (default 16)
           --azureblob-upload-cutoff string                      Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
    +      --azureblob-use-az                                    Use Azure CLI tool az for authentication
           --azureblob-use-emulator                              Uses local storage emulator if provided as 'true'
           --azureblob-use-msi                                   Use a managed service identity to authenticate (only works in Azure)
           --azureblob-username string                           User name (usually an email address)
    @@ -12163,6 +12454,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --box-auth-url string                                 Auth server URL
           --box-box-config-file string                          Box App config.json location
           --box-box-sub-type string                              (default "user")
    +      --box-client-credentials                              Use client credentials OAuth flow
           --box-client-id string                                OAuth Client Id
           --box-client-secret string                            OAuth Client Secret
           --box-commit-retries int                              Max number of times to try committing a multipart file (default 100)
    @@ -12201,6 +12493,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --chunker-fail-hard                                   Choose how chunker should handle files with missing or invalid chunks
           --chunker-hash-type string                            Choose how chunker handles hash sums (default "md5")
           --chunker-remote string                               Remote to chunk/unchunk
    +      --cloudinary-api-key string                           Cloudinary API Key
    +      --cloudinary-api-secret string                        Cloudinary API Secret
    +      --cloudinary-cloud-name string                        Cloudinary Environment Name
    +      --cloudinary-description string                       Description of the remote
    +      --cloudinary-encoding Encoding                        The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
    +      --cloudinary-eventually-consistent-delay Duration     Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
    +      --cloudinary-upload-prefix string                     Specify the API endpoint for environments out of the US
    +      --cloudinary-upload-preset string                     Upload Preset to select asset manipulation on upload
           --combine-description string                          Description of the remote
           --combine-upstreams SpaceSepList                      Upstreams for combining
           --compress-description string                         Description of the remote
    @@ -12227,6 +12527,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --drive-auth-owner-only                               Only consider files owned by the authenticated user
           --drive-auth-url string                               Auth server URL
           --drive-chunk-size SizeSuffix                         Upload chunk size (default 8Mi)
    +      --drive-client-credentials                            Use client credentials OAuth flow
           --drive-client-id string                              Google Application Client Id
           --drive-client-secret string                          OAuth Client Secret
           --drive-copy-shortcut-content                         Server side copy contents of shortcuts instead of the shortcut
    @@ -12277,6 +12578,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --dropbox-batch-size int                              Max number of files in upload batch
           --dropbox-batch-timeout Duration                      Max time to allow an idle upload batch before uploading (default 0s)
           --dropbox-chunk-size SizeSuffix                       Upload chunk size (< 150Mi) (default 48Mi)
    +      --dropbox-client-credentials                          Use client credentials OAuth flow
           --dropbox-client-id string                            OAuth Client Id
           --dropbox-client-secret string                        OAuth Client Secret
           --dropbox-description string                          Description of the remote
    @@ -12323,6 +12625,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --ftp-host string                                     FTP host to connect to
           --ftp-idle-timeout Duration                           Max time before closing idle connections (default 1m0s)
           --ftp-no-check-certificate                            Do not verify the TLS certificate of the server
    +      --ftp-no-check-upload                                 Don't check the upload is OK
           --ftp-pass string                                     FTP password (obscured)
           --ftp-port int                                        FTP port number (default 21)
           --ftp-shut-timeout Duration                           Maximum time to wait for data connection closing status (default 1m0s)
    @@ -12331,10 +12634,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --ftp-tls-cache-size int                              Size of TLS session cache for all control and data connections (default 32)
           --ftp-user string                                     FTP username (default "$USER")
           --ftp-writing-mdtm                                    Use MDTM to set modification time (VsFtpd quirk)
    +      --gcs-access-token string                             Short-lived access token
           --gcs-anonymous                                       Access public buckets and objects without credentials
           --gcs-auth-url string                                 Auth server URL
           --gcs-bucket-acl string                               Access Control List for new buckets
           --gcs-bucket-policy-only                              Access checks should use bucket-level IAM policies
    +      --gcs-client-credentials                              Use client credentials OAuth flow
           --gcs-client-id string                                OAuth Client Id
           --gcs-client-secret string                            OAuth Client Secret
           --gcs-decompress                                      If set this will decompress gzip encoded objects
    @@ -12363,11 +12668,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --gphotos-batch-mode string                           Upload file batching sync|async|off (default "sync")
           --gphotos-batch-size int                              Max number of files in upload batch
           --gphotos-batch-timeout Duration                      Max time to allow an idle upload batch before uploading (default 0s)
    +      --gphotos-client-credentials                          Use client credentials OAuth flow
           --gphotos-client-id string                            OAuth Client Id
           --gphotos-client-secret string                        OAuth Client Secret
           --gphotos-description string                          Description of the remote
           --gphotos-encoding Encoding                           The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
           --gphotos-include-archived                            Also view and download archived media
    +      --gphotos-proxy string                                Use the gphotosdl proxy for downloading the full resolution images
           --gphotos-read-only                                   Set to make the Google Photos backend read only
           --gphotos-read-size                                   Set to read the size of media items
           --gphotos-start-year int                              Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
    @@ -12386,6 +12693,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --hdfs-username string                                Hadoop user name
           --hidrive-auth-url string                             Auth server URL
           --hidrive-chunk-size SizeSuffix                       Chunksize for chunked uploads (default 48Mi)
    +      --hidrive-client-credentials                          Use client credentials OAuth flow
           --hidrive-client-id string                            OAuth Client Id
           --hidrive-client-secret string                        OAuth Client Secret
           --hidrive-description string                          Description of the remote
    @@ -12405,6 +12713,11 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --http-no-head                                        Don't use HEAD requests
           --http-no-slash                                       Set this if the site doesn't end directories with /
           --http-url string                                     URL of HTTP host to connect to
    +      --iclouddrive-apple-id string                         Apple ID
    +      --iclouddrive-client-id string                        Client id (default "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d")
    +      --iclouddrive-description string                      Description of the remote
    +      --iclouddrive-encoding Encoding                       The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
    +      --iclouddrive-password string                         Password (obscured)
           --imagekit-description string                         Description of the remote
           --imagekit-encoding Encoding                          The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
           --imagekit-endpoint string                            You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
    @@ -12422,6 +12735,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --internetarchive-secret-access-key string            IAS3 Secret Key (password)
           --internetarchive-wait-archive Duration               Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
           --jottacloud-auth-url string                          Auth server URL
    +      --jottacloud-client-credentials                       Use client credentials OAuth flow
           --jottacloud-client-id string                         OAuth Client Id
           --jottacloud-client-secret string                     OAuth Client Secret
           --jottacloud-description string                       Description of the remote
    @@ -12443,11 +12757,11 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --koofr-user string                                   Your user name
           --linkbox-description string                          Description of the remote
           --linkbox-token string                                Token from https://www.linkbox.to/admin/account
    -  -l, --links                                               Translate symlinks to/from regular files with a '.rclonelink' extension
           --local-case-insensitive                              Force the filesystem to report itself as case insensitive
           --local-case-sensitive                                Force the filesystem to report itself as case sensitive
           --local-description string                            Description of the remote
           --local-encoding Encoding                             The encoding for the backend (default Slash,Dot)
    +      --local-links                                         Translate symlinks to/from regular files with a '.rclonelink' extension for the local backend
           --local-no-check-updated                              Don't check to see if the files change during upload
           --local-no-clone                                      Disable reflink cloning for server-side copies
           --local-no-preallocate                                Disable preallocation of disk space for transferred files
    @@ -12459,6 +12773,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --local-zero-size-links                               Assume the Stat size of links is zero (and read them instead) (deprecated)
           --mailru-auth-url string                              Auth server URL
           --mailru-check-hash                                   What should copy do if file checksum is mismatched or invalid (default true)
    +      --mailru-client-credentials                           Use client credentials OAuth flow
           --mailru-client-id string                             OAuth Client Id
           --mailru-client-secret string                         OAuth Client Secret
           --mailru-description string                           Description of the remote
    @@ -12489,6 +12804,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --onedrive-auth-url string                            Auth server URL
           --onedrive-av-override                                Allows download of files the server thinks has a virus
           --onedrive-chunk-size SizeSuffix                      Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
    +      --onedrive-client-credentials                         Use client credentials OAuth flow
           --onedrive-client-id string                           OAuth Client Id
           --onedrive-client-secret string                       OAuth Client Secret
           --onedrive-delta                                      If set rclone will use delta listing to implement recursive listings
    @@ -12508,11 +12824,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --onedrive-region string                              Choose national cloud region for OneDrive (default "global")
           --onedrive-root-folder-id string                      ID of the root folder
           --onedrive-server-side-across-configs                 Deprecated: use --server-side-across-configs instead
    +      --onedrive-tenant string                              ID of the service principal's tenant. Also called its directory ID
           --onedrive-token string                               OAuth Access Token as a JSON blob
           --onedrive-token-url string                           Token server url
           --oos-attempt-resume-upload                           If true attempt to resume previously started multipart upload for the object
           --oos-chunk-size SizeSuffix                           Chunk size to use for uploading (default 5Mi)
    -      --oos-compartment string                              Object storage compartment OCID
    +      --oos-compartment string                              Specify compartment OCID, if you need to list buckets
           --oos-config-file string                              Path to OCI config file (default "~/.oci/config")
           --oos-config-profile string                           Profile name inside the oci config file (default "Default")
           --oos-copy-cutoff SizeSuffix                          Cutoff for switching to multipart copy (default 4.656Gi)
    @@ -12541,6 +12858,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --opendrive-password string                           Password (obscured)
           --opendrive-username string                           Username
           --pcloud-auth-url string                              Auth server URL
    +      --pcloud-client-credentials                           Use client credentials OAuth flow
           --pcloud-client-id string                             OAuth Client Id
           --pcloud-client-secret string                         OAuth Client Secret
           --pcloud-description string                           Description of the remote
    @@ -12551,26 +12869,25 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --pcloud-token string                                 OAuth Access Token as a JSON blob
           --pcloud-token-url string                             Token server url
           --pcloud-username string                              Your pcloud username
    -      --pikpak-auth-url string                              Auth server URL
           --pikpak-chunk-size SizeSuffix                        Chunk size for multipart uploads (default 5Mi)
    -      --pikpak-client-id string                             OAuth Client Id
    -      --pikpak-client-secret string                         OAuth Client Secret
           --pikpak-description string                           Description of the remote
    +      --pikpak-device-id string                             Device ID used for authorization
           --pikpak-encoding Encoding                            The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
           --pikpak-hash-memory-limit SizeSuffix                 Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
    +      --pikpak-no-media-link                                Use original file links instead of media links
           --pikpak-pass string                                  Pikpak password (obscured)
           --pikpak-root-folder-id string                        ID of the root folder
    -      --pikpak-token string                                 OAuth Access Token as a JSON blob
    -      --pikpak-token-url string                             Token server url
           --pikpak-trashed-only                                 Only show files that are in the trash
           --pikpak-upload-concurrency int                       Concurrency for multipart uploads (default 5)
           --pikpak-use-trash                                    Send files to the trash instead of deleting permanently (default true)
           --pikpak-user string                                  Pikpak username
    +      --pikpak-user-agent string                            HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
           --pixeldrain-api-key string                           API key for your pixeldrain account
           --pixeldrain-api-url string                           The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
           --pixeldrain-description string                       Description of the remote
           --pixeldrain-root-folder-id string                    Root of the filesystem to use (default "me")
           --premiumizeme-auth-url string                        Auth server URL
    +      --premiumizeme-client-credentials                     Use client credentials OAuth flow
           --premiumizeme-client-id string                       OAuth Client Id
           --premiumizeme-client-secret string                   OAuth Client Secret
           --premiumizeme-description string                     Description of the remote
    @@ -12588,6 +12905,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --protondrive-replace-existing-draft                  Create a new revision when filename conflict is detected
           --protondrive-username string                         The username of your proton account
           --putio-auth-url string                               Auth server URL
    +      --putio-client-credentials                            Use client credentials OAuth flow
           --putio-client-id string                              OAuth Client Id
           --putio-client-secret string                          OAuth Client Secret
           --putio-description string                            Description of the remote
    @@ -12621,6 +12939,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --s3-copy-cutoff SizeSuffix                           Cutoff for switching to multipart copy (default 4.656Gi)
           --s3-decompress                                       If set this will decompress gzip encoded objects
           --s3-description string                               Description of the remote
    +      --s3-directory-bucket                                 Set to use AWS Directory Buckets
           --s3-directory-markers                                Upload an empty object with a trailing slash when a new directory is created
           --s3-disable-checksum                                 Don't store MD5 checksum with object metadata
           --s3-disable-http2                                    Disable usage of http2 for S3 backends
    @@ -12702,6 +13021,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --sftp-pass string                                    SSH password, leave blank to use ssh-agent (obscured)
           --sftp-path-override string                           Override path used by SSH shell commands
           --sftp-port int                                       SSH port number (default 22)
    +      --sftp-pubkey string                                  SSH public certificate for public certificate based authentication
           --sftp-pubkey-file string                             Optional path to public key file
           --sftp-server-command string                          Specifies the path or command to run a sftp server on the remote host
           --sftp-set-env SpaceSepList                           Environment variables to pass to sftp and commands
    @@ -12717,6 +13037,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --sftp-user string                                    SSH username (default "$USER")
           --sharefile-auth-url string                           Auth server URL
           --sharefile-chunk-size SizeSuffix                     Upload chunk size (default 64Mi)
    +      --sharefile-client-credentials                        Use client credentials OAuth flow
           --sharefile-client-id string                          OAuth Client Id
           --sharefile-client-secret string                      OAuth Client Secret
           --sharefile-description string                        Description of the remote
    @@ -12806,6 +13127,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --uptobox-description string                          Description of the remote
           --uptobox-encoding Encoding                           The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
           --uptobox-private                                     Set to make uploaded files private
    +      --webdav-auth-redirect                                Preserve authentication on redirect
           --webdav-bearer-token string                          Bearer token instead of user/pass (e.g. a Macaroon)
           --webdav-bearer-token-command string                  Command to run to get a bearer token
           --webdav-description string                           Description of the remote
    @@ -12821,6 +13143,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --webdav-user string                                  User name
           --webdav-vendor string                                Name of the WebDAV site/service/software you are using
           --yandex-auth-url string                              Auth server URL
    +      --yandex-client-credentials                           Use client credentials OAuth flow
           --yandex-client-id string                             OAuth Client Id
           --yandex-client-secret string                         OAuth Client Secret
           --yandex-description string                           Description of the remote
    @@ -12830,13 +13153,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --yandex-token string                                 OAuth Access Token as a JSON blob
           --yandex-token-url string                             Token server url
           --zoho-auth-url string                                Auth server URL
    +      --zoho-client-credentials                             Use client credentials OAuth flow
           --zoho-client-id string                               OAuth Client Id
           --zoho-client-secret string                           OAuth Client Secret
           --zoho-description string                             Description of the remote
           --zoho-encoding Encoding                              The encoding for the backend (default Del,Ctl,InvalidUtf8)
           --zoho-region string                                  Zoho region to connect to
           --zoho-token string                                   OAuth Access Token as a JSON blob
    -      --zoho-token-url string                               Token server url
+      --zoho-token-url string                               Token server url
+      --zoho-upload-cutoff SizeSuffix                       Cutoff for switching to large file upload api (>= 10 MiB) (default 10Mi)

    Docker Volume Plugin

    Introduction

Docker 1.9 added support for creating named volumes via the command-line interface and mounting them in containers as a way to share data between them. Since Docker 1.10 you can create named volumes with Docker Compose by describing them in docker-compose.yml files for use by container groups on a single host. As of Docker 1.12 volumes are supported by Docker Swarm included with Docker Engine and created from descriptions in swarm compose v3 files for use with swarm stacks across multiple cluster nodes.

    @@ -12909,7 +13234,7 @@ docker volume inspect vol1

    is equivalent to the combined syntax

    -o remote=:backend:dir/subdir

    but is arguably easier to parameterize in scripts. The path part is optional.

    -

    Mount and VFS options as well as backend parameters are named like their twin command-line flags without the -- CLI prefix. Optionally you can use underscores instead of dashes in option names. For example, --vfs-cache-mode full becomes -o vfs-cache-mode=full or -o vfs_cache_mode=full. Boolean CLI flags without value will gain the true value, e.g. --allow-other becomes -o allow-other=true or -o allow_other=true.

    +

    Mount and VFS options as well as backend parameters are named like their twin command-line flags without the -- CLI prefix. Optionally you can use underscores instead of dashes in option names. For example, --vfs-cache-mode full becomes -o vfs-cache-mode=full or -o vfs_cache_mode=full. Boolean CLI flags without value will gain the true value, e.g. --allow-other becomes -o allow-other=true or -o allow_other=true.
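As a sketch, a volume backed by a hypothetical mys3: remote with full VFS caching could be created like this (the remote name and path are placeholders):

docker volume create my_vol -d rclone \
    -o remote=mys3:bucket/subdir \
    -o vfs_cache_mode=full \
    -o allow_other=true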

    Please note that you can provide parameters only for the backend immediately referenced by the backend type of mounted remote. If this is a wrapping backend like alias, chunker or crypt, you cannot provide options for the referred to remote or backend. This limitation is imposed by the rclone connection string parser. The only workaround is to feed plugin with rclone.conf or configure plugin arguments (see below).

    Special Volume Options

    mount-type determines the mount method and in general can be one of: mount, cmount, or mount2. This can be aliased as mount_type. It should be noted that the managed rclone docker plugin currently does not support the cmount method and mount2 is rarely needed. This option defaults to the first found method, which is usually mount so you generally won't need it.

@@ -13010,6 +13335,10 @@ sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version

PLUGID=123abc...
sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate

    though this is rarely needed.

    +

If the plugin fails to work properly, and only as a last resort after you have tried diagnosing with the above methods, you can try clearing the plugin's state. Note that all existing rclone docker volumes will probably have to be recreated. This might be needed because a reinstall doesn't clean up existing state files (to allow for easy restoration, as stated above).

    +
    docker plugin disable rclone # disable the plugin to ensure no interference
+sudo rm /var/lib/docker-plugins/rclone/cache/docker-plugin.state # remove the plugin state
    +docker plugin enable rclone # re-enable the plugin afterward

    Caveats

Finally I'd like to mention a caveat with updating volume settings. Docker CLI does not have a dedicated command like docker volume update. It may be tempting to invoke docker volume create with updated options on an existing volume, but there is a gotcha: the command will do nothing, and it won't even return an error. I hope that docker maintainers will fix this some day. In the meantime, be aware that you must remove your volume before recreating it with new settings:

    docker volume remove my_vol
    @@ -13420,8 +13749,9 @@ rclone copy Path1 Path2 [--create-empty-src-dirs]

    Lock file

When bisync is running, a lock file is created in the bisync working directory, typically at ~/.cache/rclone/bisync/PATH1..PATH2.lck on Linux. If bisync should crash or hang, the lock file will remain in place and block any further runs of bisync for the same paths. Delete the lock file as part of debugging the situation. The lock file effectively blocks follow-on (e.g., scheduled by cron) runs when the prior invocation is taking a long time. The lock file contains the PID of the blocking process, which may help in debugging. Lock files can be set to automatically expire after a certain amount of time, using the --max-lock flag.
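For example, a cron-friendly invocation might cap the lock age at 2 minutes so that a crashed run cannot block the next scheduled one (the paths are placeholders):

rclone bisync Path1 Path2 --max-lock 2m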

    Note that while concurrent bisync runs are allowed, be very cautious that there is no overlap in the trees being synched between concurrent runs, lest there be replicated files, deleted files and general mayhem.

    -

    Return codes

    -

    rclone bisync returns the following codes to calling program: - 0 on a successful run, - 1 for a non-critical failing run (a rerun may be successful), - 2 for a critically aborted run (requires a --resync to recover).

    +

    Exit codes

    +

    rclone bisync returns the following codes to calling program: - 0 on a successful run, - 1 for a non-critical failing run (a rerun may be successful), - 2 on syntax or usage error, - 7 for a critically aborted run (requires a --resync to recover).

    +

    See also the section about exit codes in main docs.
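As a minimal sketch, a wrapper script could branch on these exit codes (Path1 and Path2 are placeholder paths):

#!/bin/sh
rclone bisync Path1 Path2
case $? in
    0) echo "bisync succeeded" ;;
    1) rclone bisync Path1 Path2 ;;          # non-critical failure: a rerun may succeed
    7) rclone bisync Path1 Path2 --resync ;; # critical abort: requires --resync to recover
    *) echo "syntax/usage or other error" >&2 ;;
esac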

    Graceful Shutdown

    Bisync has a "Graceful Shutdown" mode which is activated by sending SIGINT or pressing Ctrl+C during a run. Once triggered, bisync will use best efforts to exit cleanly before the timer runs out. If bisync is in the middle of transferring files, it will attempt to cleanly empty its queue by finishing what it has started but not taking more. If it cannot do so within 30 seconds, it will cancel the in-progress transfers at that point and then give itself a maximum of 60 seconds to wrap up, save its state for next time, and exit. With the -vP flags you will see constant status updates and a final confirmation of whether or not the graceful shutdown was successful.

    At any point during the "Graceful Shutdown" sequence, a second SIGINT or Ctrl+C will trigger an immediate, un-graceful exit, which will leave things in a messier state. Usually a robust recovery will still be possible if using --recover mode, otherwise you will need to do a --resync.

    @@ -14249,6 +14579,7 @@ e/n/d/r/c/s/q> q
  • Linode Object Storage
  • Magalu Object Storage
  • Minio
• + Outscale
  • Petabox
  • Qiniu Cloud Object Storage (Kodo)
  • RackCorp Object Storage
@@ -14256,6 +14587,7 @@ e/n/d/r/c/s/q> q
  • Scaleway
  • Seagate Lyve Cloud
  • SeaweedFS
• + Selectel
  • StackPath
  • Storj
  • Synology C2 Object Storage
@@ -14443,7 +14775,7 @@ Choose a number from below, or type in your own value
    \ "STANDARD_IA"
  5 / One Zone Infrequent Access storage class
    \ "ONEZONE_IA"
- 6 / Glacier storage class
+ 6 / Glacier Flexible Retrieval storage class
    \ "GLACIER"
  7 / Glacier Deep Archive storage class
    \ "DEEP_ARCHIVE"
@@ -14529,6 +14861,48 @@ y/e/d>

    By default, rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly.

    You can disable this with the --s3-no-head option - see there for more details.

Setting this flag increases the chance of undetected upload failures.

    +

    Increasing performance

    +

    Using server-side copy

    +

    If you are copying objects between S3 buckets in the same region, you should use server-side copy. This is much faster than downloading and re-uploading the objects, as no data is transferred.

    +

    For rclone to use server-side copy, you must use the same remote for the source and destination.

    +
    rclone copy s3:source-bucket s3:destination-bucket
    +

    When using server-side copy, the performance is limited by the rate at which rclone issues API requests to S3. See below for how to increase the number of API requests rclone makes.

    +

    Increasing the rate of API requests

    +

You can increase the rate of API requests to S3 by increasing the parallelism using the --transfers and --checkers options.

    +

Rclone uses very conservative defaults for these settings, as not all providers support high request rates. Depending on your provider, you can significantly increase the number of transfers and checkers.

    +

For example, with AWS S3 you can increase the number of checkers to values like 200. If you are doing a server-side copy, you can also increase the number of transfers to 200.

    +
    rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
    +

    You will need to experiment with these values to find the optimal settings for your setup.

    +

    Data integrity

    +

    Rclone does its best to verify every part of an upload or download to the s3 provider using various hashes.

    +

Every HTTP transaction to/from the provider has an X-Amz-Content-Sha256 or a Content-Md5 header to guard against corruption of the HTTP body. The HTTP header is protected by the signature passed in the Authorization header.

    +

All communication with the provider is done over HTTPS for encryption and additional error protection.

    +

    Single part uploads

    + +

    Note that if the source does not have an MD5 then the single part uploads will not have hash protection. In this case it is recommended to use --s3-upload-cutoff 0 so all files are uploaded as multipart uploads.
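For example (the bucket name is a placeholder):

rclone copy --s3-upload-cutoff 0 /path/to/files s3:bucket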

    +

    Multipart uploads

    +

    For files above --s3-upload-cutoff rclone splits the file into multiple parts for upload.

    + +

    When rclone has finished the upload of all the parts it then completes the upload by sending:

    + +

    The provider checks the MD5 for all the parts it has received against what rclone sends and if it is good it returns OK.

    +

Rclone then does a HEAD request (disable with --s3-no-head) and checks the ETag is what it expects (in this case it should be the MD5 sum of the MD5 sums of all the parts, with the number of parts appended).

    +

If the source has an MD5 sum then rclone will attach it as the X-Amz-Meta-Md5chksum, since the ETag of a multipart upload can't easily be checked against the file: the chunk size must be known in order to calculate it.

    +

    Downloads

    +

    Rclone checks the MD5 hash of the data downloaded against either the ETag or the X-Amz-Meta-Md5chksum metadata (if present) which rclone uploads with multipart uploads.

    +

    Further checking

    +

    At each stage rclone and the provider are sending and checking hashes of everything. Rclone deliberately HEADs each object after upload to check it arrived safely for extra security. (You can disable this with --s3-no-head).

    +

    If you require further assurance that your data is intact you can use rclone check to check the hashes locally vs the remote.

    +

    And if you are feeling ultimately paranoid use rclone check --download which will download the files and check them against the local copies. (Note that this doesn't use disk to do this - it streams them in memory).
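For example (the local path and bucket name are placeholders):

rclone check /path/to/local s3:bucket
rclone check --download /path/to/local s3:bucket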

    Versions

When bucket versioning is enabled (this can be done with rclone using the rclone backend versioning command), uploading a new version of a file creates a new version of it. Likewise, when you delete a file, the old version will be marked hidden and still be available.

    Old versions of files, where available, are visible using the --s3-versions flag.
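For example, versioning can be enabled and old versions listed like this (the bucket name is a placeholder):

rclone backend versioning s3:bucket Enabled
rclone -q --s3-versions ls s3:bucket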

    @@ -14611,7 +14985,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test -

    Multipart uploads

    +

    Multipart uploads

    rclone supports multipart uploads with S3 which means that it can upload files bigger than 5 GiB.

    Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.

    rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).

    @@ -14715,7 +15089,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test

As mentioned in the Modification times and hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set --s3-upload-cutoff 0 and force all files to be uploaded as multipart.

    Standard options

    -

    Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).

    +

    Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).

    --s3-provider

    Choose your S3 provider.

    Properties:

    @@ -14806,6 +15180,10 @@ $ rclone -q --s3-versions ls s3:cleanup-test +
  • "Outscale" +
  • "Petabox" +

--s3-directory-buckets

    +

    Set to use AWS Directory Buckets

    +

    If you are using an AWS Directory Bucket then set this flag.

    +

    This will ensure no Content-Md5 headers are sent and ensure ETag headers are not interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all objects whether single or multipart uploaded.

    +

    This also sets no_check_bucket = true.

    +

    Note that Directory Buckets do not support:

    + +

    Rclone limitations with Directory Buckets:

    + +

    Properties:

    +

    --s3-sdk-log-mode

    Set to debug the SDK

    This can be set to a comma separated list of the following functions:

    @@ -16177,6 +16585,15 @@ provider = AWS

    Providers

    AWS S3

This is the provider used as the main example and described in the configuration section above.

    +

    AWS Directory Buckets

    +

    From rclone v1.69 Directory Buckets are supported.

    +

    You will need to set the directory_buckets = true config parameter or use --s3-directory-buckets.
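A minimal config sketch (the remote name and region are placeholders - check the AWS documentation for the correct directory bucket endpoint for your availability zone):

[dirbuckets]
type = s3
provider = AWS
region = us-east-1
directory_buckets = true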

    +

    Note that rclone cannot yet:

    + +

See the --s3-directory-buckets flag for more info.

    AWS Snowball Edge

    AWS Snowball is a hardware appliance used for transferring bulk data back to AWS. Its main software interface is S3 object storage.

    To use rclone with AWS Snowball Edge devices, configure as standard for an 'S3 Compatible Service'.

    @@ -16302,6 +16719,7 @@ acl = private

    Now run rclone lsf r2: to see your buckets and rclone lsf r2:bucket to look within a bucket.

    For R2 tokens with the "Object Read & Write" permission, you may also need to add no_check_bucket = true for object uploads to work correctly.

Note that Cloudflare decompresses files uploaded with Content-Encoding: gzip by default, which is a deviation from what AWS does. If this is causing a problem then upload the files with --header-upload "Cache-Control: no-transform".

    +

    A consequence of this is that Content-Encoding: gzip will never appear in the metadata on Cloudflare.
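For example, using the r2: remote configured above (the file path and bucket name are placeholders):

rclone copy --header-upload "Cache-Control: no-transform" /path/to/file.gz r2:bucket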

    Dreamhost

    Dreamhost DreamObjects is an object storage system based on CEPH.

    To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

    @@ -16903,6 +17321,125 @@ location_constraint = server_side_encryption =

    So once set up, for example, to copy files into a bucket

    rclone copy /path/to/files minio:bucket
    +

    Outscale

    +

    OUTSCALE Object Storage (OOS) is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, a brand of Dassault Systèmes. For more information about OOS, see the official documentation.

    +

    Here is an example of an OOS configuration that you can paste into your rclone configuration file:

    +
    [outscale]
    +type = s3
    +provider = Outscale
    +env_auth = false
    +access_key_id = ABCDEFGHIJ0123456789
    +secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    +region = eu-west-2
    +endpoint = oos.eu-west-2.outscale.com
    +acl = private
    +

    You can also run rclone config to go through the interactive setup process:

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +
    Enter name for new remote.
    +name> outscale
    +
    Option Storage.
    +Type of storage to configure.
    +Choose a number from below, or type in your own value.
    +[snip]
    + X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others
    +   \ (s3)
    +[snip]
    +Storage> outscale
    +
    Option provider.
    +Choose your S3 provider.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    +[snip]
    +XX / OUTSCALE Object Storage (OOS)
    +   \ (Outscale)
    +[snip]
    +provider> Outscale
    +
    Option env_auth.
    +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    +Only applies if access_key_id and secret_access_key is blank.
    +Choose a number from below, or type in your own boolean value (true or false).
    +Press Enter for the default (false).
    + 1 / Enter AWS credentials in the next step.
    +   \ (false)
    + 2 / Get AWS credentials from the environment (env vars or IAM).
    +   \ (true)
    +env_auth> 
    +
    Option access_key_id.
    +AWS Access Key ID.
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +access_key_id> ABCDEFGHIJ0123456789
    +
    Option secret_access_key.
    +AWS Secret Access Key (password).
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    +
    Option region.
    +Region where your bucket will be created and your data stored.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / Paris, France
    +   \ (eu-west-2)
    + 2 / New Jersey, USA
    +   \ (us-east-2)
    + 3 / California, USA
    +   \ (us-west-1)
    + 4 / SecNumCloud, Paris, France
    +   \ (cloudgouv-eu-west-1)
    + 5 / Tokyo, Japan
    +   \ (ap-northeast-1)
    +region> 1
    +
    Option endpoint.
    +Endpoint for S3 API.
    +Required when using an S3 clone.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / Outscale EU West 2 (Paris)
    +   \ (oos.eu-west-2.outscale.com)
+ 2 / Outscale US East 2 (New Jersey)
+   \ (oos.us-east-2.outscale.com)
+ 3 / Outscale US West 1 (California)
+   \ (oos.us-west-1.outscale.com)
    + 4 / Outscale SecNumCloud (Paris)
    +   \ (oos.cloudgouv-eu-west-1.outscale.com)
    + 5 / Outscale AP Northeast 1 (Japan)
    +   \ (oos.ap-northeast-1.outscale.com)
    +endpoint> 1
    +
    Option acl.
    +Canned ACL used when creating buckets and storing or copying objects.
    +This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
    +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
    +Note that this ACL is applied when server-side copying objects as S3
    +doesn't copy the ACL from the source but rather writes a fresh one.
    +If the acl is an empty string then no X-Amz-Acl: header is added and
    +the default (private) will be used.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    +   / Owner gets FULL_CONTROL.
    + 1 | No one else has access rights (default).
    +   \ (private)
    +[snip]
    +acl> 1
    +
    Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> n
    +
    Configuration complete.
    +Options:
    +- type: s3
    +- provider: Outscale
    +- access_key_id: ABCDEFGHIJ0123456789
    +- secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    +- endpoint: oos.eu-west-2.outscale.com
    +Keep this "outscale" remote?
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y

    Qiniu Cloud Object Storage (Kodo)

Qiniu Cloud Object Storage (Kodo) is built on Qiniu's independently developed core technology and, proven by extensive customer use, holds a leading market position. Kodo can be widely applied to mass data management.

    To configure access to Qiniu Kodo, follow the steps below:

@@ -17123,7 +17660,7 @@ acl = private
upload_cutoff = 5M
chunk_size = 5M
copy_cutoff = 5M
-

    C14 Cold Storage is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" storage_class. So you can configure your remote with the storage_class = GLACIER option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)

    +

Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway, and it works the same way as on S3 by accepting the "GLACIER" storage_class. So you can configure your remote with storage_class = GLACIER to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back; you will need to restore them to the "STANDARD" storage_class first (see the "restore" section above).

    Seagate Lyve Cloud

    Seagate Lyve Cloud is an S3 compatible object storage platform from Seagate intended for enterprise use.

    Here is a config run through for a remote called remote - you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.

    @@ -17252,6 +17789,105 @@ secret_access_key = any endpoint = localhost:8333

    So once set up, for example to copy files into a bucket

    rclone copy /path/to/files seaweedfs_s3:foo
    +

    Selectel

    +

    Selectel Cloud Storage is an S3 compatible storage system which features triple redundancy storage, automatic scaling, high availability and a comprehensive IAM system.

    +

    Selectel have a section on their website for configuring rclone which shows how to make the right API keys.

    +

From rclone v1.69 Selectel is a supported provider - please choose the Selectel provider type.

    +

    Note that you should use "vHosted" access for the buckets (which is the recommended default), not "path style".
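The Selectel provider should default to vHosted addressing; if you ever need to set it explicitly, this is the relevant line for the remote's config (assuming the s3 backend's force_path_style option):

force_path_style = false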

    +

You can use rclone config to make a new remote like this

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +
    +Enter name for new remote.
    +name> selectel
    +
    +Option Storage.
    +Type of storage to configure.
    +Choose a number from below, or type in your own value.
    +[snip]
    +XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ...
    +   \ (s3)
    +[snip]
    +Storage> s3
    +
    +Option provider.
    +Choose your S3 provider.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    +[snip]
    +XX / Selectel Object Storage
    +   \ (Selectel)
    +[snip]
    +provider> Selectel
    +
    +Option env_auth.
    +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    +Only applies if access_key_id and secret_access_key is blank.
    +Choose a number from below, or type in your own boolean value (true or false).
    +Press Enter for the default (false).
    + 1 / Enter AWS credentials in the next step.
    +   \ (false)
    + 2 / Get AWS credentials from the environment (env vars or IAM).
    +   \ (true)
    +env_auth> 1
    +
    +Option access_key_id.
    +AWS Access Key ID.
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +access_key_id> ACCESS_KEY
    +
    +Option secret_access_key.
    +AWS Secret Access Key (password).
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +secret_access_key> SECRET_ACCESS_KEY
    +
    +Option region.
    +Region where your data stored.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / St. Petersburg
    +   \ (ru-1)
    +region> 1
    +
    +Option endpoint.
    +Endpoint for Selectel Object Storage.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / Saint Petersburg
    +   \ (s3.ru-1.storage.selcloud.ru)
    +endpoint> 1
    +
    +Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> n
    +
    +Configuration complete.
    +Options:
    +- type: s3
    +- provider: Selectel
    +- access_key_id: ACCESS_KEY
    +- secret_access_key: SECRET_ACCESS_KEY
    +- region: ru-1
    +- endpoint: s3.ru-1.storage.selcloud.ru
    +Keep this "selectel" remote?
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    And your config should end up looking like this:

    +
    [selectel]
    +type = s3
    +provider = Selectel
    +access_key_id = ACCESS_KEY
    +secret_access_key = SECRET_ACCESS_KEY
    +region = ru-1
    +endpoint = s3.ru-1.storage.selcloud.ru

    Wasabi

    Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost.

    Wasabi provides an S3 interface which can be configured for use with rclone like this.

    @@ -18322,7 +18958,7 @@ cos s3

For Netease NOS, configure as per the configurator rclone config, setting the provider Netease. This will automatically set force_path_style = false which is necessary for it to run properly.

    Petabox

    Here is an example of making a Petabox configuration. First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    No remotes found, make a new one?
     n) New remote
    @@ -19054,6 +19690,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
         {
             "daysFromHidingToDeleting": 1,
             "daysFromUploadingToHiding": null,
    +        "daysFromStartingToCancelingUnfinishedLargeFiles": null,
             "fileNamePrefix": ""
         }
     ]
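For example, assuming the lifecycle command accepts the new rule as a -o option like the existing ones, unfinished large files could be set to be cancelled after a day:

rclone backend lifecycle b2:bucket -o daysFromStartingToCancelingUnfinishedLargeFiles=1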
    @@ -19069,6 +19706,7 @@ rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHid

    Options:

    cleanup

@@ -19141,7 +19779,7 @@ If not sure try Y. If Y failed, try N.
 y) Yes
 n) No
 y/n> y
-If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXXXXXXXXXXXXXXXXXXXXX
 Log in and authorize rclone for access
 Waiting for code...
 Got code
@@ -19384,6 +20022,16 @@ y/e/d> y
  • Type: string
  • Required: false
  • +

    --box-client-credentials

    +

    Use client credentials OAuth flow.

    +

This will use the OAuth2 Client Credentials Flow as described in RFC 6749.

    +

    Properties:

    +
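For example, a remote using this flow might be created non-interactively like this (the remote name and the XXX credentials are placeholders):

rclone config create mybox box client_id XXX client_secret XXX client_credentials true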

    --box-root-folder-id

    Fill in for rclone to use a non root folder as its starting point.

    Properties:

    @@ -20205,9 +20853,173 @@ y/e/d> y
  • Type: string
  • Required: false
  • +

    Cloudinary

    +

This is a backend for the Cloudinary platform.

    +

    About Cloudinary

    +

Cloudinary is an image and video API platform, trusted by 1.5 million developers and 10,000 enterprise and hyper-growth companies as a critical part of their tech stack to deliver visually engaging experiences.

    +

    Accounts & Pricing

    +

    To use this backend, you need to create a free account on Cloudinary. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See the pricing details.

    +

    Securing Your Credentials

    +

    Please refer to the docs

    +

    Configuration

    +

    Here is an example of making a Cloudinary configuration.

    +

    First, create a cloudinary.com account and choose a plan.

    +

    You will need to log in and get the API Key and API Secret for your account from the developer section.

    +

    Now run

    +

    rclone config

    +

    Follow the interactive setup process:

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +
    +Enter the name for the new remote.
    +name> cloudinary-media-library
    +
    +Option Storage.
    +Type of storage to configure.
    +Choose a number from below, or type in your own value.
    +[snip]
    +XX / cloudinary.com
    +\ (cloudinary)
    +[snip]
    +Storage> cloudinary
    +
    +Option cloud_name.
    +You can find your cloudinary.com cloud_name in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
    +Enter a value.
    +cloud_name> ****************************
    +
    +Option api_key.
    +You can find your cloudinary.com api key in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
    +Enter a value.
    +api_key> ****************************
    +
    +Option api_secret.
    +You can find your cloudinary.com api secret in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
    +This value must be a single character, one of the following: y, g.
    +y/g> y
    +Enter a value.
    +api_secret> ****************************
    +
    +Option upload_prefix.
    +[Upload prefix](https://cloudinary.com/documentation/cloudinary_sdks#configuration_parameters) to specify alternative data center
    +Enter a value.
    +upload_prefix>
    +
    +Option upload_preset.
    +[Upload presets](https://cloudinary.com/documentation/upload_presets) can be defined for different upload profiles
    +Enter a value.
    +upload_preset>
    +
    +Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> n
    +
    +Configuration complete.
    +Options:
    +- type: cloudinary
    +- api_key: ****************************
    +- api_secret: ****************************
    +- cloud_name: ****************************
    +- upload_prefix:
    +- upload_preset:
    +
    +Keep this "cloudinary-media-library" remote?
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    List directories in the top level of your Media Library

    +

    rclone lsd cloudinary-media-library:

    +

    Make a new directory.

    +

    rclone mkdir cloudinary-media-library:directory

    +

    List the contents of a directory.

    +

    rclone ls cloudinary-media-library:directory

    +

    Modified time and hashes

    +

Cloudinary automatically stores an MD5 hash and a timestamp for every successful Put; these are read-only.
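The stored hashes can be read back with, for example:

rclone md5sum cloudinary-media-library:directory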

    +

    Standard options

    +

    Here are the Standard options specific to cloudinary (Cloudinary).

    +

    --cloudinary-cloud-name

    +

    Cloudinary Environment Name

    +

    Properties:

    + +

    --cloudinary-api-key

    +

    Cloudinary API Key

    +

    Properties:

    + +

    --cloudinary-api-secret

    +

    Cloudinary API Secret

    +

    Properties:

    + +

    --cloudinary-upload-prefix

    +

    Specify the API endpoint for environments out of the US

    +

    Properties:

    + +

    --cloudinary-upload-preset

    +

    Upload Preset to select asset manipulation on upload

    +

    Properties:

    + +

    Advanced options

    +

    Here are the Advanced options specific to cloudinary (Cloudinary).

    +

    --cloudinary-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

    + +

    --cloudinary-eventually-consistent-delay

    +

    Wait N seconds for eventual consistency of the databases that support the backend operation

    +

    Properties:

    + +

    --cloudinary-description

    +

    Description of the remote.

    +

    Properties:

    +

    Citrix ShareFile

Citrix ShareFile is a secure file sharing and transfer service aimed at businesses.

    -

    Configuration

    +

    Configuration

The initial setup for Citrix ShareFile involves getting a token from Citrix ShareFile which you can do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -20360,7 +21172,7 @@ y/e/d> y

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to sharefile (Citrix Sharefile).

    --sharefile-client-id

    OAuth Client Id.

    @@ -20415,7 +21227,7 @@ y/e/d> y -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to sharefile (Citrix Sharefile).

    --sharefile-token

    OAuth Access Token as a JSON blob.

    @@ -20446,6 +21258,16 @@ y/e/d> y
  • Type: string
  • Required: false
  • +

    --sharefile-client-credentials

    +

    Use client credentials OAuth flow.

    +

This will use the OAuth2 Client Credentials Flow as described in RFC 6749.

    +

    Properties:

    +

    --sharefile-upload-cutoff

    Cutoff for switching to multipart upload.

    Properties:

    @@ -20508,7 +21330,7 @@ y/e/d> y

The encryption is a secret-key encryption (also called symmetric key encryption) algorithm, where a password (or pass phrase) is used to generate the real encryption key. The password can be supplied by the user, or you may choose to let rclone generate one. It will be stored in the configuration file, in a lightly obscured form. If you are in an environment where you are not able to keep your configuration secured, you should add configuration encryption as protection. As long as you have this configuration file, you will be able to decrypt your data. Without the configuration file, as long as you remember the password (or keep it in a safe place), you can re-create the configuration and gain access to the existing data. You may also configure a corresponding remote in a different installation to access the same data. See below for guidance on changing the password.

Encryption uses a cryptographic salt to permute the encryption key, so that the same string may be encrypted in different ways. When configuring the crypt remote it is optional to enter a salt, or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique string. Normally in cryptography the salt is stored together with the encrypted content and does not have to be memorized by the user. This is not the case in rclone, because rclone does not store any additional information on the remotes. Use of a custom salt is effectively a second password that must be memorized.

File content encryption is performed using NaCl SecretBox, based on the XSalsa20 cipher and Poly1305 for integrity. Names (file and directory names) are also encrypted by default, but this has some implications and it is therefore possible to turn it off.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called secret.

    To use crypt, first set up the underlying remote. Follow the rclone config instructions for the specific backend.

    Before configuring the crypt remote, check the underlying remote is working. In this example the underlying remote is called remote. We will configure a path path within this remote to contain the encrypted content. Anything inside remote:path will be encrypted and anything outside will not.

    @@ -20698,7 +21520,7 @@ $ rclone -q ls secret:

    Crypt stores modification times using the underlying remote so support depends on that.

    Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.

    Use the rclone cryptcheck command to check the integrity of an encrypted remote instead of rclone check which can't check the checksums properly.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to crypt (Encrypt/Decrypt a remote).

    --crypt-remote

    Remote to encrypt/decrypt.

    @@ -20778,7 +21600,7 @@ $ rclone -q ls secret:
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote).

    --crypt-server-side-across-configs

    Deprecated: use --server-side-across-configs instead.

    @@ -20984,7 +21806,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile

    Warning

    This remote is currently experimental. Things may break and data may be lost. Anything you do with this remote is at your own risk. Please understand the risks associated with using experimental code and don't use this remote in critical applications.

    The Compress remote adds compression to another remote. It is best used with remotes containing many large compressible files.

    -

    Configuration

    +

    Configuration

    To use this remote, all you need to do is specify another remote and a compression mode to use:

    Current remotes:
     
    @@ -21039,7 +21861,7 @@ y/e/d> y

    If you open a remote wrapped by compress, you will see that there are many files with an extension corresponding to the compression algorithm you chose. These files are standard files that can be opened by various archive programs, but they have some hidden metadata that allows them to be used by rclone. While you may download and decompress these files at will, do not manually delete or rename files. Files without correct metadata files will not be recognized by rclone.

    File names

The compressed files will be named *.###########.gz where * is the base file name and the # part is the base64 encoded size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to compress (Compress a remote).

    --compress-remote

    Remote to compress.

    @@ -21066,7 +21888,7 @@ y/e/d> y -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to compress (Compress a remote).

    --compress-level

    GZIP compression level (-2 to 9).

    @@ -21125,7 +21947,7 @@ y/e/d> y

    You'd do this by specifying an upstreams parameter in the config like this

    upstreams = images=s3:imagesbucket files=drive:important/files

During the initial setup with rclone config you will specify the upstream remotes as a space separated list. The upstream remotes can be either local paths or other remotes.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a combine called remote for the example above. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -21180,7 +22002,7 @@ type = combine upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"

    If you then add that config to your config file (find it with rclone config file) then you can access all the shared drives in one place with the AllDrives: remote.

    See the Google Drive docs for full info.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to combine (Combine several remotes into one).

    --combine-upstreams

    Upstreams for combining

    @@ -21196,7 +22018,7 @@ upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"Type: SpaceSepList
  • Default:
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to combine (Combine several remotes into one).

    --combine-description

    Description of the remote.

    @@ -21213,7 +22035,7 @@ upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"Dropbox

    Paths are specified as remote:path

    Dropbox paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -21337,7 +22159,7 @@ y/e/d> y

This provides the maximum possible upload speed, especially with lots of small files; however, rclone can't check that the file was uploaded properly using this mode.

    If you are using this mode then using "rclone check" after the transfer completes is recommended. Or you could do an initial transfer with --dropbox-batch-mode async then do a final transfer with --dropbox-batch-mode sync (the default).

    Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to dropbox (Dropbox).

    --dropbox-client-id

    OAuth Client Id.

    @@ -21359,7 +22181,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to dropbox (Dropbox).

    --dropbox-token

    OAuth Access Token as a JSON blob.

    @@ -21390,6 +22212,16 @@ y/e/d> y
  • Type: string
  • Required: false
  • +

    --dropbox-client-credentials

    +

    Use client credentials OAuth flow.

    +

This will use the OAuth2 Client Credentials Flow as described in RFC 6749.

    +

    Properties:

    +

    --dropbox-chunk-size

    Upload chunk size (< 150Mi).

    Any files larger than this will be uploaded in chunks of this size.

    @@ -21554,7 +22386,7 @@ y/e/d> y

    Enterprise File Fabric

    This backend supports Storage Made Easy's Enterprise File Fabric™ which provides a software solution to integrate and unify File and Object Storage accessible through a global file system.

    -

    Configuration

    +

    Configuration

    The initial setup for the Enterprise File Fabric backend involves getting a token from the Enterprise File Fabric which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -21648,7 +22480,7 @@ y/e/d> y 120673757,My contacts/ 120673761,S3 Storage/

    The ID for "S3 Storage" would be 120673761.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to filefabric (Enterprise File Fabric).

    --filefabric-url

    URL of the Enterprise File Fabric to connect to.

    @@ -21697,7 +22529,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to filefabric (Enterprise File Fabric).

    --filefabric-token

    Session Token.

    @@ -21752,7 +22584,7 @@ y/e/d> y

    Files.com

    Files.com is a cloud storage service that provides a secure and easy way to store and share files.

    The initial setup for filescom involves authenticating with your Files.com account. You can do this by providing your site subdomain, username, and password. Alternatively, you can authenticate using an API Key from Files.com. rclone config walks you through it.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

    rclone config

    This will guide you through an interactive setup process:

    @@ -21821,7 +22653,7 @@ y/e/d> y
    rclone ls remote:

    Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

    rclone sync --interactive /home/local/directory remote:dir
    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to filescom (Files.com).

    --filescom-site

    Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com).

    @@ -21851,7 +22683,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to filescom (Files.com).

    --filescom-api-key

    The API key used to authenticate with Files.com.

    @@ -21885,7 +22717,7 @@ y/e/d> y

    FTP is the File Transfer Protocol. Rclone FTP support is provided using the github.com/jlaffaye/ftp package.

    Limitations of Rclone's FTP backend

    Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.

    -

    Configuration

    +

    Configuration

    To create an FTP configuration named remote, run

    rclone config

    Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, see below.

    @@ -22001,7 +22833,7 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47

    This backend's interactive configuration wizard provides a selection of sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd. Just hit a selection number when prompted.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to ftp (FTP).

    --ftp-host

    FTP host to connect to.

    @@ -22061,7 +22893,7 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
  • Type: bool
  • Default: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to ftp (FTP).

    --ftp-concurrency

    Maximum number of FTP simultaneous connections, 0 for unlimited.

    @@ -22190,12 +23022,9 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47

    --ftp-socks-proxy

    Socks 5 proxy host.

    -
        Supports the format user:pass@host:port, user@host:port, host:port.
    -    
    -    Example:
    -    
    -        myUser:myPass@localhost:9005
    -    
    +

    Supports the format user:pass@host:port, user@host:port, host:port.

    +

    Example:

    +
    myUser:myPass@localhost:9005

    Properties:

    +

    --ftp-no-check-upload

    +

    Don't check the upload is OK

    +

    Normally rclone will try to check the upload exists after it has uploaded a file to make sure the size and modification time are as expected.

    +

    This flag stops rclone doing these checks. This enables uploading to folders which are write only.

    +

You will likely also need to use the --inplace flag if uploading to a write-only folder.

    +

    Properties:

    +
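For example, uploading into a write-only drop folder (the remote name and paths are placeholders):

rclone copy --ftp-no-check-upload --inplace /path/to/file ftp-remote:incoming/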

    --ftp-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

    @@ -22255,7 +23096,7 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47

    Gofile is a content storage and distribution platform. Its aim is to provide as much service as possible for free or at a very low price.

    The initial setup for Gofile involves logging in to the web interface and going to the "My Profile" section. Copy the "Account API token" for use in the config file.

    Note that if you wish to connect rclone to Gofile you will need a premium account.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -22394,7 +23235,7 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/

    The ID to use is the part before the ; so you could set

    root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0

    To restrict rclone to the Files directory.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to gofile (Gofile).

    --gofile-access-token

    API Access token

    @@ -22406,7 +23247,7 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to gofile (Gofile).

    --gofile-root-folder-id

    ID of the root folder

    @@ -22468,7 +23309,7 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/

    Use rclone dedupe to fix duplicated files.

    Google Cloud Storage

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

    -

    Configuration

    +

    Configuration

    The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -22608,6 +23449,40 @@ y/e/d> y

    You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

    To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.

    To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.
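For example, a service-account remote could be created non-interactively like this (the remote name and key path are placeholders):

rclone config create mygcs "google cloud storage" service_account_file /path/to/credentials.json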

    +

    Service Account Authentication with Access Tokens

    +

Another option for service account authentication is to use access tokens via gcloud impersonate-service-account. Access tokens improve security by avoiding the use of the JSON key file, which can be leaked. They also bypass the OAuth login flow, which is simpler on remote VMs that lack a web browser.

    +

    If you already have a working service account, skip to step 3.

    +

    1. Create a service account using

    +
    gcloud iam service-accounts create gcs-read-only 
    +

    You can re-use an existing service account as well (like the one created above)

    +

    2. Attach a Viewer (read-only) or User (read-write) role to the service account

    +
     $ PROJECT_ID=my-project
    + $ gcloud --verbose iam service-accounts add-iam-policy-binding \
    +    gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com  \
    +    --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
    +    --role=roles/storage.objectViewer
    +

    Use the Google Cloud console to identify a limited role. Some relevant pre-defined roles:

    + +

    3. Get a temporary access key for the service account

    +
    $ gcloud auth application-default print-access-token  \
    +   --impersonate-service-account \
    +      gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com  
    +
    +ya29.c.c0ASRK0GbAFEewXD [truncated]
    +

    4. Update access_token setting

    +

Hit CTRL-C when you see "waiting for code". This will save the config without doing the OAuth flow.

    +
    rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx
    +

    5. Run rclone as usual

    +
    rclone ls dev-gcs:${MY_BUCKET}/
    +

    More Info on Service Accounts

    +

    Anonymous Access

For downloads of objects that permit public access you can configure rclone to use anonymous access by setting anonymous to true. With anonymous access you can't write or create files, but only read or list those buckets and objects that have public read access.
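A minimal sketch (the remote and bucket names are placeholders):

rclone config create pubgcs "google cloud storage" anonymous true
rclone ls pubgcs:some-public-bucket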

    Application Default Credentials

    @@ -22665,7 +23540,7 @@ y/e/d> y

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

    --gcs-client-id

    OAuth Client Id.

    @@ -23050,7 +23925,7 @@ y/e/d> y -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

    --gcs-token

    OAuth Access Token as a JSON blob.

    @@ -23081,6 +23956,26 @@ y/e/d> y
  • Type: string
  • Required: false
  • +

    --gcs-client-credentials

    +

    Use client credentials OAuth flow.

    +

This will use the OAuth2 Client Credentials Flow as described in RFC 6749.

    +

    Properties:

    + +

    --gcs-access-token

    +

    Short-lived access token.

    +

Leave blank normally. Needed only if you want to use a short-lived access token instead of interactive login.

    +

    Properties:

    +

    --gcs-directory-markers

    Upload an empty object with a trailing slash when a new directory is created

    Empty folders are unsupported for bucket based remotes, this option creates an empty object ending with "/", to persist the folder.

    @@ -23147,7 +24042,7 @@ y/e/d> y

    Google Drive

    Paths are specified as drive:path

    Drive paths may be as deep as required, e.g. drive:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
@@ -23538,81 +24433,86 @@ trashed=false and 'c' in parents
  JSON Text Format for Google Apps scripts
+ md | text/markdown | Markdown Text Format
  odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation
  ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet
  ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet
  odt | application/vnd.oasis.opendocument.text | Openoffice Document
  pdf | application/pdf | Adobe PDF Format
  pjpeg | image/pjpeg | Progressive JPEG Image
  png | image/png | PNG Image Format
  pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint
  rtf | application/rtf | Rich Text Format
  svg | image/svg+xml | Scalable Vector Graphics Format
  tsv | text/tab-separated-values | Standard TSV format for spreadsheets
  txt | text/plain | Plain Text
  wmf | application/x-msmetafile | Windows Meta File
  xls | application/vnd.ms-excel | Classic Excel file
  xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet
  zip | application/zip | A ZIP file of HTML, Images CSS
@@ -23651,7 +24551,7 @@ trashed=false and 'c' in parents

    Standard options

    +

    Standard options

    Here are the Standard options specific to drive (Google Drive).

    --drive-client-id

Google Application Client Id. Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.

    @@ -23728,7 +24628,7 @@ trashed=false and 'c' in parents
  • Type: bool
  • Default: false
    Advanced options

    Here are the Advanced options specific to drive (Google Drive).

    --drive-token

    OAuth Access Token as a JSON blob.

    @@ -23759,6 +24659,16 @@ trashed=false and 'c' in parents
  • Type: string
  • Required: false
    --drive-client-credentials

    Use client credentials OAuth flow.

    This will use the OAUTH2 client Credentials Flow as described in RFC 6749.

    Properties:

    --drive-root-folder-id

    ID of the root folder. Leave blank normally.

    Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.

    @@ -24526,6 +25436,22 @@ rclone backend copyid drive: ID1 path1 ID2 path2 "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC" } ]

    rescue

    Rescue or delete any orphaned files

        rclone backend rescue remote: [options] [<arguments>+]

    This command rescues or deletes any orphaned files or directories.

    Sometimes files can get orphaned in Google Drive. This means that they are no longer in any folder in Google Drive.

    This command finds those files and either rescues them to a directory you specify or deletes them.

    Usage:

    This can be used in 3 ways.

    First, list all orphaned files:

        rclone backend rescue drive:

    Second, rescue all orphaned files to the directory indicated:

        rclone backend rescue drive: "relative/path/to/rescue/directory"

    e.g. to rescue all orphans to a directory called "Orphans" in the top level:

        rclone backend rescue drive: Orphans

    Third, delete all orphaned files to the trash:

        rclone backend rescue drive: -o delete

    Limitations

    Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiB/s but lots of small files can take a long time.

    Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server-side copies with --disable copy to download and upload the files if you prefer.
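    For example, a minimal sketch of a copy with server-side copies disabled (the paths are illustrative):

        rclone copy --disable copy drive:source drive:dest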

    @@ -24568,7 +25494,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2
  • Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".

  • Choose an application type of "Desktop app" and click "Create". (the default name is fine)

  • It will show you a client ID and client secret. Make a note of these.

  • (If you selected "External" at Step 5 continue to Step 10. If you chose "Internal" you don't need to publish and can skip straight to Step 11 but your destination drive must be part of the same Google Workspace.)

  • Go to "Oauth consent screen" and then click the "PUBLISH APP" button and confirm. You will also want to add yourself as a test user.

  • Provide the noted client ID and client secret to rclone.

  • @@ -24578,7 +25504,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2

    Google Photos

    The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.

    NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.

    Configuration

    The initial setup for google photos involves getting a token from Google Photos which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -24733,7 +25659,7 @@ y/e/d> y

    This means that you can use the album path pretty much like a normal filesystem and it is a good target for repeated syncing.

    The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.

    Standard options

    Here are the Standard options specific to google photos (Google Photos).

    --gphotos-client-id

    OAuth Client Id.

    @@ -24765,7 +25691,7 @@ y/e/d> y
  • Type: bool
  • Default: false
    Advanced options

    Here are the Advanced options specific to google photos (Google Photos).

    --gphotos-token

    OAuth Access Token as a JSON blob.

    @@ -24796,6 +25722,16 @@ y/e/d> y
  • Type: string
  • Required: false
    --gphotos-client-credentials

    Use client credentials OAuth flow.

    This will use the OAUTH2 client Credentials Flow as described in RFC 6749.

    Properties:

    --gphotos-read-size

    Set to read the size of media items.

    Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.

    @@ -24828,6 +25764,24 @@ y/e/d> y
  • Type: bool
  • Default: false
    --gphotos-proxy

    Use the gphotosdl proxy for downloading the full resolution images

    The Google API will deliver images and videos which aren't full resolution, and/or have EXIF data missing.

    However if you use the gphotosdl proxy then you can download original, unchanged images.

    This runs a headless browser in the background.

    Download the software from gphotosdl

    First run with

        gphotosdl -login

    Then once you have logged into google photos close the browser window and run

        gphotosdl

    Then supply the parameter --gphotos-proxy "http://localhost:8282" to make rclone use the proxy.
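    For example, a hedged sketch of a download once the proxy is running (the remote name and paths are illustrative):

        rclone copy --gphotos-proxy "http://localhost:8282" remote:album/MyAlbum /path/to/local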

    Properties:

    --gphotos-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

    @@ -24915,8 +25869,10 @@ y/e/d> y

    Downloading Images

    When images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.

    The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort.

    NB you can use the --gphotos-proxy flag to use a headless browser to download images in full resolution.

    Downloading Videos

    When videos are downloaded they are downloaded in a heavily compressed version compared to downloading them via the Google Photos web interface. This is covered by bug #113672044.

    NB you can use the --gphotos-proxy flag to use a headless browser to download videos in full resolution.

    Duplicates

    If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).

    If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practise this shouldn't cause too many problems.

    @@ -25016,7 +25972,7 @@ rclone backend drop Hasher:
    rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1

    stickyimport is similar to import but works much faster because it does not need to stat existing files and skips initial tree walk. Instead of binding cache entries to file fingerprints it creates sticky entries bound to the file name alone ignoring size, modification time etc. Such hash entries can be replaced only by purge, delete, backend drop or by full re-read/re-write of the files.

    Configuration reference

    Standard options

    Here are the Standard options specific to hasher (Better checksums for other remotes).

    --hasher-remote

    Remote to cache checksums for (e.g. myRemote:path).

    @@ -25045,7 +26001,7 @@ rclone backend drop Hasher:
  • Type: Duration
  • Default: off
    Advanced options

    Here are the Advanced options specific to hasher (Better checksums for other remotes).

    --hasher-auto-size

    Auto-update checksum for files smaller than this size (disabled by default).

    @@ -25123,7 +26079,7 @@ rclone backend drop Hasher:

    HDFS

    HDFS is a distributed file-system, part of the Apache Hadoop framework.

    Paths are specified as remote: or remote:path/to/dir.

    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -25231,7 +26187,7 @@ username = root

    Invalid UTF-8 bytes will also be replaced.

    Standard options

    Here are the Standard options specific to hdfs (Hadoop distributed file system).

    --hdfs-namenode

    Hadoop name nodes and ports.

    @@ -25259,7 +26215,7 @@ username = root

    Advanced options

    Here are the Advanced options specific to hdfs (Hadoop distributed file system).

    --hdfs-service-principal-name

    Kerberos service principal name for the namenode.

    @@ -25316,7 +26272,7 @@ username = root

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    The initial setup for hidrive involves getting a token from HiDrive which you need to do in your browser. rclone config walks you through it.

    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -25414,7 +26370,7 @@ rclone lsd remote:/users/test/path

    By default, rclone will know the number of directory members contained in a directory. For example, rclone lsd uses this information.

    The acquisition of this information will result in additional time costs for HiDrive's API. When dealing with large directory structures, it may be desirable to circumvent this time cost, especially when this information is not explicitly needed. For this, the disable_fetching_member_count option can be used.

    See the below section about configuration options for more details.
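    For example, a hedged sketch assuming the usual rclone flag naming for the disable_fetching_member_count option (the remote name is illustrative):

        rclone lsd --hidrive-disable-fetching-member-count remote: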

    Standard options

    Here are the Standard options specific to hidrive (HiDrive).

    --hidrive-client-id

    OAuth Client Id.

    @@ -25456,7 +26412,7 @@ rclone lsd remote:/users/test/path

    Advanced options

    Here are the Advanced options specific to hidrive (HiDrive).

    --hidrive-token

    OAuth Access Token as a JSON blob.

    @@ -25487,6 +26443,16 @@ rclone lsd remote:/users/test/path
  • Type: string
  • Required: false
    --hidrive-client-credentials

    Use client credentials OAuth flow.

    This will use the OAUTH2 client Credentials Flow as described in RFC 6749.

    Properties:

    --hidrive-scope-role

    User-level that rclone should use when requesting access from HiDrive.

    Properties:

    @@ -25626,7 +26592,7 @@ rclone lsd remote:/users/test/path

    The remote: represents the configured url, and any path following it will be resolved relative to this url, according to the URL standard. This means with remote url https://beta.rclone.org/branch and path fix, the resolved URL will be https://beta.rclone.org/branch/fix, while with path /fix the resolved URL will be https://beta.rclone.org/fix as the absolute path is resolved from the root of the domain.

    If the path following the remote: ends with / it will be assumed to point to a directory. If the path does not end with /, then a HEAD request is sent and the response used to decide if it is treated as a file or a directory (run with -vv to see details). When --http-no-head is specified, a path without ending / is always assumed to be a file. If rclone incorrectly assumes the path is a file, the solution is to specify the path with ending /. When you know the path is a directory, ending it with / is always better as it avoids the initial HEAD request.

    To just download a single file it is easier to use copyurl.
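    For example, a minimal sketch (the URL and destination are illustrative; -a derives the destination file name from the URL):

        rclone copyurl -a https://example.com/file.zip remote:path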

    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -25690,7 +26656,7 @@ e/n/d/r/c/s/q> q
    rclone lsd --http-url https://beta.rclone.org :http:

    or:

    rclone lsd :http,url='https://beta.rclone.org':
    Standard options

    Here are the Standard options specific to http (HTTP).

    --http-url

    URL of HTTP host to connect to.

    @@ -25711,7 +26677,7 @@ e/n/d/r/c/s/q> q
  • Type: bool
  • Default: false
    Advanced options

    Here are the Advanced options specific to http (HTTP).

    --http-headers

    Set HTTP headers for all transactions.

    @@ -25788,9 +26754,9 @@ rclone rc backend/command command=set fs=remote: -o url=https://example.com

    This is a backend for the ImageKit.io storage service.

    About ImageKit

    ImageKit.io provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web.

    Accounts & Pricing

    To use this backend, you need to create an account on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See the pricing details.

    Configuration

    Here is an example of making an imagekit configuration.

    Firstly, create an ImageKit.io account and choose a plan.

    You will need to log in and get the publicKey and privateKey for your account from the developer section.

    @@ -25853,11 +26819,11 @@ y/e/d> y
    rclone mkdir imagekit-media-library:directory

    List the contents of a directory.

    rclone ls imagekit-media-library:directory
    Modified time and hashes

    ImageKit does not support modification times or hashes yet.

    Checksums

    No checksums are supported.

    Standard options

    Here are the Standard options specific to imagekit (ImageKit.io).

    --imagekit-endpoint

    You can find your ImageKit.io URL endpoint in your dashboard

    @@ -25886,7 +26852,7 @@ y/e/d> y
  • Type: string
  • Required: true
    Advanced options

    Here are the Advanced options specific to imagekit (ImageKit.io).

    --imagekit-only-signed

    If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true.

    @@ -26035,6 +27001,134 @@ y/e/d> y

    See the metadata docs for more info.

    iCloud Drive

    Configuration

    The initial setup for an iCloud Drive backend involves getting a trust token/session. This can be done by simply using the regular iCloud password, and accepting the code prompt on another iCloud connected device.

    IMPORTANT: At the moment an app specific password won't be accepted. Only use your regular password and 2FA.

    rclone config walks you through the token creation. The trust token is valid for 30 days, after which you will have to reauthenticate with rclone reconnect or rclone config.
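    For example, once the trust token has expired you can refresh it with (a sketch using the remote name from the example below):

        rclone config reconnect iclouddrive: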


    Here is an example of how to make a remote called iclouddrive. First run:

     rclone config

    This will guide you through an interactive setup process:

    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> iclouddrive
    Option Storage.
    Type of storage to configure.
    Choose a number from below, or type in your own value.
    [snip]
    XX / iCloud Drive
       \ (iclouddrive)
    [snip]
    Storage> iclouddrive
    Option apple_id.
    Apple ID.
    Enter a value.
    apple_id> APPLEID
    Option password.
    Password.
    Choose an alternative below.
    y) Yes, type in my own password
    g) Generate random password
    y/g> y
    Enter the password:
    password:
    Confirm the password:
    password:
    Edit advanced config?
    y) Yes
    n) No (default)
    y/n> n
    Option config_2fa.
    Two-factor authentication: please enter your 2FA code
    Enter a value.
    config_2fa> 2FACODE
    Remote config
    --------------------
    [iclouddrive]
    - type: iclouddrive
    - apple_id: APPLEID
    - password: *** ENCRYPTED ***
    - cookies: ****************************
    - trust_token: ****************************
    --------------------
    y) Yes this is OK (default)
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

    Advanced Data Protection

    ADP is currently unsupported and needs to be disabled.

    Standard options

    Here are the Standard options specific to iclouddrive (iCloud Drive).

    --iclouddrive-apple-id

    Apple ID.

    Properties:

    --iclouddrive-password

    Password.

    NB Input to this must be obscured - see rclone obscure.

    Properties:

    --iclouddrive-trust-token

    Trust token (internal use)

    Properties:

    --iclouddrive-cookies

    cookies (internal use only)

    Properties:

    Advanced options

    Here are the Advanced options specific to iclouddrive (iCloud Drive).

    --iclouddrive-client-id

    Client id

    Properties:

    --iclouddrive-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

    Properties:

    --iclouddrive-description

    Description of the remote.

    Properties:

    Internet Archive

    The Internet Archive backend utilizes Items on archive.org.

    Refer to IAS3 API documentation for the API this backend uses.

    @@ -26062,7 +27156,7 @@ y/e/d> y

    These auto-created files can be excluded from the sync using metadata filtering.

    rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata"

    This excludes from the sync any files which have the source=metadata or format=Metadata flags, which are added to Internet Archive auto-created files.

    Configuration

    Here is an example of making an internetarchive configuration. Most applies to the other providers as well; any differences are described below.

    First run

    rclone config
    @@ -26131,7 +27225,7 @@ y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y

    Standard options

    Here are the Standard options specific to internetarchive (Internet Archive).

    --internetarchive-access-key-id

    IAS3 Access Key.

    @@ -26153,7 +27247,7 @@ y/e/d> y
  • Type: string
  • Required: false
    Advanced options

    Here are the Advanced options specific to internetarchive (Internet Archive).

    --internetarchive-endpoint

    IAS3 Endpoint.

    @@ -26359,7 +27453,7 @@ Response: {"error":"invalid_grant","error_description&q

    Onlime has sold access to Jottacloud proper, while providing localized support to Danish customers, but has recently set up its own hosting, transferring its customers from Jottacloud servers to its own.

    This, of course, necessitates using their servers for authentication, but otherwise functionality and architecture seems equivalent to Jottacloud.

    To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication in the setup. The rest of the setup is identical to the default setup.

    Configuration

    Here is an example of how to make a remote called remote with the default setup. First run:

    rclone config

    This will guide you through an interactive setup process:

    @@ -26523,7 +27617,7 @@ y/e/d> y

    Versioning can be disabled with the --jottacloud-no-versions option. This is achieved by deleting the remote file prior to uploading a new version. If the upload then fails, no version of the file will be available in the remote.

    Quota information

    To view your current quota you can use the rclone about remote: command which will display your usage limit (unless it is unlimited) and the current usage.

    Standard options

    Here are the Standard options specific to jottacloud (Jottacloud).

    --jottacloud-client-id

    OAuth Client Id.

    @@ -26545,7 +27639,7 @@ y/e/d> y
  • Type: string
  • Required: false
    Advanced options

    Here are the Advanced options specific to jottacloud (Jottacloud).

    --jottacloud-token

    OAuth Access Token as a JSON blob.

    @@ -26576,6 +27670,16 @@ y/e/d> y
  • Type: string
  • Required: false
    --jottacloud-client-credentials

    Use client credentials OAuth flow.

    This will use the OAUTH2 client Credentials Flow as described in RFC 6749.

    Properties:

    --jottacloud-md5-memory-limit

    Files bigger than this will be cached on disk to calculate the MD5 if required.

    Properties:

    @@ -26702,7 +27806,7 @@ y/e/d> y

    Koofr

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    Configuration

    The initial setup for Koofr involves creating an application password for rclone. You can do that by opening the Koofr web application, giving the password a nice name like rclone and clicking on generate.

    Here is an example of how to make a remote called koofr. First run:

     rclone config
    @@ -26789,7 +27893,7 @@ y/e/d> y

    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

    Standard options

    Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).

    --koofr-provider

    Choose your storage provider.

    @@ -26845,7 +27949,7 @@ y/e/d> y
  • Type: string
  • Required: true
    Advanced options

    Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).

    --koofr-mountid

    Mount ID of the mount to use.

    @@ -27016,7 +28120,7 @@ d) Delete this remote y/e/d> y

    Linkbox

    Linkbox is a private cloud drive.

    Configuration

    Here is an example of making a remote for Linkbox.

    First run:

     rclone config
    @@ -27052,7 +28156,7 @@ e) Edit this remote d) Delete this remote y/e/d> y

    Standard options

    Here are the Standard options specific to linkbox (Linkbox).

    Token from https://www.linkbox.to/admin/account

    @@ -27063,7 +28167,7 @@ y/e/d> y
  • Type: string
  • Required: true
    Advanced options

    Here are the Advanced options specific to linkbox (Linkbox).

    Description of the remote.

    @@ -27089,7 +28193,7 @@ y/e/d> y
  • Storage keeps hash for all files and performs transparent deduplication, the hash algorithm is a modified SHA1
  • If a particular file is already present in storage, one can quickly submit file hash instead of long file upload (this optimization is supported by rclone)
    Configuration

    Here is an example of making a mailru configuration.

    First create a Mail.ru Cloud account and choose a tariff.

    You will need to log in and create an app password for rclone. Rclone will not work with your normal username and password - it will give an error like oauth2: server response missing access_token.

    @@ -27230,7 +28334,7 @@ y/e/d> y

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    Standard options

    Here are the Standard options specific to mailru (Mail.ru Cloud).

    --mailru-client-id

    OAuth Client Id.

    @@ -27293,7 +28397,7 @@ y/e/d> y

    Advanced options

    Here are the Advanced options specific to mailru (Mail.ru Cloud).

    --mailru-token

    OAuth Access Token as a JSON blob.

    @@ -27324,6 +28428,16 @@ y/e/d> y
  • Type: string
  • Required: false
    --mailru-client-credentials

    Use client credentials OAuth flow.

    This will use the OAUTH2 client Credentials Flow as described in RFC 6749.

    Properties:

    --mailru-speedup-file-patterns

    Comma separated list of file name patterns eligible for speedup (put by hash).

    Patterns are case insensitive and can contain '*' or '?' meta characters.
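    For example, a hedged sketch restricting speedup to a couple of archive types (the pattern list and paths are illustrative, not the defaults):

        rclone copy --mailru-speedup-file-patterns "*.zip,*.iso" /home/source remote:backup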

    @@ -27469,7 +28583,7 @@ y/e/d> y

    This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -27570,7 +28684,7 @@ me@example.com:/$

    Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 min, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though.

    Investigation is continuing in relation to workarounds based on timeouts, pacers, retrials and tpslimits - if you discover something relevant, please post on the forum.

    So, if rclone was working nicely and suddenly you are unable to log in even though you are sure the user and password are correct, it is likely that the remote has been blocked for a while.

    Standard options

    Here are the Standard options specific to mega (Mega).

    --mega-user

    User name.

    @@ -27591,7 +28705,7 @@ me@example.com:/$
  • Type: string
  • Required: true
    Advanced options

    Here are the Advanced options specific to mega (Mega).

    --mega-debug

    Output more debug from Mega.

    @@ -27650,7 +28764,7 @@ me@example.com:/$

    Memory

    The memory backend is an in RAM backend. It does not persist its data - use the local backend for that.

    The memory backend behaves like a bucket-based remote (e.g. like s3). Because it has no parameters you can just use it with the :memory: remote name.

    Configuration

    You can configure it as a remote like this with rclone config too if you want to:

    No remotes found, make a new one?
     n) New remote
    @@ -27686,7 +28800,7 @@ rclone serve sftp :memory:

    The memory backend supports MD5 hashes and modification times accurate to 1 ns.

    Restricted filename characters

    The memory backend replaces the default restricted characters set.

    Advanced options

    Here are the Advanced options specific to memory (In memory object storage system.).

    --memory-description

    Description of the remote.

    @@ -27701,7 +28815,7 @@ rclone serve sftp :memory:

    Paths are specified as remote: You may put subdirectories in too, e.g. remote:/path/to/dir. If you have a CP code you can use that as the folder after the domain such as <domain>/<cpcode>/<internal directories within cpcode>.

    For example, this is commonly configured with or without a CP code:
  • With a CP code: [your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/
  • Without a CP code: [your-domain-prefix]-nsu.akamaihd.net

    See all buckets with rclone lsd remote:

    The initial setup for Netstorage involves getting an account and secret. Use rclone config to walk you through the setup process.

    Configuration

    Here's an example of how to make a remote called ns1.

    1. To begin the interactive configuration process, enter this command:
    @@ -27809,7 +28923,7 @@ y/e/d> y

      Purge

      NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method.

      Note: Read the NetStorage Usage API for considerations when using "quick-delete". In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible.

    Standard options

      Here are the Standard options specific to netstorage (Akamai NetStorage).

      --netstorage-host

      Domain+path of NetStorage host to connect to.

      @@ -27841,7 +28955,7 @@ y/e/d> y
  • Type: string
  • Required: true

    Advanced options

      Here are the Advanced options specific to netstorage (Akamai NetStorage).

      --netstorage-protocol

      Select between HTTP or HTTPS protocol.

      @@ -27890,7 +29004,7 @@ y/e/d> y

    The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable.

        rclone backend symlink <src> <path>

      Microsoft Azure Blob Storage

      Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.

    Configuration

      Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called remote. First run:

       rclone config

      This will guide you through an interactive setup process:

      @@ -28028,6 +29142,7 @@ y/e/d> y
      Env Auth: 2. Managed Service Identity Credentials

      When using Managed Service Identity if the VM(SS) on which this program is running has a system-assigned identity, it will be used by default. If the resource has no system-assigned but exactly one user-assigned identity, the user-assigned identity will be used by default.

      If the resource has multiple user-assigned identities you will need to unset env_auth and set use_msi instead. See the use_msi section.


      If you are operating in disconnected clouds, or private clouds such as Azure Stack you may want to set disable_instance_discovery = true. This determines whether rclone requests Microsoft Entra instance metadata from https://login.microsoft.com/ before authenticating. Setting this to true will skip this request, making you responsible for ensuring the configured authority is valid and trustworthy.

      Env Auth: 3. Azure CLI credentials (as used by the az tool)

      Credentials created with the az tool can be picked up using env_auth.

      For example if you were to login with a service principal like this:

      @@ -28084,10 +29199,14 @@ container/

      If use_msi is set then managed service identity credentials are used. This authentication only works when running in an Azure service. env_auth needs to be unset to use this.

      However if you have multiple user identities to choose from these must be explicitly specified using exactly one of the msi_object_id, msi_client_id, or msi_mi_res_id parameters.

    If none of msi_object_id, msi_client_id, or msi_mi_res_id is set, this is equivalent to using env_auth.

    Azure CLI tool az

    Set to use the Azure CLI tool az as the sole means of authentication.

    Setting this can be useful if you wish to use the az CLI on a host with a System Managed Identity that you do not want to use.

    Don't set env_auth at the same time.
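    A minimal config sketch for this, assuming you are already logged in with az login (the remote and account names are illustrative):

        [azblob]
        type = azureblob
        account = youraccount
        use_az = true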

      Anonymous

      If you want to access resources with public anonymous access then set account only. You can do this without making an rclone config:

      rclone lsf :azureblob,account=ACCOUNT:CONTAINER
    Standard options

      Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).

      --azureblob-account

      Azure Storage Account Name.

      @@ -28183,7 +29302,7 @@ container/
  • Type: string
  • Required: false

    Advanced options

      Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage).

      --azureblob-client-send-certificate-chain

      Send the certificate chain when using certificate auth.

      @@ -28233,6 +29352,18 @@ container/
  • Type: string
  • Required: false

    --azureblob-disable-instance-discovery

    Skip requesting Microsoft Entra instance metadata

    This should be set true only by applications authenticating in disconnected clouds, or private clouds such as Azure Stack.

    It determines whether rclone requests Microsoft Entra instance metadata from https://login.microsoft.com/ before authenticating.

    Setting this to true will skip this request, making you responsible for ensuring the configured authority is valid and trustworthy.

    Properties:

      --azureblob-use-msi

      Use a managed service identity to authenticate (only works in Azure).

      When true, use a managed service identity to authenticate to Azure Storage instead of a SAS token or account key.

      @@ -28284,6 +29415,18 @@ container/
  • Type: bool
  • Default: false

    --azureblob-use-az

    Use Azure CLI tool az for authentication

    Set to use the Azure CLI tool az as the sole means of authentication.

    Setting this can be useful if you wish to use the az CLI on a host with a System Managed Identity that you do not want to use.

    Don't set env_auth at the same time.

    Properties:

      --azureblob-endpoint

      Endpoint for the service.

      Leave blank normally.

      @@ -28505,7 +29648,7 @@ container/

      Also, if you want to access a storage emulator instance running on a different machine, you can override the endpoint parameter in the advanced settings, setting it to http(s)://<host>:<port>/devstoreaccount1 (e.g. http://10.254.2.5:10000/devstoreaccount1).

      Microsoft Azure Files Storage

      Paths are specified as remote: You may put subdirectories in too, e.g. remote:path/to/dir.

    Configuration

      Here is an example of making a Microsoft Azure Files Storage configuration. For a remote called remote. First run:

       rclone config

      This will guide you through an interactive setup process:

      @@ -28750,7 +29893,7 @@ y/e/d>

      If use_msi is set then managed service identity credentials are used. This authentication only works when running in an Azure service. env_auth needs to be unset to use this.

      However if you have multiple user identities to choose from these must be explicitly specified using exactly one of the msi_object_id, msi_client_id, or msi_mi_res_id parameters.

    If none of msi_object_id, msi_client_id, or msi_mi_res_id is set, this is equivalent to using env_auth.

    Standard options

      Here are the Standard options specific to azurefiles (Microsoft Azure Files).

      --azurefiles-account

      Azure Storage Account Name.

      @@ -28865,7 +30008,7 @@ y/e/d>
  • Type: string
  • Required: false

    Advanced options

      Here are the Advanced options specific to azurefiles (Microsoft Azure Files).

      --azurefiles-client-send-certificate-chain

      Send the certificate chain when using certificate auth.

      @@ -29040,7 +30183,7 @@ y/e/d>

      Microsoft OneDrive

      Paths are specified as remote:path

      Paths may be as deep as required, e.g. remote:directory/subdirectory.

    Configuration

      The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.

      Here is an example of how to make a remote called remote. First run:

       rclone config
      @@ -29152,6 +30295,17 @@ y/e/d> y
  • In the rclone config, set token_url to https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token.

    Note: If you have a special region, you may need a different host in step 4 and 5. Here are some hints.

    Using OAuth Client Credential flow

    OAuth Client Credential flow will allow rclone to use permissions directly associated with the Azure AD Enterprise application, rather than adopting the context of an Azure AD user account.

    This flow can be enabled by following the steps below:

    1. Create the Enterprise App registration in the Azure AD portal and obtain a Client ID and Client Secret as described above.
    2. Ensure that the application has the appropriate permissions and they are assigned as Application Permissions.
    3. Configure the remote, ensuring that Client ID and Client Secret are entered correctly.
    4. In the Advanced Config section, enter true for client_credentials and in the tenant section enter the tenant ID.

    Note that not every connection type works with the client credentials flow. In particular the "onedrive" option does not work. You can use the "sharepoint" option, or if that does not find the correct drive ID, type it in manually with the "driveid" option.

    NOTE Assigning permissions directly to the application means that anyone with the Client ID and Client Secret can access your OneDrive files. Take care to safeguard these credentials.
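    A hedged sketch of the resulting config section (the remote name and all IDs are illustrative placeholders):

        [onedrive-app]
        type = onedrive
        client_id = YOUR_CLIENT_ID
        client_secret = YOUR_CLIENT_SECRET
        tenant = YOUR_TENANT_ID
        client_credentials = true
        drive_id = YOUR_DRIVE_ID
        drive_type = business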

    Modification times and hashes

    OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

    OneDrive Personal, OneDrive for Business and Sharepoint Server support QuickXorHash.

    @@ -29265,7 +30419,7 @@ y/e/d> y

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    Deleting files

    Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.

    Standard options

    Here are the Standard options specific to onedrive (Microsoft OneDrive).

    --onedrive-client-id

    OAuth Client Id.

    @@ -29315,7 +30469,7 @@ y/e/d> y

    --onedrive-tenant

    ID of the service principal's tenant. Also called its directory ID.

    Set this if using the Client Credential flow.

    Properties:

    Advanced options

    Here are the Advanced options specific to onedrive (Microsoft OneDrive).

    --onedrive-token

    OAuth Access Token as a JSON blob.

    @@ -29346,6 +30510,16 @@ y/e/d> y
  • Type: string
  • Required: false
    --onedrive-client-credentials

    Use client credentials OAuth flow.

    This will use the OAUTH2 client Credentials Flow as described in RFC 6749.

    Properties:

    --onedrive-chunk-size

    Chunk size to upload files with - must be multiple of 320k (327,680 bytes).

    Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big." Note that the chunks will be buffered into memory.

    @@ -29660,75 +30834,75 @@ rclone rc vfs/refresh recursive=true

    Permissions are also supported, if --onedrive-metadata-permissions is set. The accepted values for --onedrive-metadata-permissions are "read", "write", "read,write", and "off" (the default). "write" supports adding new permissions, updating the "role" of existing permissions, and removing permissions. Updating and removing require the Permission ID to be known, so it is recommended to use "read,write" instead of "write" if you wish to update/remove permissions.

    Permissions are read/written in JSON format using the same schema as the OneDrive API, which differs slightly between OneDrive Personal and Business.

    Example for OneDrive Personal:

[
    {
        "id": "1234567890ABC!123",
        "grantedTo": {
            "user": {
                "id": "ryan@contoso.com"
            },
            "application": {},
            "device": {}
        },
        "invitation": {
            "email": "ryan@contoso.com"
        },
        "link": {
            "webUrl": "https://1drv.ms/t/s!1234567890ABC"
        },
        "roles": [
            "read"
        ],
        "shareId": "s!1234567890ABC"
    }
]

    Example for OneDrive Business:

[
    {
        "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
        "grantedToIdentities": [
            {
                "user": {
                    "displayName": "ryan@contoso.com"
                },
                "application": {},
                "device": {}
            }
        ],
        "link": {
            "type": "view",
            "scope": "users",
            "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
        },
        "roles": [
            "read"
        ],
        "shareId": "u!LKj1lkdlals90j1nlkascl"
    },
    {
        "id": "5D33DD65C6932946",
        "grantedTo": {
            "user": {
                "displayName": "John Doe",
                "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
            },
            "application": {},
            "device": {}
        },
        "roles": [
            "owner"
        ],
        "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
    }
]

    To write permissions, pass in a "permissions" metadata key using this same format. The --metadata-mapper tool can be very helpful for this.

    When adding permissions, an email address can be provided in the User.ID or DisplayName properties of grantedTo or grantedToIdentities. Alternatively, an ObjectID can be provided in User.ID. At least one valid recipient must be provided in order to add a permission for a user. Creating a Public Link is also supported, if Link.Scope is set to "anonymous".

    Example request to add a "read" permission with --metadata-mapper:

{
    "Metadata": {
        "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
    }
}

    Note that adding a permission can fail if a conflicting permission already exists for the file/folder.

    To update an existing permission, include both the Permission ID and the new roles to be assigned. roles is the only property that can be changed.

    To remove permissions, pass in a blob containing only the permissions you wish to keep (which can be empty, to remove all.) Note that the owner role will be ignored, as it cannot be removed.
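    For example, a hedged sketch of a permissions blob that updates one existing permission's role while keeping it (the Permission ID is illustrative):

        [
            {
                "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
                "roles": [
                    "write"
                ]
            }
        ]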

    @@ -29973,7 +31147,7 @@ ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader:

    OpenDrive

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -30118,7 +31292,7 @@ y/e/d> y

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    Standard options

    Here are the Standard options specific to opendrive (OpenDrive).

    --opendrive-username

    Username.

    @@ -30139,7 +31313,7 @@ y/e/d> y
  • Type: string
  • Required: true
    Advanced options

    Here are the Advanced options specific to opendrive (OpenDrive).

    --opendrive-encoding

    The encoding for the backend.

    @@ -30184,7 +31358,7 @@ y/e/d> y

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

    Sample command to transfer local artifacts to remote:bucket in oracle object storage:

    rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv

    Configuration

    Here is an example of making an oracle object storage configuration. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -30367,7 +31541,7 @@ provider = no_auth

    If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification if the object can be copied in a single part. In the case the object is larger than 5Gb, the object will be uploaded rather than copied.

    Note that reading this from the object takes an additional HEAD request as the metadata isn't returned in object listings.

    The MD5 hash algorithm is supported.

    Multipart uploads

    rclone supports multipart uploads with OOS which means that it can upload files bigger than 5 GiB.

    Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.

    rclone switches from single part uploads to multipart uploads at the point specified by --oos-upload-cutoff. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).

    @@ -30375,7 +31549,7 @@ provider = no_auth

    Multipart uploads will use --transfers * --oos-upload-concurrency * --oos-chunk-size extra memory. Single part uploads do not use extra memory.

    Single part transfers can be faster than multipart transfers or slower depending on your latency from oos - the more latency, the more likely single part transfers will be faster.

    Increasing --oos-upload-concurrency will increase throughput (8 would be a sensible value) and increasing --oos-chunk-size also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.

    Standard options

    Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).

    --oos-provider

    Choose your Auth Provider

    @@ -30428,14 +31602,15 @@ provider = no_auth
  • Required: true
  • --oos-compartment

    Specify compartment OCID, if you need to list buckets.

    List objects works without compartment OCID.

    Properties:

    --oos-region

    Object storage Region

    @@ -30490,7 +31665,7 @@ provider = no_auth

    Advanced options

    Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).

    --oos-storage-tier

    The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm

    @@ -30809,7 +31984,7 @@ if not.

    Mounting Buckets

    QingStor

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

    Configuration

    Here is an example of making a QingStor configuration. First run

    rclone config

    This will guide you through an interactive setup process.

    @@ -30880,7 +32055,7 @@ y/e/d> y
    rclone sync --interactive /home/local/directory remote:bucket

    --fast-list

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Multipart uploads

    rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5 GiB. Note that files uploaded with multipart upload don't have an MD5SUM.

    Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup remote:bucket for just one bucket, or rclone cleanup remote: for all buckets. QingStor does not ever remove incomplete multipart uploads so it may be necessary to run this from time to time.

    Buckets and Zone

    @@ -30905,7 +32080,7 @@ y/e/d> y

    Restricted filename characters

    The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    Standard options

    Here are the Standard options specific to qingstor (QingCloud Object Storage).

    --qingstor-env-auth

    Get QingStor credentials from runtime.

    @@ -30986,7 +32161,7 @@ y/e/d> y

    Advanced options

    Here are the Advanced options specific to qingstor (QingCloud Object Storage).

    --qingstor-connection-retries

    Number of connection retries.

    @@ -31059,7 +32234,7 @@ y/e/d> y

    Paths may be as deep as required, e.g., remote:directory/subdirectory.

    The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at https://<account>/profile/api-keys or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.

    See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer

    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -31152,7 +32327,7 @@ y/e/d> y

    For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all multipart uploads). Chunks are buffered in memory, and the minimal chunk size is 10_000_000 bytes by default; it can be changed in the advanced configuration, so increasing --transfers will increase the memory use. The chunk size has a maximum size limit, which is set to 100_000_000 bytes by default and can be changed in the advanced configuration. The size of the uploaded chunk will dynamically change depending on the upload speed. The total memory use equals the number of transfers multiplied by the minimal chunk size. In case there's free memory allocated for the upload (which equals the difference of maximal_summary_chunk_size and minimal_chunk_size * transfers), the chunk size may increase in case of high upload speed, and it can decrease in case of upload speed problems. If no free memory is available, all chunks will equal minimal_chunk_size.
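    As an illustrative worked example (assuming the defaults above and a maximal_summary_chunk_size of 100_000_000 bytes): with --transfers 4, baseline memory use is 4 * 10_000_000 = 40_000_000 bytes, leaving 100_000_000 - 40_000_000 = 60_000_000 bytes of free memory that chunks can grow into when the upload is fast.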

    Deleting files

    Files you delete with rclone will end up in Trash and be stored there for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account.

    Standard options

    Here are the Standard options specific to quatrix (Quatrix by Maytech).

    --quatrix-api-key

    API key for accessing Quatrix account

    @@ -31172,7 +32347,7 @@ y/e/d> y
  • Type: string
  • Required: true
    Advanced options

    Here are the Advanced options specific to quatrix (Quatrix by Maytech).

    --quatrix-encoding

    The encoding for the backend.

    @@ -31249,7 +32424,7 @@ y/e/d> y

    rclone interacts with Sia network by talking to the Sia daemon via HTTP API which is usually available on port 9980. By default you will run the daemon locally on the same computer so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980 making external access impossible).

However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:
• Ensure you have the Sia daemon installed directly or in a docker container, because Sia-UI does not support this mode natively.
• Run it on an externally accessible port, for example by providing the --api-addr :9980 and --disable-api-security arguments on the daemon command line.
• Enforce an API password for the siad daemon via the environment variable SIA_API_PASSWORD or a text file named apipassword in the daemon directory.
• Set the rclone backend option api_password, taking it from the above locations.

Notes:
1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via the command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with the environment variable SIA_WALLET_PASSWORD.
2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store it in the text file named apipassword under the YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure the password in rclone.
3. The only way to use siad without an API password is to run it on localhost with the command line argument --authorize-api=false, but this is insecure and strongly discouraged.
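As a non-interactive sketch of the last provision, assuming a daemon reachable at sia.daemon.host and an API password taken from one of the locations above (both values here are placeholders):

rclone config create mySia sia api_url=http://sia.daemon.host:9980 api_password=YOUR_API_PASSWORD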

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a sia remote called mySia. First, run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -31309,7 +32484,7 @@ y/e/d> y
  • Upload a local directory to the Sia directory called backup
  • rclone copy /home/source mySia:backup
    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to sia (Sia Decentralized Cloud).

    --sia-api-url

    Sia daemon API URL, like http://sia.daemon.host:9980.

    @@ -31332,7 +32507,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to sia (Sia Decentralized Cloud).

    --sia-user-agent

    Siad User Agent

    @@ -31382,7 +32557,7 @@ y/e/d> y
  • IBM Bluemix Cloud ObjectStorage Swift
Paths are specified as remote:container (or remote: for the lsd command). You may put subdirectories in too, e.g. remote:container/path/to/dir.

    -

    Configuration

    +

    Configuration

    Here is an example of making a swift configuration. First run

    rclone config

    This will guide you through an interactive setup process.

    @@ -31548,7 +32723,7 @@ rclone lsd myremote:

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).

    --swift-env-auth

    Get swift credentials from environment variables in standard OpenStack form.

    @@ -31786,7 +32961,7 @@ rclone lsd myremote: -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).

    --swift-leave-parts-on-error

    If true avoid calling abort upload on a failure.

    @@ -31910,7 +33085,7 @@ rclone lsd myremote:

    pCloud

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -31932,6 +33107,10 @@ Pcloud App Client Id - leave blank normally. client_id> Pcloud App Client Secret - leave blank normally. client_secret> +Edit advanced config? +y) Yes +n) No (default) +y/n> n Remote config Use web browser to automatically authenticate rclone with remote? * Say Y if the machine running rclone has a web browser you can use @@ -31956,6 +33135,7 @@ e) Edit this remote d) Delete this remote y/e/d> y

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    +

Note that if you are using remote config with rclone authorize while your pCloud account is in the EU region, you will need to set the hostname in 'Edit advanced config', otherwise you might get a token error.
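For example, for an EU-region account the resulting config section would contain the EU API endpoint (eapi.pcloud.com), along these lines:

[remote]
type = pcloud
hostname = eapi.pcloud.com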

Note that rclone runs a webserver on your local machine to collect the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    Once configured you can then use rclone like this,

    List directories in top level of your pCloud

    @@ -31996,7 +33176,7 @@ y/e/d> y

    However you can set this to restrict rclone to a specific folder hierarchy.

    In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the folder field of the URL when you open the relevant folder in the pCloud web interface.

    So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the config.
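Putting that together, a sketch of the resulting config section using the placeholder ID from the URL above:

[remote]
type = pcloud
root_folder_id = 5xxxxxxxx8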

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to pcloud (Pcloud).

    --pcloud-client-id

    OAuth Client Id.

    @@ -32018,7 +33198,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to pcloud (Pcloud).

    --pcloud-token

    OAuth Access Token as a JSON blob.

    @@ -32049,6 +33229,16 @@ y/e/d> y
  • Type: string
  • Required: false
  • +

    --pcloud-client-credentials

    +

    Use client credentials OAuth flow.

    +

This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.

    +

    Properties:

    +

    --pcloud-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

    @@ -32121,7 +33311,7 @@ y/e/d> y

    PikPak

    PikPak is a private cloud drive.

    Paths are specified as remote:path, and may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    Here is an example of making a remote for PikPak.

    First run:

     rclone config
    @@ -32177,7 +33367,7 @@ y/e/d> y

    Modification times and hashes

PikPak keeps modification times on objects, and updates them when uploading objects, but it does not support changing only the modification time.

    The MD5 hash algorithm is supported.
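For example, MD5 hashes can be listed with the standard hashing command:

rclone md5sum remote:path

and rclone check will use them to verify transfers where possible.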

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to pikpak (PikPak).

    --pikpak-user

    Pikpak username.

    @@ -32198,56 +33388,26 @@ y/e/d> y
  • Type: string
  • Required: true
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to pikpak (PikPak).

    -

    --pikpak-client-id

    -

    OAuth Client Id.

    -

    Leave blank normally.

    +

    --pikpak-device-id

    +

    Device ID used for authorization.

    Properties:

    -

    --pikpak-client-secret

    -

    OAuth Client Secret.

    -

    Leave blank normally.

    +

    --pikpak-user-agent

    +

    HTTP user agent for pikpak.

    +

    Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on command line.

    Properties:

    -

    --pikpak-token

    -

    OAuth Access Token as a JSON blob.

    -

    Properties:

    - -

    --pikpak-auth-url

    -

    Auth server URL.

    -

    Leave blank to use the provider defaults.

    -

    Properties:

    - -

    --pikpak-token-url

    -

    Token server url.

    -

    Leave blank to use the provider defaults.

    -

    Properties:

    -

    --pikpak-root-folder-id

    ID of the root folder. Leave blank normally.

    @@ -32279,6 +33439,16 @@ y/e/d> y
  • Type: bool
  • Default: false
  • + +

    Use original file links instead of media links.

    +

    This avoids issues caused by invalid media links, but may reduce download speeds.

    +

    Properties:

    +

    --pikpak-hash-memory-limit

    Files bigger than this will be cached on disk to calculate hash if required.

    Properties:

    @@ -32439,7 +33609,7 @@ e/n/d/r/c/s/q> q

    rclone lsf Pixeldrain: --dirs-only -Fpi

    This will print directories in your Pixeldrain home directory and their public IDs.

    Enter this directory ID in the rclone config and you will be able to access the directory.
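A sketch of the resulting config section, using a hypothetical directory ID abc123 taken from the listing above:

[Pixeldrain]
type = pixeldrain
api_key = YOUR_API_KEY
root_folder_id = abc123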

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to pixeldrain (Pixeldrain Filesystem).

    --pixeldrain-api-key

    API key for your pixeldrain account. Found on https://pixeldrain.com/user/api_keys.

    @@ -32460,7 +33630,7 @@ e/n/d/r/c/s/q> q
  • Type: string
  • Default: "me"
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to pixeldrain (Pixeldrain Filesystem).

    --pixeldrain-api-url

    The API endpoint to connect to. In the vast majority of cases it's fine to leave this at default. It is only intended to be changed for testing purposes.

    @@ -32528,7 +33698,7 @@ e/n/d/r/c/s/q> q

    premiumize.me

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -32605,7 +33775,7 @@ y/e/d>

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to premiumizeme (premiumize.me).

    --premiumizeme-client-id

    OAuth Client Id.

    @@ -32637,7 +33807,7 @@ y/e/d>
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to premiumizeme (premiumize.me).

    --premiumizeme-token

    OAuth Access Token as a JSON blob.

    @@ -32668,6 +33838,16 @@ y/e/d>
  • Type: string
  • Required: false
  • +

    --premiumizeme-client-credentials

    +

    Use client credentials OAuth flow.

    +

This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.

    +

    Properties:

    +

    --premiumizeme-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

    @@ -32760,7 +33940,7 @@ y/e/d> y

    Please set your mailbox password in the advanced config section.

    Caching

The cache is currently built for the case where rclone is the only instance performing operations on the mount point. The event system, which is the Proton API mechanism that provides visibility of what has changed on the drive, is yet to be implemented, so updates from other clients won't be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, the cache may serve stale data.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to protondrive (Proton Drive).

    --protondrive-username

    The username of your proton account

    @@ -32792,7 +33972,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to protondrive (Proton Drive).

    --protondrive-mailbox-password

    The mailbox password of your two-password proton account.

    @@ -32912,7 +34092,7 @@ y/e/d> y

    put.io

    Paths are specified as remote:path

    put.io paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -32996,7 +34176,7 @@ e/n/d/r/c/s/q> q

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to putio (Put.io).

    --putio-client-id

    OAuth Client Id.

    @@ -33018,7 +34198,7 @@ e/n/d/r/c/s/q> q
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to putio (Put.io).

    --putio-token

    OAuth Access Token as a JSON blob.

    @@ -33049,6 +34229,16 @@ e/n/d/r/c/s/q> q
  • Type: string
  • Required: false
  • +

    --putio-client-credentials

    +

    Use client credentials OAuth flow.

    +

This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.

    +

    Properties:

    +

    --putio-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

    @@ -33140,7 +34330,7 @@ y/e/d> y

    Please set your mailbox password in the advanced config section.

    Caching

The cache is currently built for the case where rclone is the only instance performing operations on the mount point. The event system, which is the Proton API mechanism that provides visibility of what has changed on the drive, is yet to be implemented, so updates from other clients won't be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, the cache may serve stale data.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to protondrive (Proton Drive).

    --protondrive-username

    The username of your proton account

    @@ -33172,7 +34362,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to protondrive (Proton Drive).

    --protondrive-mailbox-password

    The mailbox password of your two-password proton account.

    @@ -33291,7 +34481,7 @@ y/e/d> y

The Proton-API-Bridge attempts to bridge the gap so that rclone can be built on top of it quickly. This codebase handles the intricate tasks before and after calling Proton APIs, particularly the complex encryption scheme, allowing developers to implement features for other software on top of this codebase. There are likely quite a few errors in this library, as there isn't official documentation available.

    Seafile

This is a backend for the Seafile storage service:
• It works with both the free community edition and the professional edition.
• Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
• Encrypted libraries are also supported.
• It supports 2FA-enabled users.
• Using a Library API Token is not supported.

    -

    Configuration

    +

    Configuration

There are two distinct modes in which you can set up your remote:
• You point your remote to the root of the server, meaning you don't specify a library during the configuration. Paths are specified as remote:library. You may put subdirectories in too, e.g. remote:library/path/to/dir.
• You point your remote to a specific library during the configuration. Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode.)

    Configuration in root mode

    Here is an example of making a seafile configuration for a user with no two-factor authentication. First run

    @@ -33492,7 +34682,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/

It has been actively developed using the seafile docker image of these versions:
• 6.3.4 community edition
• 7.0.5 community edition
• 7.1.3 community edition
• 9.0.10 community edition

    Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.

    Each new version of rclone is automatically tested against the latest docker image of the seafile community server.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to seafile (seafile).

    --seafile-url

    URL of seafile host to connect to.

    @@ -33568,7 +34758,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to seafile (seafile).

    --seafile-create-library

    Should rclone create a library if it doesn't exist.

    @@ -33609,7 +34799,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/

Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory. For example, rclone lsd remote: would list the home directory of the user configured in the rclone remote config (i.e. /home/sftpuser), whereas rclone lsd remote:/ would list the root directory of the remote machine (i.e. /).

Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net and Hetzner, on the other hand, require users to OMIT the leading /.

    Note that by default rclone will try to execute shell commands on the server, see shell access considerations.

    -

    Configuration

    +

    Configuration

    Here is an example of making an SFTP configuration. First run

    rclone config

    This will guide you through an interactive setup process.

    @@ -33688,7 +34878,7 @@ y/e/d> y

    If you set the ask_password option, rclone will prompt for a password when needed and no password has been configured.

    Certificate-signed keys

    With traditional key-based authentication, you configure your private key only, and the public key built into it will be used during the authentication process.

    -

    If you have a certificate you may use it to sign your public key, creating a separate SSH user certificate that should be used instead of the plain public key extracted from the private key. Then you must provide the path to the user certificate public key file in pubkey_file.

    +

    If you have a certificate you may use it to sign your public key, creating a separate SSH user certificate that should be used instead of the plain public key extracted from the private key. Then you must provide the path to the user certificate public key file in pubkey_file or the content of the file in pubkey.

    Note: This is not the traditional public key paired with your private key, typically saved as /home/$USER/.ssh/id_rsa.pub. Setting this path in pubkey_file will not work.

    Example:

    [remote]
    @@ -33753,7 +34943,7 @@ known_hosts_file = ~/.ssh/known_hosts

    About command

The about command returns the total space, free space, and used space for the disk of the specified path on the remote, or, if no path is set, for the disk of the remote's root.

    SFTP usually supports the about command, but it depends on the server. If the server implements the vendor-specific VFS statistics extension, which is normally the case with OpenSSH instances, it will be used. If not, but the same login has access to a Unix shell, where the df command is available (e.g. in the remote's PATH), then this will be used instead. If the server shell is PowerShell, probably with a Windows OpenSSH server, rclone will use a built-in shell command (see shell access). If none of the above is applicable, about will fail.
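For example:

rclone about remote:

will print the total, used and free space if one of the mechanisms described above is available.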

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to sftp (SSH/SFTP).

    --sftp-host

    SSH host to connect to.

    @@ -33829,6 +35019,15 @@ known_hosts_file = ~/.ssh/known_hosts
  • Type: string
  • Required: false
  • +

    --sftp-pubkey

    +

SSH public certificate for public certificate based authentication. Set this if you have a signed certificate you want to use for authentication. If specified, this will override pubkey_file.

    +

    Properties:

    +

    --sftp-pubkey-file

    Optional path to public key file.

    Set this if you have a signed certificate you want to use for authentication.

    @@ -33908,7 +35107,7 @@ known_hosts_file = ~/.ssh/known_hosts
  • Type: SpaceSepList
  • Default:
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to sftp (SSH/SFTP).

    --sftp-known-hosts-file

    Optional path to known_hosts file.

    @@ -34247,13 +35446,13 @@ server_command = sudo /usr/libexec/openssh/sftp-server

    See Hetzner's documentation for details

    SMB

SMB is a communication protocol for sharing files over a network.

    -

This relies on the go-smb2 library for communication with the SMB protocol.

    +

This relies on the go-smb2 library for communication with the SMB protocol.

Paths are specified as remote:sharename (or remote: for the lsd command). You may put subdirectories in too, e.g. remote:item/path/to/dir.

    Notes

The first path segment must be the name of the share, which you entered when you started to share on Windows. On smbd, it's the section title in the smb.conf file (usually in /etc/samba/). You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:).

You can't access shared printers from rclone, obviously.

You can't use Anonymous access for logging in. You have to use the guest user with an empty password instead. The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods. Alternatively, the local backend on Windows can access SMB servers using UNC paths, e.g. \\server\share. This doesn't apply to non-Windows OSes, such as Linux and macOS.

    -

    Configuration

    +

    Configuration

    Here is an example of making a SMB configuration.

    First run

    rclone config
    @@ -34328,7 +35527,7 @@ y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> d -

    Standard options

    +

    Standard options

    Here are the Standard options specific to smb (SMB / CIFS).

    --smb-host

    SMB server hostname to connect to.

    @@ -34389,7 +35588,7 @@ y/e/d> d
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to smb (SMB / CIFS).

    --smb-idle-timeout

    Max time before closing idle connections.

    @@ -34500,7 +35699,7 @@ y/e/d> d
  • S3 backend: secret encryption key is shared with the gateway
  • -

    Configuration

    +

    Configuration

To make a new Storj configuration you need one of the following:
• An Access Grant that someone else shared with you.
• An API Key of a Storj project you are a member of.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -34597,7 +35796,7 @@ y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y -

    Standard options

    +

    Standard options

    Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).

    --storj-provider

    Choose an authentication method.

    @@ -34676,7 +35875,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to storj (Storj Decentralized Cloud Storage).

    --storj-description

    Description of the remote.

    @@ -34749,7 +35948,7 @@ y/e/d> y

To fix these, please raise your system limits. You can do this by issuing ulimit -n 65536 just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc, or change the system-wide configuration, usually /etc/sysctl.conf and/or /etc/security/limits.conf, but please refer to your operating system manual.

    SugarSync

    SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.

    -

    Configuration

    +

    Configuration

    The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -34822,7 +36021,7 @@ y/e/d> y

    Deleting files

    Deleted files will be moved to the "Deleted items" folder by default.

    However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to sugarsync (Sugarsync).

    --sugarsync-app-id

    Sugarsync App ID.

    @@ -34863,7 +36062,7 @@ y/e/d> y
  • Type: bool
  • Default: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to sugarsync (Sugarsync).

    --sugarsync-refresh-token

    Sugarsync refresh token.

    @@ -34951,7 +36150,7 @@ y/e/d> y

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    The initial setup for Uloz.to involves filling in the user credentials. rclone config walks you through it.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -35044,7 +36243,7 @@ y/e/d> y

    In order to do this you will have to find the Folder slug of the folder you wish to use as root. This will be the last segment of the URL when you open the relevant folder in the Uloz.to web interface.

    For example, for exploring a folder with URL https://uloz.to/fm/my-files/foobar, foobar should be used as the root slug.

    root_folder_slug can be used alongside a specific path in the remote path. For example, if your remote's root_folder_slug corresponds to /foo/bar, remote:baz/qux will refer to ABSOLUTE_ULOZTO_ROOT/foo/bar/baz/qux.
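For example, a sketch of a config section pinned to the folder from the example above (credentials omitted):

[remote]
type = ulozto
root_folder_slug = foobar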

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to ulozto (Uloz.to).

    --ulozto-app-token

    The application token identifying the app. An app API key can be either found in the API doc https://uloz.to/upload-resumable-api-beta or obtained from customer service.

    @@ -35074,7 +36273,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to ulozto (Uloz.to).

    --ulozto-root-folder-slug

    If set, rclone will use this folder as the root folder for all operations. For example, if the slug identifies 'foo/bar/', 'ulozto:baz' is equivalent to 'ulozto:foo/bar/baz' without any root slug set.

    @@ -35123,7 +36322,7 @@ y/e/d> y

This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long term storage.

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

To configure an Uptobox backend you'll need your personal API token. You'll find it in your account settings.

    Here is an example of how to make a remote called remote with the default setup. First run:

    rclone config
    @@ -35203,7 +36402,7 @@ y/e/d>

    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to uptobox (Uptobox).

    --uptobox-access-token

    Your access token.

    @@ -35215,7 +36414,7 @@ y/e/d>
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to uptobox (Uptobox).

    --uptobox-private

    Set to make uploaded files private

    @@ -35259,7 +36458,7 @@ y/e/d>

    Subfolders can be used in upstream remotes. Assume a union remote named backup with the remotes mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.

    There is no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a union called remote for local folders. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -35492,7 +36691,7 @@ upstreams = /local:writeback remote:dir

    When files are written, they will be written to both remote:dir and /local.

    As many remotes as desired can be added to upstreams but there should only be one :writeback tag.

    Rclone does not manage the :writeback remote in any way other than writing files back to it. So if you need to expire old files or manage the size then you will have to do this yourself.
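One sketch of managing it yourself, using the /local writeback path from the example above: run a periodic job that expires files older than 30 days, e.g.

rclone delete --min-age 30d /local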

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to union (Union merges the contents of several upstream fs).

    --union-upstreams

    List of space separated upstreams.

    @@ -35541,7 +36740,7 @@ upstreams = /local:writeback remote:dir
  • Type: int
  • Default: 120
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to union (Union merges the contents of several upstream fs).

    --union-min-free-space

    Minimum viable free space for lfs/eplfs policies.

    @@ -35568,7 +36767,7 @@ upstreams = /local:writeback remote:dir

    WebDAV

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -35645,7 +36844,7 @@ y/e/d> y

    Modification times and hashes

Plain WebDAV does not support modified times. However, when used with Fastmail Files, Owncloud or Nextcloud, rclone will support modified times.

Likewise, plain WebDAV does not support hashes; however, when used with Fastmail Files, Owncloud or Nextcloud, rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud, hashes may appear on all objects, or only on objects which had a hash uploaded with them.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to webdav (WebDAV).

    --webdav-url

    URL of http host to connect to.

    @@ -35726,7 +36925,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to webdav (WebDAV).

    --webdav-bearer-token-command

    Command to run to get a bearer token.

    @@ -35808,6 +37007,18 @@ y/e/d> y
  • Type: string
  • Required: false
  • +

    --webdav-auth-redirect

    +

    Preserve authentication on redirect.

    +

    If the server redirects rclone to a new domain when it is trying to read a file then normally rclone will drop the Authorization: header from the request.

    +

    This is standard security practice to avoid sending your credentials to an unknown webserver.

    +

However, preserving the header is desirable in some circumstances. If you are getting an error like "401 Unauthorized" when rclone is attempting to read files from the webdav server then you can try this option.
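Assuming the usual mapping of the flag to a config key, a minimal sketch of a config with the option enabled (the URL is a placeholder):

[remote]
type = webdav
url = https://example.com/dav
vendor = other
auth_redirect = true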

    +

    Properties:

    +

    --webdav-description

    Description of the remote.

    Properties:

    @@ -35895,7 +37106,7 @@ vendor = other bearer_token_command = oidc-token XDC

    Yandex Disk

    Yandex Disk is a cloud storage solution created by Yandex.

    -

    Configuration

    +

    Configuration

    Here is an example of making a yandex configuration. First run

    rclone config

    This will guide you through an interactive setup process:

    @@ -35960,7 +37171,7 @@ y/e/d> y

    Restricted filename characters

    The default restricted characters set are replaced.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to yandex (Yandex Disk).

    --yandex-client-id

    OAuth Client Id.

    @@ -35982,7 +37193,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to yandex (Yandex Disk).

    --yandex-token

    OAuth Access Token as a JSON blob.

    @@ -36013,6 +37224,16 @@ y/e/d> y
  • Type: string
  • Required: false
  • +

    --yandex-client-credentials

    +

    Use client credentials OAuth flow.

    +

This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.

    +

    Properties:

    +

    --yandex-hard-delete

    Delete files permanently rather than putting them into the trash.

    Properties:

    @@ -36056,7 +37277,7 @@ y/e/d> y
    [403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.

Zoho WorkDrive

    Zoho WorkDrive is a cloud storage solution created by Zoho.

    -

    Configuration

    +

    Configuration

    Here is an example of making a zoho configuration. First run

    rclone config

    This will guide you through an interactive setup process:

    @@ -36136,7 +37357,7 @@ y/e/d>

    To view your current quota you can use the rclone about remote: command which will display your current usage.

    Restricted filename characters

    Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to zoho (Zoho).

    --zoho-client-id

    OAuth Client Id.

    @@ -36195,7 +37416,7 @@ y/e/d> -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to zoho (Zoho).

    --zoho-token

    OAuth Access Token as a JSON blob.

    @@ -36226,6 +37447,25 @@ y/e/d>
  • Type: string
  • Required: false
  • +

    --zoho-client-credentials

    +

    Use client credentials OAuth flow.

    +

This will use the OAuth 2.0 Client Credentials Flow as described in RFC 6749.

    +

    Properties:

    + +

    --zoho-upload-cutoff

    +

Cutoff for switching to the large file upload API (>= 10 MiB).

    +

    Properties:

    +

    --zoho-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

    @@ -36257,7 +37497,7 @@ y/e/d>

    Local paths are specified as normal filesystem paths, e.g. /path/to/wherever, so

    rclone sync --interactive /home/source /tmp/destination

    Will sync /home/source to /tmp/destination.

    -

    Configuration

    +

    Configuration

For consistency's sake one can also configure a remote of type local in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not to.

    Modification times

Rclone reads and writes the modification times using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.

    @@ -36568,9 +37808,9 @@ nounc = true 6 two/three 6 b/two 6 b/one - +

    Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).

    -

    If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a '.rclonelink' suffix in the remote storage.

    +

    If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a .rclonelink suffix in the remote storage.

    The text file will contain the target of the symbolic link (see example).

    This flag applies to all commands.

    For example, supposing you have a directory structure like this

    @@ -36580,7 +37820,7 @@ nounc = true └── file2 -> /home/user/file3

    Copying the entire directory with '-l'

    $ rclone copy -l /tmp/a/ remote:/tmp/a/
    -

    The remote files are created with a '.rclonelink' suffix

    +

    The remote files are created with a .rclonelink suffix

    $ rclone ls remote:/tmp/a
            5 file1.rclonelink
           14 file2.rclonelink
    @@ -36610,6 +37850,7 @@ $ tree /tmp/b $ tree /tmp/c /tmp/c └── file1 -> ./file4 +

    Note that --local-links just enables this feature for the local backend. --links and -l enable the feature for all supported backends and the VFS.

Note that this flag is incompatible with --copy-links / -L.

    Restricting filesystems with --one-file-system

    Normally rclone will recurse through filesystems as mounted.

    @@ -36633,7 +37874,7 @@ $ tree /tmp/c 0 file2

    NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.

    NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.

    -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to local (Local Disk).

    --local-nounc

    Disable UNC (long path names) conversion on Windows.

    @@ -36660,8 +37901,8 @@ $ tree /tmp/c
  • Type: bool
  • Default: false
  • - -

    Translate symlinks to/from regular files with a '.rclonelink' extension.

    + +

    Translate symlinks to/from regular files with a '.rclonelink' extension for the local backend.

    Properties:

    Changelog

    +

    v1.69.0 - 2025-01-12

    +

    See commits

    + +

    v1.68.2 - 2024-11-15

    +

    See commits

    + +

    v1.68.1 - 2024-09-24

    +

    See commits

    +

    v1.68.0 - 2024-09-08

    See commits