Source and destination paths are specified by the name you gave the storage system in the config file, followed by the sub path, e.g. "drive:myfolder" to look at "myfolder" in Google drive.
You can define as many storage paths as you like in the config file.
Please use the --interactive/-i flag while learning rclone to avoid accidental data loss.
Subcommands
rclone uses a system of subcommands. For example
rclone ls remote:path # lists a remote
rclone copy /local/path remote:path # copies /local/path to the remote
rclone sync --interactive /local/path remote:path # syncs /local/path to the remote
rclone config
Enter an interactive configuration session.
Synopsis
Synopsis
Sync the source to the destination, changing the destination only. Doesn't transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary (except duplicate objects, see below). If you don't want to delete files from destination, use the copy command instead.
Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.
Note that files in the destination won't be deleted if there were any errors at any point. Duplicate objects (files with the same name, on those providers that support it) are also not yet handled.
It is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command if unsure.
If dest:path doesn't exist, it is created and the source:path contents go there.
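For example, a typical invocation might look like this (SOURCE and remote:DESTINATION stand for your own paths):

rclone sync --interactive SOURCE remote:DESTINATION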
Remote authorization.
Synopsis
Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.
Use --auth-no-open-browser to prevent rclone from automatically opening the auth link in the default browser.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
rclone authorize [flags]
Options
--auth-no-open-browser Do not automatically open auth link in default browser
-h, --help help for authorize
--template string The path to a custom Go template for generating HTML responses
The FUSE emulation layer on Windows must convert between the POSIX-based permission model used in FUSE, and the permission model used in Windows, based on access-control lists (ACL).
The mounted filesystem will normally get three entries in its access-control list (ACL), representing permissions for the POSIX permission scopes: Owner, group and others. By default, the owner and group will be taken from the current user, and the built-in group "Everyone" will be used to represent others. The user/group can be customized with FUSE options "UserName" and "GroupName", e.g. -o UserName=user123 -o GroupName="Authenticated Users". The permissions on each entry will be set according to options --dir-perms and --file-perms, which take a value in traditional Unix numeric notation.

The default permissions correspond to --file-perms 0666 --dir-perms 0777, i.e. read and write permissions to everyone. This means you will not be able to start any programs from the mount. To be able to do that you must add execute permissions, e.g. --file-perms 0777 --dir-perms 0777 to add it to everyone. If the program needs to write files, chances are you will have to enable VFS File Caching as well (see also limitations). Note that the default write permission has some restrictions for accounts other than the owner, specifically it lacks the "write extended attributes" permission, as explained next.

The mapping of permissions is not always trivial, and the result you see in Windows Explorer may not be exactly what you expect. For example, when setting a value that includes write access for the group or others scope, this will be mapped to the individual permissions "write attributes", "write data" and "append data", but not "write extended attributes". Windows will then show this as the basic permission "Special" instead of "Write", because "Write" also covers the "write extended attributes" permission. When setting digit 0 for group or others, to indicate no permissions, they will still get the individual permissions "read attributes", "read extended attributes" and "read permissions". This is done for compatibility reasons, e.g. to allow users without additional permissions to be able to read basic metadata about files, like in Unix.

WinFsp 2021 (version 1.9) introduced a new FUSE option "FileSecurity", which allows the complete specification of file security descriptors using SDDL. With this you get detailed control of the resulting permissions, compared to use of the POSIX permissions described above, and no additional permissions will be added automatically for compatibility with Unix. Some example use cases follow.

If you set POSIX permissions for only allowing access to the owner, using --file-perms 0600 --dir-perms 0700, the user group and the built-in "Everyone" group will still be given some special permissions, as described above. Some programs may then (incorrectly) interpret this as the file being accessible by everyone, for example an SSH client may warn about "unprotected private key file". You can work around this by specifying -o FileSecurity="D:P(A;;FA;;;OW)", which sets file all access (FA) to the owner (OW), and nothing else.

When setting write permissions then, except for the owner, this does not include the "write extended attributes" permission, as mentioned above. This may prevent applications from writing to files, giving a permission denied error instead. To set working write permissions for the built-in "Everyone" group, similar to what it gets by default but with the addition of the "write extended attributes", you can specify -o FileSecurity="D:P(A;;FRFW;;;WD)", which sets file read (FR) and file write (FW) to everyone (WD). If file execute (FX) is also needed, then change to -o FileSecurity="D:P(A;;FRFWFX;;;WD)", or set file all access (FA) to get full access permissions, including delete, with -o FileSecurity="D:P(A;;FA;;;WD)".
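Putting this together, a Windows mount that restricts access to the owner only might look like the following sketch (the drive letter X: and remote:path are placeholders):

rclone mount remote:path X: -o FileSecurity="D:P(A;;FA;;;OW)"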
Windows caveats
Drives created as Administrator are not visible to other accounts, not even an account that was elevated to Administrator with the User Account Control (UAC) feature. A result of this is that if you mount to a drive letter from a Command Prompt run as Administrator, and then try to access the same drive from Windows Explorer (which does not run as Administrator), you will not be able to see the mounted drive.
If you don't need to access the drive from applications running with administrative privileges, the easiest way around this is to always create the mount from a non-elevated command prompt.
To make mapped drives available to the user account that created them regardless if elevated or not, there is a special Windows setting called linked connections that can be enabled.
It is also possible to make a drive mount available to everyone on the system, by running the process creating it as the built-in SYSTEM account. There are several ways to do this: One is to use the command-line utility PsExec, from Microsoft's Sysinternals suite, which has option -s to start processes as the SYSTEM account. Another alternative is to run the mount command from a Windows Scheduled Task, or a Windows Service, configured to run as the SYSTEM account. A third alternative is to use the WinFsp.Launcher infrastructure. Read more in the install documentation. Note that when running rclone as another user, it will not use the configuration file from your profile unless you tell it to with the --config option. Note also that it is now the SYSTEM account that will have the owner permissions, and other accounts will have permissions according to the group or others scopes. As mentioned above, these will then not get the "write extended attributes" permission, and this may prevent writing to files. You can work around this with the FileSecurity option, see example above.
Note that mapping to a directory path, instead of a drive letter, does not suffer from the same limitations.
Mounting on macOS

Mounting on macOS can be done either via macFUSE (also known as osxfuse) or FUSE-T. macFUSE is a traditional FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which "mounts" via an NFSv4 local server.

FUSE-T Limitations, Caveats, and Notes

There are some limitations, caveats, and notes about how it works. These are current as of FUSE-T version 1.0.14.

File access and modification times cannot be set separately, as this seems to be an issue with the NFS client, which always modifies both. This can be reproduced with the 'touch -m' and 'touch -a' commands. It means that viewing files with various tools, notably macOS Finder, will cause rclone to update the modification time of the file. This may make rclone upload a full new copy of the file.
Unicode Normalization
Rclone includes flags for unicode normalization with macFUSE that should be updated for FUSE-T. See this forum post and FUSE-T issue #16. The following flag should be added to the rclone mount command.
-o modules=iconv,from_code=UTF-8,to_code=UTF-8
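For example, a full mount command with this flag might look like the following sketch (the mount point is a placeholder):

rclone mount remote: /path/to/mountpoint -o modules=iconv,from_code=UTF-8,to_code=UTF-8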
Read Only mounts
When mounting with --read-only, attempts to write to files will fail silently, rather than with a clear warning as in macFUSE.
Limitations
Without the use of --vfs-cache-mode this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without --vfs-cache-mode writes or --vfs-cache-mode full. See the VFS File Caching section for more info.
The bucket-based remotes (e.g. Swift, S3, Google Cloud Storage, B2) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
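For example, a sketch (the remote and mount point are placeholders):

rclone mount remote:path /path/to/mountpoint --vfs-used-is-size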
Auth Proxy

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together; if --auth-proxy is set, the authorized keys option will be ignored.

There is an example program bin/test_proxy.py in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
This config generated must have this extra parameter:
- _root - root to use for the backend

And it may have this parameter:
- _obscure - comma separated strings for parameters to obscure
If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
{
    "user": "me",
    "pass": "mypassword"
}
If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
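{
    "user": "me",
    "public_key": "AAAAB3NzaC1yc2EAAAADAQAB...example key, abbreviated..."
}

(The public_key value is abbreviated here; in practice it is the client's full base64-encoded key.)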
This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).
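For illustration, the proxy output for the password example above might look similar to this (sftp.example.com is a placeholder host):

{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}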
The program can manipulate the supplied user in any way, for example to proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.
Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve http remote:path [flags]
Options
--addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
--cert string TLS PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
Auth Proxy
If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together; if --auth-proxy is set, the authorized keys option will be ignored.
There is an example program bin/test_proxy.py in the rclone source code.
This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.
If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1". Use the hashsum command to see the full list.
Access WebDAV on Windows

A WebDAV shared folder can be mapped as a drive on Windows, however the default settings prevent it. Windows will fail to connect to a server using insecure Basic authentication. It will not even display any login dialog. Windows requires an SSL / HTTPS connection to be used with Basic. If you try to connect via the Add Network Location Wizard you will get the following error: "The folder you entered does not appear to be valid. Please choose another". However, you can still connect if you set the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel on the client machine to 2. The BasicAuthLevel can be set to the following values:
0 - Basic authentication disabled
1 - Basic authentication enabled for SSL connections only
2 - Basic authentication enabled for SSL connections and for non-SSL connections
If required, increase the FileSizeLimitInBytes to a higher value. Navigate to the Services interface, then restart the WebClient service.
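One way to set this key is from an elevated command prompt, for example (a sketch):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v BasicAuthLevel /t REG_DWORD /d 2 /f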
Access Office applications on WebDAV

Navigate to the registry key HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet and create a new DWORD BasicAuthLevel with value 2.
0 - Basic authentication disabled
1 - Basic authentication enabled for SSL connections only
2 - Basic authentication enabled for SSL and for non-SSL connections
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
Auth Proxy
If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together; if --auth-proxy is set, the authorized keys option will be ignored.
There is an example program bin/test_proxy.py in the rclone source code.
Set the modification time on file(s) as specified by remote:path to have the current time.
If remote:path does not exist then a zero sized file will be created, unless --no-create or --recursive is provided.
If --recursive is used then it recursively sets the modification time on all existing files that are found under the path. Filters are supported, and you can test with the --dry-run or the --interactive/-i flag.
If --timestamp is used then sets the modification time to that time instead of the current time. Times may be specified as one of:
DEBUG : :s3: detected overridden config - adding "{YTu53}" suffix to name
Valid remote names
Remote names are case sensitive, and must adhere to the following rules:
- May contain number, letter, _, -, ., +, @ and space.
- May not start with - or space.
- May not end with space.
Starting with rclone version 1.61, any Unicode numbers and letters are allowed, while in older versions it was limited to plain ASCII (0-9, A-Z, a-z). If you use the same rclone configuration from different shells, which may be configured with different character encoding, you must be cautious to use characters that are possible to write in all of them. This is mostly a problem on Windows, where the console traditionally uses a non-Unicode character set - defined by the so-called "code page".
Quoting and the shell
When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.
rclone uses : to mark a remote name. This is, however, a valid filename component in non-Windows OSes. The remote name parser will only search for a : up to the first / so if you need to act on a file or directory like this then use the full path starting with a /, or use ./ as a current directory prefix.
So to sync a directory called sync:me to a remote called remote: use
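rclone sync --interactive ./sync:me remote:path

or

rclone sync --interactive /full/path/to/sync:me remote:path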
Most remotes (but not all - see the overview) support server-side copy.
This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.
Server side copies are used with sync and copy and will be identified in the log when using the -v flag. The move command may also use them if remote doesn't support server-side move directly. This is done by issuing a server-side copy then a delete which is much quicker than a download and re-upload.
Server side copies will only be attempted if the remote names are the same.
This can be used when scripting to make aged backups efficiently, e.g.
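rclone sync --interactive remote:current-backup remote:previous-backup
rclone sync --interactive /path/to/files remote:current-backup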
Metadata is data about a file which isn't the contents of the file. Normally rclone only preserves the modification time and the content (MIME) type where possible.
Rclone supports preserving all the available metadata on files (not directories) when using the --metadata or -M flag.
If --suffix is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.
The remote in use must support server-side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory without it being excluded by a filter rule.
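For example

rclone sync --interactive /path/to/local remote:current --backup-dir remote:old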
will sync /path/to/local to remote:current, but for any files which would have been updated or deleted will be stored in remote:old.
If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today's date.
If this flag is set then in a sync, copy or move, rclone will do all the checks to see whether files need to be transferred before doing any of the transfers. Normally rclone would start running transfers as soon as possible.
This flag can be useful on IO limited systems where transfers interfere with checking.
It can also be useful to ensure perfect ordering when using --order-by.
If both --check-first and --order-by are set when doing rclone move then rclone will use the transfer thread to delete source files which don't need transferring. This will enable perfect ordering of the transfers and deletes but will cause the transfer stats to have more items in than expected.
Using this flag can use more memory as it effectively sets --max-backlog to infinite. This means that all the info on the objects to transfer is held in memory before the transfers start.
--checkers=N
Originally controlling just the number of file checkers to run in parallel, e.g. by rclone copy. Now a fairly universal parallelism control used by rclone in several places.
rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes"
--header-download
Add an HTTP header for all download transactions. The flag can be repeated to add multiple headers.
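For example, a sketch (the header shown is illustrative):

rclone sync --interactive s3:test /tmp/test --header-download "X-Amz-Meta-Test: Foo"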
See the GitHub issue here for currently supported backends.
--human-readable
Rclone commands output values for sizes (e.g. number of bytes) and counts (e.g. number of files) either as raw numbers, or in human-readable format.
With this option set, files will be created and deleted as requested, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error Source and destination exist but do not match: immutable file modified.
Note that only commands which transfer files (e.g. sync, copy, move) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g. sync, move). Use copy --immutable if it is desired to avoid deletion as well as modification.
This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.
-i, --interactive
This flag can be used to tell rclone that you wish a manual confirmation before destructive operations.
It is recommended that you use this flag while learning rclone especially with rclone sync.
For example
$ rclone delete --interactive /tmp/dir
rclone: delete "important-file.txt"?
y) Yes, this is OK (default)
n) No, skip this
Setting this to a negative number will make the backlog as large as possible.
--max-delete=N
This tells rclone not to delete more than N files. If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.
--max-delete-size=SIZE

Rclone will stop deleting files when the total size of deletions has reached the size specified. It defaults to off.

If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.
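An illustrative invocation:

rclone sync --interactive --max-delete-size 10G source:path dest:path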
--max-depth=N
This modifies the recursion depth for all the commands except purge.
So if you do rclone --max-depth 1 ls remote:path you will see only the files in the top level directory. Using --max-depth 2 means you will see all the files in first two directory levels and so on.
Rclone will stop transferring when it has reached the size specified. Defaults to off.
When the limit is reached all transfers will stop immediately.
Rclone will exit with exit code 8 if the transfer limit is reached.
-M, --metadata
Setting this flag enables rclone to copy the metadata from the source to the destination. For local backends this is ownership, permissions, xattr etc. See the metadata section for more info.
--metadata-set key=value
Add metadata key = value when uploading. This can be repeated as many times as required. See the metadata section for more info.
The remote in use must support server-side move or copy and you must use the same remote as the destination of the sync.
This is for use with files to add the suffix in the current directory or with --backup-dir. See --backup-dir for more info.
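For example

rclone copy --interactive /path/to/local remote:current --suffix .bak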
will copy /path/to/local to remote:current, but any files which would have been updated or deleted will have .bak added.
If using rclone sync with --suffix and without --backup-dir then it is recommended to put a filter rule in excluding the suffix otherwise the sync will delete the backup files.
--suffix-keep-extension
When using --suffix, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.
So let's say we had --suffix -2019-01-01, without the flag file.txt would be backed up to file.txt-2019-01-01 and with the flag it would be backed up to file-2019-01-01.txt. This can be helpful to make sure the suffixed files can still be opened.
Prints the version number
SSL/TLS options
The outgoing SSL/TLS connections rclone makes can be controlled with these options. For example this can be very useful with the HTTP or WebDAV backends. Rclone HTTP servers have their own set of configuration for SSL/TLS which you can find in their documentation.
--ca-cert stringArray

This loads the PEM encoded certificate authority certificates and uses them to verify the certificates of the servers rclone connects to.
If you have generated certificates signed with a local CA then you will need this flag to connect to servers using those certificates.
--client-cert string
This loads the PEM encoded client side certificate.
--delete-excluded - Delete files on dest excluded from sync
Important this flag is dangerous to your data - use with --dry-run and -v first.
In conjunction with rclone sync, --delete-excluded deletes any files on the destination which are excluded from the command.
E.g. the scope of rclone sync --interactive A: B: can be restricted:
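rclone --min-size 50k --delete-excluded sync A: B: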
⁷ pCloud only supports SHA1 (not MD5) in its EU region
⁸ Opendrive does not support creation of duplicate files using their web client interface or other stock clients, but the underlying storage platform has been determined to allow duplicate files, and it is possible to create them with rclone. It may be that this is a mistake or an unsupported feature.
Storj
Yes ☨
Yes
Yes
No
No
Yes
Yes
Yes
No
No
Purge
This deletes a directory quicker than just deleting all the files in the directory.
† Note Swift implements this in order to delete directory markers, but it doesn't actually have a quicker way of deleting files other than deleting them individually.

☨ Storj implements this efficiently only for entire buckets. If purging a directory inside a bucket, files are deleted individually.
‡ StreamUpload is not supported with Nextcloud
Copy
Used when copying an object to and from the same remote. This is known as a server-side copy so you can copy a file without downloading it and uploading it again. It is used if you use rclone copy or rclone move if the remote doesn't support Move directly.
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
--bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--ca-cert stringArray CA certificate used to verify servers
--cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
--check-first Do all the checks before starting transfers
--checkers int Number of checkers to run in parallel (default 8)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.62.0")
-v, --verbose count Print lots more stuff (repeat for more)
Backend Flags
These flags are available for every command. They control the backends and may be set in the config file.
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
--mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-use-https Use HTTPS for transfers
--mega-user string User name
--netstorage-account string Set the NetStorage account name
--netstorage-host string Domain+path of NetStorage host to connect to
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command (default "view")
--oos-no-check-bucket If set, don't attempt to check the bucket exists or create it
--oos-provider string Choose your Auth Provider (default "env_auth")
--oos-region string Object storage Region
--oos-sse-customer-algorithm string If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm
--oos-sse-customer-key string To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to
--oos-sse-customer-key-file string To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated
--oos-sse-customer-key-sha256 string If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption
--oos-sse-kms-key-id string If using your own master key in vault, this header specifies the
--oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard")
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
--s3-sts-endpoint string Endpoint for STS
--s3-upload-concurrency int Concurrency for multipart uploads (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
--smb-spn string Service principal name
--smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase
--storj-provider string Choose an authentication method (default "existing")
--storj-satellite-address string Satellite address (default "us1.storj.io")
--sugarsync-access-key-id string Sugarsync Access Key ID
--sugarsync-app-id string Sugarsync App ID
--sugarsync-authorization string Sugarsync authorization
Run bisync with the --resync flag, specifying the paths to the local and remote sync directory roots.
For successive sync runs, leave off the --resync flag.
Consider using a filters file for excluding unnecessary files and directories from the sync.
Consider setting up the --check-access feature for safety.
On Linux, consider setting up a crontab entry. bisync can safely run in concurrent cron jobs thanks to lock files it maintains.
Here is a typical run log (with timestamps removed for clarity):
--resync
This will effectively make both Path1 and Path2 filesystems contain a matching superset of all files. Path2 files that do not exist in Path1 will be copied to Path1, and the process will then sync the Path1 tree to Path2.
The base directories on both the Path1 and Path2 filesystems must exist or bisync will fail. This is required for safety, so that bisync can verify that both paths are valid.
When using --resync, a newer version of a file on either the Path1 or Path2 filesystem will overwrite the file on the other path (only the last version will be kept). Carefully evaluate deltas using --dry-run.
For a resync run, one of the paths may be empty (no files in the path tree). The resync run should result in files on both paths, else a normal non-resync run will fail.
For a non-resync run, either path being empty (no files in the tree) fails with Empty current PathN listing. Cannot sync to an empty directory: X.pathN.lst This is a safety check that an unexpected empty path does not result in deleting everything in the other path.
--check-access
Access check files are an additional safety measure against data loss. bisync will ensure it can find matching RCLONE_TEST files in the same places in the Path1 and Path2 filesystems. RCLONE_TEST files are not generated automatically. For --check-access to succeed, you must first either: A) Place one or more RCLONE_TEST files in the Path1 or Path2 filesystem and then do either a run without --check-access or a --resync to set matching files on both filesystems, or B) Set --check-filename to a filename already in use in various locations throughout your sync'd fileset. Time stamps and file contents are not important, just the names and locations. If you have symbolic links in your sync tree it is recommended to place RCLONE_TEST files in the linked-to directory tree to protect against bisync assuming a bunch of deleted files if the linked-to tree should not be accessible. See also the --check-filename flag.
--check-filename
Name of the file(s) used in access health validation. The default --check-filename is RCLONE_TEST. One or more files having this filename must exist, synchronized between your source and destination filesets, in order for --check-access to succeed. See --check-access for additional details.
--max-delete
As a safety check, if greater than the --max-delete percent of files were deleted on either the Path1 or Path2 filesystem, then bisync will abort with a warning message, without making any changes. The default --max-delete is 50%. One way to trigger this limit is to rename a directory that contains more than half of your files. This will appear to bisync as a bunch of deleted files and a bunch of new files. This safety check is intended to block bisync from deleting all of the files on both filesystems due to a temporary network access issue, or if the user had inadvertently deleted the files on one side or the other. To force the sync either set a different delete percentage limit, e.g. --max-delete 75 (allows up to 75% deletion), or use --force to bypass the check.
Here is an example of making an s3 configuration for the AWS S3 provider. Most of this applies to the other providers as well; any differences are described below.
First run
$ rclone -q --s3-versions ls s3:cleanup-test
9 one.txt
Cleanup
If you run rclone cleanup s3:bucket then it will remove all pending multipart uploads older than 24 hours. You can use the --interactive/-i or --dry-run flag to see exactly what it will do. If you want more control over the expiry date then run rclone backend cleanup s3:bucket -o max-age=1h to expire all uploads older than one hour. You can use rclone backend list-multipart-uploads s3:bucket to see the pending multipart uploads.
Restricted filename characters
S3 allows any valid UTF-8 string as a key.
Invalid UTF-8 bytes will be replaced, as they can't be used in XML.
--s3-endpoint
Endpoint for Storj Gateway.
Properties:
Config: endpoint
You can use these with rclone too; you will need to use rclone version 1.43 or later.
Collect all your chunked files under a directory and have your chunker remote point to it.
Create another directory (most probably on the same cloud storage) and configure a new remote with desired metadata format, hash type, chunk naming etc.
Now run rclone sync --interactive oldchunks: newchunks: and all your data will be transparently converted in transfer. This may take some time, yet chunker will try server-side copy if possible.
After checking data integrity you may remove configuration section of the old remote.
If rclone gets killed during a long operation on a big composite file, hidden temporary chunks may stay in the directory. They will not be shown by the list command but will eat up your account quota. Please note that the deletefile command deletes only active chunks of a file. As a workaround, you can use remote of the wrapped file system to see them. An easy way to get rid of hidden garbage is to copy littered directory somewhere using the chunker remote and purge the original directory. The copy command will copy only active chunks while the purge will remove everything including garbage.
Accessing a storage system through a crypt remote realizes client-side encryption, which makes it safe to keep your data in a location you do not trust will not get compromised. When working against the crypt remote, rclone will automatically encrypt (before uploading) and decrypt (after downloading) on your local system as needed on the fly, leaving the data encrypted at rest in the wrapped remote. If you access the storage system using an application other than rclone, or access the wrapped remote directly using rclone, there will not be any encryption/decryption: Downloading existing content will just give you the encrypted (scrambled) format, and anything you upload will not become encrypted.
The encryption is a secret-key encryption (also called symmetric key encryption) algorithm, where a password (or pass phrase) is used to generate a real encryption key. The password can be supplied by the user, or you may choose to let rclone generate one. It will be stored in the configuration file, in a lightly obscured form. If you are in an environment where you are not able to keep your configuration secured, you should add configuration encryption as protection. As long as you have this configuration file, you will be able to decrypt your data. Without the configuration file, as long as you remember the password (or keep it in a safe place), you can re-create the configuration and gain access to the existing data. You may also configure a corresponding remote in a different installation to access the same data. See below for guidance on changing the password.
Encryption uses cryptographic salt, to permute the encryption key so that the same string may be encrypted in different ways. When configuring the crypt remote it is optional to enter a salt, or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique string. Normally in cryptography, the salt is stored together with the encrypted content, and does not have to be memorized by the user. This is not the case in rclone, because rclone does not store any additional information on the remotes. Use of custom salt is effectively a second password that must be memorized.
File content encryption is performed using NaCl SecretBox, based on XSalsa20 cipher and Poly1305 for integrity. Names (file- and directory names) are also encrypted by default, but this has some implications and can therefore be turned off.
Configuration
Here is an example of how to make a remote called secret.
To use crypt, first set up the underlying remote. Follow the rclone config instructions for the specific backend.
For example, let's say you have your original remote at remote: with the encrypted version at eremote: with path remote:crypt. You would then set up the new remote remote2: and then the encrypted version eremote2: with path remote2:crypt using the same passwords as eremote:.
When connecting to a FTP server that allows anonymous login, you can use the special "anonymous" username. Traditionally, this user account accepts any string as a password, although it is common to use either the password "anonymous" or "guest". Some servers require the use of a valid e-mail address as password.
Using on-the-fly or connection string remotes makes it easy to access such servers, without requiring any configuration in advance. The following are examples of that:
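rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)

rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):

(speedtest.tele2.net is a public test server; substitute your own host.)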
List the contents of a bucket
rclone ls remote:bucket
Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.
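rclone sync --interactive /home/local/directory remote:bucket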
You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.
To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.
--gcs-env-auth

Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars).

Only applies if service_account_file and service_account_credentials are blank.

Properties:

Config: env_auth
Env Var: RCLONE_GCS_ENV_AUTH
Type: bool
Default: false
Examples:
- "false": Enter credentials in the next step.
- "true": Get GCP IAM credentials from the environment (env vars or IAM).
Advanced options
Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
--gcs-token
--drive-acknowledge-abuse
Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
If downloading a file returns the error "This file has been identified as malware or spam and cannot be downloaded" with the error code "cannotDownloadAbusiveFile" then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway.
Note that if you are using a service account, it will need Manager permission (not Content Manager) for this flag to work. If the SA does not have the right permission, Google will just ignore the flag.
It copies the drive file with ID given to the path (an rclone path which will be passed internally to rclone copyto). The ID and path pairs can be repeated.
The path should end with a / to indicate copy the file as named to this directory. If it doesn't end with a / then the last path component will be used as the file name.
If the destination is a drive backend then server-side copying will be attempted if possible.
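For example (the drive file ID here is a hypothetical placeholder):

rclone backend copyid drive: 1bCdEfGhIjKlMnO backup/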
Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API".
Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials"
If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK); enter "User Support Email" (your own email is OK); enter "Developer Contact Email" (your own email is OK); then click on "Save" (all other data is optional). You will also have to add some scopes, including .../auth/docs and .../auth/drive in order to be able to edit, create and delete files with rclone. You may also want to include the .../auth/drive.metadata.readonly scope. After adding scopes, click "Save and continue" to add test users. Be sure to add your own account to the test users. Once you've added yourself as a test user and saved the changes, click again on "Credentials" on the left panel to go back to the "Credentials" screen.
(PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this will restrict API use to Google Workspace users in your organisation).
Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".
Choose an application type of "Desktop app" and click "Create". (the default name is fine)
It will show you a client ID and client secret. Make a note of these.
(If you selected "External" at Step 5 continue to Step 9. If you chose "Internal" you don't need to publish and can skip straight to Step 10 but your destination drive must be part of the same Google Workspace.)
Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm. You will also want to add yourself as a test user.
Provide the noted client ID and client secret to rclone.
Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal). Keeping the application in "Testing" will also work, but the limitation is that any grants will expire after a week, which can be annoying to refresh constantly. If a short grant time is not a problem for your use case, keeping the application in testing mode is sufficient.
(Thanks to @balazer on github for these instructions.)
Sometimes, creation of an OAuth consent in Google API Console fails due to an error message “The request failed because changes to one of the field of the resource is not supported”. As a convenient workaround, the necessary Google Drive API key can be created on the Python Quickstart page. Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the API Console.
Google Photos
@@ -20914,9 +21019,9 @@ y/e/d> y
List the contents of an album
rclone ls remote:album/newAlbum
Sync /home/local/images to Google Photos, removing any excess files in the album.
-
As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it.
+
As Google Photos is not a general purpose cloud storage system, the backend is laid out to help you navigate it.
The directories under media show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your Google Photos you might choose to backup remote:media/by-month. (NB remote:media/by-day is rather slow at the moment so avoid for syncing.)
Note that all your photos and videos will appear somewhere under media, but they may not appear under album unless you've put them into albums.
/
@@ -21372,7 +21477,7 @@ e/n/d/r/c/s/q> q
List the contents of a directory
rclone ls remote:directory
Sync the remote directory to /home/local/directory, deleting any excess files.
Because of Internet Archive's architecture, it enqueues write operations (and extra post-processing) in a per-item queue. You can check an item's queue at https://catalogd.archive.org/history/item-name-here . Because of that, uploads and deletes will not show up immediately and take some time to become available. The per-item queue is itself enqueued to another queue, the Item Deriver Queue. You can check the status of the Item Deriver Queue here. This queue has a limit, and it may block you from uploading, or even deleting. You should avoid uploading a lot of small files for better behavior.
You can optionally wait for the server's processing to finish by setting a non-zero value for the wait_archive key. By making it wait, rclone can do normal file comparison. Make sure to set a large enough value (e.g. 30m0s for smaller files) as it can take a long time depending on the server's queue.
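For example, a sketch passing this option as a flag (assuming the backend's standard --internetarchive- flag prefix for the wait_archive option):
rclone copy /local/file remote:item --internetarchive-wait-archive 30m0s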
@@ -22868,7 +22973,7 @@ y/e/d> y
List the contents of a directory
rclone ls remote:directory
Sync /home/local/directory to the remote path, deleting any excess files in the path.
Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as "Jan 1 1970".
Hash checksums
@@ -23256,6 +23361,16 @@ me@example.com:/$
Type: bool
Default: false
+
--mega-use-https
+
Use HTTPS for transfers.
+
MEGA uses plain text HTTP connections by default. Some ISPs throttle HTTP connections, which causes transfers to become very slow. Enabling this will force MEGA to use HTTPS for all transfers. HTTPS is normally not necessary since all data is already encrypted anyway. Enabling it will increase CPU usage and add network overhead.
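For example, to force HTTPS on a throttled connection (paths are placeholders):
rclone sync /local/path remote:path --mega-use-https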
This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Modified time
@@ -24169,7 +24284,9 @@ y/e/d> y
Note: If you have a special region, you may need a different host in steps 4 and 5. Here are some hints.
Modification time and hashes
OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
-
OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support QuickXorHash.
+
OneDrive Personal, OneDrive for Business and Sharepoint Server support QuickXorHash.
+
Before rclone 1.62 the default hash for OneDrive Personal was SHA1. For rclone 1.62 and above the default for all OneDrive backends is QuickXorHash.
+
Starting from July 2023, SHA1 support is being phased out in OneDrive Personal in favour of QuickXorHash. If SHA1 is important to your workflow, the --onedrive-hash-type flag (or hash_type config option) can be used to select it during the transition period.
For all types of OneDrive you can use the --checksum flag.
This specifies the hash type in use. If set to "auto" it will use the default hash which is QuickXorHash.
+
Before rclone 1.62 an SHA1 hash was used by default for OneDrive Personal. For 1.62 and later the default is to use a QuickXorHash for all OneDrive types. If an SHA1 hash is desired then set this option accordingly.
+
From July 2023 QuickXorHash will be the only available hash for both OneDrive for Business and OneDrive Personal.
+
This can be set to "none" to not use any hashes.
+
If the hash requested does not exist on the object, it will be returned as an empty string which is treated as a missing hash by rclone.
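For example, a sketch selecting SHA1 during the transition period (assuming "sha1" is among the accepted values for this option):
rclone copy /local/path remote:path --onedrive-hash-type sha1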
Restore the versioning settings after using rclone. (Optional)
Cleanup
-
OneDrive supports rclone cleanup which causes rclone to look through every file under the path supplied and delete all version but the current version. Because this involves traversing all the files, then querying each file for versions it can be quite slow. Rclone does --checkers tests in parallel. The command also supports -i which is a great way to see what it would do.
-
rclone cleanup -i remote:path/subdir # interactively remove all old version for path/subdir
-rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir
+
OneDrive supports rclone cleanup which causes rclone to look through every file under the path supplied and delete all versions but the current version. Because this involves traversing all the files, then querying each file for versions, it can be quite slow. Rclone does --checkers tests in parallel. The command also supports --interactive/-i or --dry-run which is a great way to see what it would do.
+
rclone cleanup --interactive remote:path/subdir # interactively remove all old versions for path/subdir
-rclone cleanup remote:path/subdir # unconditionally remove all old versions for path/subdir
NB OneDrive Personal can't currently delete versions
Troubleshooting
Excessive throttling or blocked on SharePoint
@@ -24607,26 +24765,26 @@ Description: Due to a configuration change made by your administrator, or becaus
Can not access Shared with me files
Shared with me files is not supported by rclone currently, but there is a workaround:
Right click an item in Shared, then click Add shortcut to My files in the context menu.
+
The shortcut will appear in My files. You can access it with rclone; it behaves like a normal folder/file.
-
-
Screenshot (rclone mount)
-
-
+
Live Photos uploaded from iOS (small video clips in .heic files)
+
The iOS OneDrive app introduced upload and storage of Live Photos in 2020. The usage and download of these uploaded Live Photos is unfortunately still work-in-progress, and this introduces several issues when copying, synchronising and mounting – both in rclone and in the native OneDrive client on Windows.
+
The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface. Then download the photo from the web interface. You will then see that the size of the downloaded .heic file is smaller than the size displayed in the web interface. The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive.
+
The different sizes will cause rclone copy/sync to repeatedly recopy unmodified photos, something like this:
These recopies can be worked around by adding --ignore-size. Please note that this workaround only syncs the still picture, not the movie clip, and relies on modification dates being correctly updated on all files in all situations.
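For example (paths are placeholders):
rclone copy remote:Pictures /local/Pictures --ignore-size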
+
The different sizes will also cause rclone check to report size errors, something like this:
+
ERROR : 20230203_123826234_iOS.heic: sizes differ
+
These check errors can be suppressed by adding --ignore-size.
+
The different sizes will also cause rclone mount to fail downloading with an error, something like this:
INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
+ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
OpenDrive
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
@@ -24891,7 +25049,7 @@ Enter a value. Press Enter to leave empty.
endpoint>
Option config_file.
-Path to OCI config file
+Full Path to OCI config file
Choose a number from below, or type in your own string value.
Press Enter for the default (~/.oci/config).
1 / oci configuration file location
@@ -24932,6 +25090,69 @@ y/e/d> y
List the contents of a bucket
rclone ls remote:bucket
rclone ls remote:bucket --max-depth 1
+
OCI Authentication Provider
+
OCI has various authentication methods. To learn more about them, please refer to the OCI authentication methods documentation. These choices can be specified in the rclone config file.
+
Rclone supports the following OCI authentication providers:
+
User Principal
+Instance Principal
+Resource Principal
+No authentication
+
Authentication provider choice: User Principal
+
Sample rclone config file for Authentication Provider User Principal:
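A minimal sketch of what such a config might look like (the namespace, compartment and region values are placeholders; the provider value follows the backend's standard options):
[oos]
type = oracleobjectstorage
namespace = <your tenancy namespace>
compartment = ocid1.compartment.oc1..exampleuniqueID
region = us-ashburn-1
provider = user_principal_auth
config_file = ~/.oci/config
config_profile = Default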
Advantages: - One can use this method from any server within OCI or on-premises or from any other cloud provider.
+
Considerations: - You need to configure the user's privileges / policy to allow access to object storage. - Overhead of managing users and keys. - If the user is deleted, the config file will no longer work and may cause regressions in any automation that uses the user's credentials.
+
Authentication provider choice: Instance Principal
+
An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal. With this approach no credentials have to be stored and managed.
+
Sample rclone configuration file for Authentication Provider Instance Principal:
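A minimal sketch (placeholder values; note the provider setting):
[oos]
type = oracleobjectstorage
namespace = <your tenancy namespace>
compartment = ocid1.compartment.oc1..exampleuniqueID
region = us-ashburn-1
provider = instance_principal_auth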
With instance principals, you don't need to configure user credentials, transfer/save them to disk on your compute instances, or rotate the credentials.
+
You don’t need to deal with users and keys.
+
Greatly helps in automation as you don't have to manage access keys or user private keys, store them in a vault, use KMS, etc.
+
+
Considerations:
+
+
You need to configure a dynamic group that has this instance as a member and add a policy granting that dynamic group read access to object storage.
+
Everyone who has access to this machine can execute the CLI commands.
+
It is applicable to OCI compute instances only. It cannot be used on external instances or resources.
+
+
Authentication provider choice: Resource Principal
+
Resource principal auth is very similar to instance principal auth but is used for resources that are not compute instances, such as serverless functions. To use resource principal auth, ensure the rclone process is started with the required environment variables set in its environment.
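A sketch of the variables typically involved (names and sample values are assumptions based on OCI SDK conventions; adjust paths for your environment):
export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/path/to/key.pem
export OCI_RESOURCE_PRINCIPAL_RPST=/path/to/security_token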
The modified time is stored as metadata on the object as opc-meta-mtime, as a floating point number of seconds since the epoch, accurate to 1 ns.
If the modification time needs to be updated rclone will attempt to perform a server-side copy to update the modification time if the object can be copied in a single part. If the object is larger than 5 GiB, the object will be uploaded rather than copied.
@@ -25056,6 +25277,30 @@ rclone ls remote:bucket --max-depth 1
Advanced options
Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
+
--oos-storage-tier
+
The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
+
Properties:
+
+
Config: storage_tier
+
Env Var: RCLONE_OOS_STORAGE_TIER
+
Type: string
+
Default: "Standard"
+
Examples:
+
+
"Standard"
+
+
Standard storage tier, this is the default tier
+
+
"InfrequentAccess"
+
+
InfrequentAccess storage tier
+
+
"Archive"
+
+
Archive storage tier
+
+
+
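For example, to store new objects in the infrequent access tier (bucket path is a placeholder):
rclone copy /local/path remote:bucket/path --oos-storage-tier InfrequentAccess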
--oos-upload-cutoff
Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
@@ -25155,6 +25400,90 @@ rclone ls remote:bucket --max-depth 1
Type: bool
Default: false
+
--oos-sse-customer-key-file
+
To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+
Properties:
+
+
Config: sse_customer_key_file
+
Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE
+
Type: string
+
Required: false
+
Examples:
+
+
""
+
+
None
+
+
+
+
--oos-sse-customer-key
+
To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. For more information, see Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)
+
Properties:
+
+
Config: sse_customer_key
+
Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY
+
Type: string
+
Required: false
+
Examples:
+
+
""
+
+
None
+
+
+
+
--oos-sse-customer-key-sha256
+
If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption key. This value is used to check the integrity of the encryption key. See Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+
Properties:
+
+
Config: sse_customer_key_sha256
+
Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256
+
Type: string
+
Required: false
+
Examples:
+
+
""
+
+
None
+
+
+
+
--oos-sse-kms-key-id
+
If using your own master key in vault, this header specifies the OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+
Properties:
+
+
Config: sse_kms_key_id
+
Env Var: RCLONE_OOS_SSE_KMS_KEY_ID
+
Type: string
+
Required: false
+
Examples:
+
+
""
+
+
None
+
+
+
+
--oos-sse-customer-algorithm
+
If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm. Object Storage supports "AES256" as the encryption algorithm. For more information, see Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+
Properties:
+
+
Config: sse_customer_algorithm
+
Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM
+
Type: string
+
Required: false
+
Examples:
+
+
""
+
+
None
+
+
"AES256"
+
+
AES256
+
+
+
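As an illustration, a sketch combining the SSE-C options above (the key file path is a placeholder and must contain the base64-encoded AES-256 key):
rclone copy /local/path remote:bucket/path --oos-sse-customer-algorithm AES256 --oos-sse-customer-key-file /path/to/key.b64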
Backend commands
Here are the commands specific to the oracleobjectstorage backend.
Run them with
@@ -25190,7 +25519,7 @@ rclone ls remote:bucket --max-depth 1
put.io has rate limiting. When you hit a limit, rclone automatically retries after waiting the amount of time requested by the server.
If you want to avoid ever hitting these limits, you may use the --tpslimit flag with a low number. Note that the imposed limits may be different for different operations, and may change over time.
Seafile
-
This is a backend for the Seafile storage service: - It works with both the free community edition or the professional edition. - Seafile versions 6.x and 7.x are all supported. - Encrypted libraries are also supported. - It supports 2FA enabled users
+
This is a backend for the Seafile storage service: - It works with both the free community edition and the professional edition. - Seafile versions 6.x, 7.x, 8.x and 9.x are all supported. - Encrypted libraries are also supported. - It supports 2FA enabled users. - Using a Library API Token is not supported.
Configuration
There are two distinct modes you can set up your remote in: - you point your remote to the root of the server, meaning you don't specify a library during the configuration: Paths are specified as remote:library. You may put subdirectories in too, e.g. remote:library/path/to/dir. - you point your remote to a specific library during the configuration: Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode)
Configuration in root mode
@@ -26525,7 +26854,7 @@ y/e/d> y
List the contents of a library
rclone ls seafile:library
Sync /home/local/directory to the remote library, deleting any excess files in the library.
Here's an example of a configuration in library mode with a user that has two-factor authentication enabled. You will be asked for your 2FA code at the end of the configuration, and rclone will attempt to authenticate you:
No remotes found, make a new one?
@@ -26604,7 +26933,7 @@ y/e/d> y
List the contents of a directory
rclone ls seafile:directory
Sync /home/local/directory to the remote library, deleting any excess files in the library.
Seafile version 7+ supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x
Please note a share link is unique for each file or directory. If you run a link command on a file/dir that has already been shared, you will get the exact same link.
Compatibility
-
It has been actively tested using the seafile docker image of these versions: - 6.3.4 community edition - 7.0.5 community edition - 7.1.3 community edition
+
It has been actively developed using the seafile docker image of these versions: - 6.3.4 community edition - 7.0.5 community edition - 7.1.3 community edition - 9.0.10 community edition
Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.
+
Each new version of rclone is automatically tested against the latest docker image of the seafile community server.
Standard options
Here are the Standard options specific to seafile (seafile).
--seafile-url
@@ -26814,7 +27144,7 @@ y/e/d> y
List the contents of a directory
rclone ls remote:path/to/directory
Sync /home/local/directory to the remote directory, deleting any excess files in the directory.
Mount the remote path /srv/www-data/ to the local path /mnt/www-data
rclone mount remote:/srv/www-data/ /mnt/www-data
SSH Authentication
@@ -27430,6 +27760,18 @@ y/e/d> d
Type: string
Default: "WORKGROUP"
+
--smb-spn
+
Service principal name.
+
Rclone presents this name to the server. Some servers use this as further authentication, and it often needs to be set for clusters. For example:
+
cifs/remotehost:1020
+
Leave blank if not sure.
+
Properties:
+
+
Config: spn
+
Env Var: RCLONE_SMB_SPN
+
Type: string
+
Required: false
+
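For example, to list shares while presenting the SPN from the example above (the remote name is a placeholder):
rclone lsd remote: --smb-spn cifs/remotehost:1020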
Advanced options
Here are the Advanced options specific to smb (SMB / CIFS).
--smb-idle-timeout
@@ -27600,14 +27942,14 @@ Choose a number from below, or type in your own value
\ "new"
provider> new
Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
-Enter a string value. Press Enter for the default ("us-central-1.storj.io").
+Enter a string value. Press Enter for the default ("us1.storj.io").
Choose a number from below, or type in your own value
- 1 / US Central 1
- \ "us-central-1.storj.io"
- 2 / Europe West 1
- \ "europe-west-1.storj.io"
- 3 / Asia East 1
- \ "asia-east-1.storj.io"
+ 1 / US1
+ \ "us1.storj.io"
+ 2 / EU1
+ \ "eu1.storj.io"
+ 3 / AP1
+ \ "ap1.storj.io"
satellite_address> 1
API Key.
Enter a string value. Press Enter for the default ("").
@@ -27619,7 +27961,7 @@ Remote config
--------------------
[remote]
type = storj
-satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777
+satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us1.storj.io:7777
api_key = your-api-key-for-your-storj-project
passphrase = your-human-readable-encryption-passphrase
access_grant = the-access-grant-generated-from-the-api-key-and-passphrase
@@ -27669,20 +28011,20 @@ y/e/d> y
Env Var: RCLONE_STORJ_SATELLITE_ADDRESS
Provider: new
Type: string
-
Default: "us-central-1.storj.io"
+
Default: "us1.storj.io"
Examples:
-
"us-central-1.storj.io"
+
"us1.storj.io"
-
US Central 1
+
US1
-
"europe-west-1.storj.io"
+
"eu1.storj.io"
-
Europe West 1
+
EU1
-
"asia-east-1.storj.io"
+
"ap1.storj.io"
-
Asia East 1
+
AP1
@@ -27752,15 +28094,15 @@ y/e/d> y
rclone size remote:bucket/path/to/dir/
Sync two Locations
Use the sync command to sync the source to the destination, changing the destination only, deleting any excess files.
rclone about is not supported by the rclone Storj backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
For consistency's sake one can also configure a remote of type local in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not to.
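A minimal sketch of such a configuration and its use (the remote name is arbitrary):
[remote]
type = local
rclone ls remote:path/to/wherever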
diff --git a/MANUAL.md b/MANUAL.md
index 59c2cef30..430ad23bc 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Dec 20, 2022
+% Mar 14, 2023
# Rclone syncs your files to cloud storage
@@ -333,6 +333,14 @@ If you are planning to use the [rclone mount](https://rclone.org/commands/rclone
feature then you will need to install the third party utility
[WinFsp](https://winfsp.dev/) also.
+### Windows package manager (Winget) {#windows-winget}
+
+[Winget](https://learn.microsoft.com/en-us/windows/package-manager/) comes pre-installed with the latest versions of Windows. If not, update the [App Installer](https://www.microsoft.com/p/app-installer/9nblggh4nns1) package from the Microsoft store.
+
+```
+winget install Rclone.Rclone
+```
+
### Chocolatey package manager {#windows-chocolatey}
Make sure you have [Choco](https://chocolatey.org/) installed
@@ -356,6 +364,19 @@ developers so it may be out of date. Its current version is as below.
[![Chocolatey package](https://repology.org/badge/version-for-repo/chocolatey/rclone.svg)](https://repology.org/project/rclone/versions)
+### Scoop package manager {#windows-scoop}
+
+Make sure you have [Scoop](https://scoop.sh/) installed
+
+```
+scoop install rclone
+```
+
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date. Its current version is as below.
+
+[![Scoop package](https://repology.org/badge/version-for-repo/scoop/rclone.svg)](https://repology.org/project/rclone/versions)
+
## Package manager installation {#package-manager}
Many Linux, Windows, macOS and other OS distributions package and
@@ -816,7 +837,7 @@ storage system in the config file then the sub path, e.g.
You can define as many storage paths as you like in the config file.
-Please use the [`-i` / `--interactive`](#interactive) flag while
+Please use the [`--interactive`/`-i`](#interactive) flag while
learning rclone to avoid accidental data loss.
Subcommands
@@ -826,7 +847,7 @@ rclone uses a system of subcommands. For example
rclone ls remote:path # lists a remote
rclone copy /local/path remote:path # copies /local/path to the remote
- rclone sync -i /local/path remote:path # syncs /local/path to the remote
+ rclone sync --interactive /local/path remote:path # syncs /local/path to the remote
# rclone config
@@ -967,7 +988,7 @@ want to delete files from destination, use the
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
- rclone sync -i SOURCE remote:DESTINATION
+ rclone sync --interactive SOURCE remote:DESTINATION
Note that files in the destination won't be deleted if there were any
errors at any point. Duplicate objects (files with the same name, on
@@ -1899,9 +1920,11 @@ Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
-Use the --auth-no-open-browser to prevent rclone to open auth
+Use --auth-no-open-browser to prevent rclone from opening the auth
link in default browser automatically.
+Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
+
```
rclone authorize [flags]
```
@@ -1911,6 +1934,7 @@ rclone authorize [flags]
```
--auth-no-open-browser Do not automatically open auth link in default browser
-h, --help help for authorize
+ --template string The path to a custom Go template for generating HTML responses
```
See the [global flags page](https://rclone.org/flags/) for global options not listed here.
@@ -3868,38 +3892,59 @@ group "Everyone" will be used to represent others. The user/group can be customi
with FUSE options "UserName" and "GroupName",
e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`.
The permissions on each entry will be set according to [options](#options)
-`--dir-perms` and `--file-perms`, which takes a value in traditional
+`--dir-perms` and `--file-perms`, which take a value in traditional Unix
[numeric notation](https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation).
The default permissions corresponds to `--file-perms 0666 --dir-perms 0777`,
i.e. read and write permissions to everyone. This means you will not be able
to start any programs from the mount. To be able to do that you must add
execute permissions, e.g. `--file-perms 0777 --dir-perms 0777` to add it
-to everyone. If the program needs to write files, chances are you will have
-to enable [VFS File Caching](#vfs-file-caching) as well (see also [limitations](#limitations)).
+to everyone. If the program needs to write files, chances are you will
+have to enable [VFS File Caching](#vfs-file-caching) as well (see also
+[limitations](#limitations)). Note that the default write permission has
+some restrictions for accounts other than the owner, specifically it lacks
+the "write extended attributes" permission, as explained next.
-Note that the mapping of permissions is not always trivial, and the result
-you see in Windows Explorer may not be exactly like you expected.
-For example, when setting a value that includes write access, this will be
-mapped to individual permissions "write attributes", "write data" and "append data",
-but not "write extended attributes". Windows will then show this as basic
-permission "Special" instead of "Write", because "Write" includes the
-"write extended attributes" permission.
+The mapping of permissions is not always trivial, and the result you see in
+Windows Explorer may not be exactly like you expected. For example, when setting
+a value that includes write access for the group or others scope, this will be
+mapped to individual permissions "write attributes", "write data" and
+"append data", but not "write extended attributes". Windows will then show this
+as basic permission "Special" instead of "Write", because "Write" also covers
+the "write extended attributes" permission. When setting digit 0 for group or
+others, to indicate no permissions, they will still get individual permissions
+"read attributes", "read extended attributes" and "read permissions". This is
+done for compatibility reasons, e.g. to allow users without additional
+permissions to be able to read basic metadata about files like in Unix.
-If you set POSIX permissions for only allowing access to the owner, using
-`--file-perms 0600 --dir-perms 0700`, the user group and the built-in "Everyone"
-group will still be given some special permissions, such as "read attributes"
-and "read permissions", in Windows. This is done for compatibility reasons,
-e.g. to allow users without additional permissions to be able to read basic
-metadata about files like in UNIX. One case that may arise is that other programs
-(incorrectly) interprets this as the file being accessible by everyone. For example
-an SSH client may warn about "unprotected private key file".
-
-WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity",
+WinFsp 2021 (version 1.9) introduced a new FUSE option "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
-With this you can work around issues such as the mentioned "unprotected private key file"
-by specifying `-o FileSecurity="D:P(A;;FA;;;OW)"`, for file all access (FA) to the owner (OW).
+With this you get detailed control of the resulting permissions, compared
+to use of the POSIX permissions described above, and no additional permissions
+will be added automatically for compatibility with Unix. Some example use
+cases follow.
+
+If you set POSIX permissions for only allowing access to the owner,
+using `--file-perms 0600 --dir-perms 0700`, the user group and the built-in
+"Everyone" group will still be given some special permissions, as described
+above. Some programs may then (incorrectly) interpret this as the file being
+accessible by everyone, for example an SSH client may warn about "unprotected
+private key file". You can work around this by specifying
+`-o FileSecurity="D:P(A;;FA;;;OW)"`, which sets file all access (FA) to the
+owner (OW), and nothing else.
+
+When setting write permissions, these do not, except for the owner,
+include the "write extended attributes" permission, as mentioned above.
+This may prevent applications from writing to files, giving permission denied
+error instead. To set working write permissions for the built-in "Everyone"
+group, similar to what it gets by default but with the addition of the
+"write extended attributes", you can specify
+`-o FileSecurity="D:P(A;;FRFW;;;WD)"`, which sets file read (FR) and file
+write (FW) to everyone (WD). If file execute (FX) is also needed, then change
+to `-o FileSecurity="D:P(A;;FRFWFX;;;WD)"`, or set file all access (FA) to
+get full access permissions, including delete, with
+`-o FileSecurity="D:P(A;;FA;;;WD)"`.
### Windows caveats
@@ -3928,14 +3973,58 @@ processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture)).
+Read more in the [install documentation](https://rclone.org/install/).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [`--config`](https://rclone.org/docs/#config-config-file) option.
-Read more in the [install documentation](https://rclone.org/install/).
+Note also that it is now the SYSTEM account that will have the owner
+permissions, and other accounts will have permissions according to the
+group or others scopes. As mentioned above, these will then not get the
+"write extended attributes" permission, and this may prevent writing to
+files. You can work around this with the FileSecurity option, see
+example above.
Note that mapping to a directory path, instead of a drive letter,
does not suffer from the same limitations.
+## Mounting on macOS
+
+Mounting on macOS can be done either via [macFUSE](https://osxfuse.github.io/)
+(also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional
+FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system
+which "mounts" via an NFSv4 local server.
+
+### FUSE-T Limitations, Caveats, and Notes
+
+There are some limitations, caveats, and notes about how it works. These are current as
+of FUSE-T version 1.0.14.
+
+#### ModTime update on read
+
+As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats):
+
+> File access and modification times cannot be set separately as it seems to be an
+> issue with the NFS client which always modifies both. Can be reproduced with
+> 'touch -m' and 'touch -a' commands
+
+This means that viewing files with various tools, notably macOS Finder, will cause rclone
+to update the modification time of the file. This may make rclone upload a full new copy
+of the file.
+
+#### Unicode Normalization
+
+Rclone includes flags for unicode normalization with macFUSE that should be updated
+for FUSE-T. See [this forum post](https://forum.rclone.org/t/some-unicode-forms-break-mount-on-macos-with-fuse-t/36403)
+and [FUSE-T issue #16](https://github.com/macos-fuse-t/fuse-t/issues/16). The following
+flag should be added to the `rclone mount` command.
+
+ -o modules=iconv,from_code=UTF-8,to_code=UTF-8
+
+#### Read Only mounts
+
+When mounting with `--read-only`, attempts to write to files will fail *silently* as
+opposed to with a clear warning as in macFUSE.
+
## Limitations
Without the use of `--vfs-cache-mode` this can only write files
@@ -6805,6 +6894,87 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
+## Auth Proxy
+
+If you supply the parameter `--auth-proxy /path/to/program` then
+rclone will use that program to generate backends on the fly which
+then are used to authenticate incoming requests. This uses a simple
+JSON based protocol with input on STDIN and output on STDOUT.
+
+**PLEASE NOTE:** `--auth-proxy` and `--authorized-keys` cannot be used
+together; if `--auth-proxy` is set the authorized keys option will be
+ignored.
+
+There is an example program
+[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py)
+in the rclone source code.
+
+The program's job is to take a `user` and `pass` on the input and turn
+those into the config for a backend on STDOUT in JSON format. This
+config will have any default parameters for the backend added, but it
+won't use configuration from environment variables or command line
+options - it is the job of the proxy program to make a complete
+config.
+
+The config generated must have this extra parameter
+- `_root` - root to use for the backend
+
+And it may have this parameter
+- `_obscure` - comma separated strings for parameters to obscure
+
+If password authentication was used by the client, input to the proxy
+process (on STDIN) would look similar to this:
+
+```
+{
+ "user": "me",
+ "pass": "mypassword"
+}
+```
+
+If public-key authentication was used by the client, input to the
+proxy process (on STDIN) would look similar to this:
+
+```
+{
+ "user": "me",
+ "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
+}
+```
+
+And as an example return this on STDOUT
+
+```
+{
+ "type": "sftp",
+ "_root": "",
+ "_obscure": "pass",
+ "user": "me",
+ "pass": "mypassword",
+ "host": "sftp.example.com"
+}
+```
+
+This would mean that an SFTP backend would be created on the fly for
+the `user` and `pass`/`public_key` returned in the output to the host given. Note
+that since `_obscure` is set to `pass`, rclone will obscure the `pass`
+parameter before creating the backend (which is required for sftp
+backends).
+
+The program can manipulate the supplied `user` in any way. For example,
+to proxy to many different sftp backends, you could make the
+`user` be `user@example.com` and then set the `host` to `example.com`
+in the output and the user to `user`. For security you'd probably want
+to restrict the `host` to a limited list.
+
+Note that an internal cache is keyed on `user` so only use that for
+configuration, don't use `pass` or `public_key`. This also means that if a user's
+password or public-key is changed the cache will need to expire (which takes 5 mins)
+before it takes effect.
+
+This can be used to build general purpose proxies to any kind of
+backend that rclone supports.
+
```
rclone serve http remote:path [flags]
@@ -6814,6 +6984,7 @@ rclone serve http remote:path [flags]
```
--addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+ --auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
--cert string TLS PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
@@ -7585,6 +7756,30 @@ supported hash on the backend or you can use a named hash such as
"MD5" or "SHA-1". Use the [hashsum](https://rclone.org/commands/rclone_hashsum/) command
to see the full list.
+## Access WebDAV on Windows
+A WebDAV shared folder can be mapped as a drive on Windows; however, the default settings prevent it.
+Windows will fail to connect to the server using insecure Basic authentication.
+It will not even display any login dialog. Windows requires SSL / HTTPS connection to be used with Basic.
+If you try to connect via Add Network Location Wizard you will get the following error:
+"The folder you entered does not appear to be valid. Please choose another".
+However, you can still connect if you set the following registry key on a client machine:
+HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel to 2.
+The BasicAuthLevel can be set to the following values:
+ 0 - Basic authentication disabled
+ 1 - Basic authentication enabled for SSL connections only
+ 2 - Basic authentication enabled for SSL connections and for non-SSL connections
+If required, increase the FileSizeLimitInBytes to a higher value.
+Navigate to the Services interface, then restart the WebClient service.
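+
+For example, from an elevated command prompt (a sketch of the equivalent
+registry edit; values as described above):
+
+    reg add HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters /v BasicAuthLevel /t REG_DWORD /d 2 /f
+    net stop WebClient && net start WebClient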
+
+## Access Office applications on WebDAV
+Navigate to the following registry key: HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet
+Create a new DWORD BasicAuthLevel with value 2.
+ 0 - Basic authentication disabled
+ 1 - Basic authentication enabled for SSL connections only
+ 2 - Basic authentication enabled for SSL and for non-SSL connections
+
+https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint
+
## Server options
@@ -8403,7 +8598,7 @@ unless `--no-create` or `--recursive` is provided.
If `--recursive` is used then recursively sets the modification
time on all existing files that is found under the path. Filters are supported,
-and you can test with the `--dry-run` or the `--interactive` flag.
+and you can test with the `--dry-run` or the `--interactive`/`-i` flag.
If `--timestamp` is used then sets the modification time to that
time instead of the current time. Times may be specified as one of:
@@ -8702,7 +8897,7 @@ Will get their own names
### Valid remote names
Remote names are case sensitive, and must adhere to the following rules:
- - May contain number, letter, `_`, `-`, `.` and space.
+ - May contain number, letter, `_`, `-`, `.`, `+`, `@` and space.
- May not start with `-` or space.
- May not end with space.
@@ -8760,11 +8955,11 @@ file or directory like this then use the full path starting with a
So to sync a directory called `sync:me` to a remote called `remote:` use
- rclone sync -i ./sync:me remote:path
+ rclone sync --interactive ./sync:me remote:path
or
- rclone sync -i /full/path/to/sync:me remote:path
+ rclone sync --interactive /full/path/to/sync:me remote:path
Server Side Copy
----------------
@@ -8797,8 +8992,8 @@ same.
This can be used when scripting to make aged backups efficiently, e.g.
- rclone sync -i remote:current-backup remote:previous-backup
- rclone sync -i /path/to/files remote:current-backup
+ rclone sync --interactive remote:current-backup remote:previous-backup
+ rclone sync --interactive /path/to/files remote:current-backup
## Metadata support {#metadata}
@@ -8985,7 +9180,7 @@ excluded by a filter rule.
For example
- rclone sync -i /path/to/local remote:current --backup-dir remote:old
+ rclone sync --interactive /path/to/local remote:current --backup-dir remote:old
will sync `/path/to/local` to `remote:current`, but for any files
which would have been updated or deleted will be stored in
@@ -9153,6 +9348,12 @@ interfere with checking.
It can also be useful to ensure perfect ordering when using
`--order-by`.
+If both `--check-first` and `--order-by` are set when doing `rclone move`
+then rclone will use the transfer thread to delete source files which
+don't need transferring. This will enable perfect ordering of the
+transfers and deletes but will cause the transfer stats to have more
+items in than expected.
+
Using this flag can use more memory as it effectively sets
`--max-backlog` to infinite. This means that all the info on the
objects to transfer is held in memory before the transfers start.
@@ -9450,7 +9651,7 @@ Add an HTTP header for all download transactions. The flag can be repeated to
add multiple headers.
```
-rclone sync -i s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"
+rclone sync --interactive s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"
```
See the GitHub issue [here](https://github.com/rclone/rclone/issues/59) for
@@ -9462,7 +9663,7 @@ Add an HTTP header for all upload transactions. The flag can be repeated to add
multiple headers.
```
-rclone sync -i ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"
+rclone sync --interactive ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"
```
See the GitHub issue [here](https://github.com/rclone/rclone/issues/59) for
@@ -9572,7 +9773,7 @@ This can be useful as an additional layer of protection for immutable
or append-only data sets (notably backup archives), where modification
implies corruption and should not be propagated.
-### -i / --interactive {#interactive}
+### -i, --interactive {#interactive}
This flag can be used to tell rclone that you wish a manual
confirmation before destructive operations.
@@ -9583,7 +9784,7 @@ especially with `rclone sync`.
For example
```
-$ rclone delete -i /tmp/dir
+$ rclone delete --interactive /tmp/dir
rclone: delete "important-file.txt"?
y) Yes, this is OK (default)
n) No, skip this
@@ -9698,6 +9899,14 @@ This tells rclone not to delete more than N files. If that limit is
exceeded then a fatal error will be generated and rclone will stop the
operation in progress.
+### --max-delete-size=SIZE ###
+
+Rclone will stop deleting files when the total size of deletions has
+reached the size specified. It defaults to off.
+
+If that limit is exceeded then a fatal error will be generated and
+rclone will stop the operation in progress.
+
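+For example, to stop a sync that would delete more than 1 GiB in total
+(a sketch; any SizeSuffix value works):
+
+    rclone sync --max-delete-size 1G SOURCE remote:DESTINATION
+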
### --max-depth=N ###
This modifies the recursion depth for all the commands except purge.
@@ -9736,7 +9945,7 @@ When the limit is reached all transfers will stop immediately.
Rclone will exit with exit code 8 if the transfer limit is reached.
-## --metadata / -M
+## -M, --metadata
Setting this flag enables rclone to copy the metadata from the source
to the destination. For local backends this is ownership, permissions,
@@ -10155,7 +10364,7 @@ or with `--backup-dir`. See `--backup-dir` for more info.
For example
- rclone copy -i /path/to/local/file remote:current --suffix .bak
+ rclone copy --interactive /path/to/local/file remote:current --suffix .bak
will copy `/path/to/local` to `remote:current`, but for any files
which would have been updated or deleted have .bak added.
@@ -10164,7 +10373,7 @@ If using `rclone sync` with `--suffix` and without `--backup-dir` then
it is recommended to put a filter rule in excluding the suffix
otherwise the `sync` will delete the backup files.
- rclone sync -i /path/to/local/file remote:current --suffix .bak --exclude "*.bak"
+ rclone sync --interactive /path/to/local/file remote:current --suffix .bak --exclude "*.bak"
### --suffix-keep-extension ###
@@ -10463,9 +10672,9 @@ these options. For example this can be very useful with the HTTP or
WebDAV backends. Rclone HTTP servers have their own set of
configuration for SSL/TLS which you can find in their documentation.
-### --ca-cert string
+### --ca-cert stringArray
-This loads the PEM encoded certificate authority certificate and uses
+This loads the PEM encoded certificate authority certificates and uses
it to verify the certificates of the servers rclone connects to.
If you have generated certificates signed with a local CA then you
@@ -11750,7 +11959,7 @@ and `-v` first.
In conjunction with `rclone sync`, `--delete-excluded` deletes any files
on the destination which are excluded from the command.
-E.g. the scope of `rclone sync -i A: B:` can be restricted:
+E.g. the scope of `rclone sync --interactive A: B:` can be restricted:
rclone --min-size 50k --delete-excluded sync A: B:
@@ -14006,7 +14215,7 @@ Here is an overview of the major features of each cloud storage system.
| Mega | - | - | No | Yes | - | - |
| Memory | MD5 | R/W | No | No | - | - |
| Microsoft Azure Blob Storage | MD5 | R/W | No | No | R/W | - |
-| Microsoft OneDrive | SHA1 ⁵ | R/W | Yes | No | R | - |
+| Microsoft OneDrive | QuickXorHash ⁵ | R/W | Yes | No | R | - |
| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
| Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
@@ -14039,9 +14248,7 @@ This is an SHA256 sum of all the 4 MiB block SHA256s.
⁴ WebDAV supports modtimes when used with Owncloud and Nextcloud only.
-⁵ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive
-for business and SharePoint server support Microsoft's own
-[QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash).
+⁵ [QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash) is Microsoft's own hash.
⁶ Mail.ru uses its own modified SHA1 hash
@@ -14471,7 +14678,7 @@ upon backend-specific capabilities.
| Sia | No | No | No | No | No | No | Yes | No | No | Yes |
| SMB | No | No | Yes | Yes | No | No | Yes | No | No | Yes |
| SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes |
-| Storj | Yes † | Yes | Yes | No | No | Yes | Yes | No | No | No |
+| Storj | Yes ☨ | Yes | Yes | No | No | Yes | Yes | Yes | No | No |
| Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No |
| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No | Yes | Yes |
| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes |
@@ -14483,9 +14690,12 @@ upon backend-specific capabilities.
This deletes a directory quicker than just deleting all the files in
the directory.
-† Note Swift and Storj implement this in order to delete
-directory markers but they don't actually have a quicker way of deleting
-files other than deleting them individually.
+† Note Swift implements this in order to delete directory markers but
+it doesn't actually have a quicker way of deleting files other than
+deleting them individually.
+
+☨ Storj implements this efficiently only for entire buckets. If
+purging a directory inside a bucket, files are deleted individually.
‡ StreamUpload is not supported with Nextcloud
@@ -14579,7 +14789,7 @@ These flags are available for every command.
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
--bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
- --ca-cert string CA certificate used to verify servers
+ --ca-cert stringArray CA certificate used to verify servers
--cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
--check-first Do all the checks before starting transfers
--checkers int Number of checkers to run in parallel (default 8)
@@ -14641,6 +14851,7 @@ These flags are available for every command.
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
@@ -14729,7 +14940,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.61.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.62.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -14946,6 +15157,7 @@ and may be set in the config file.
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
+ --gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
@@ -15034,6 +15246,7 @@ and may be set in the config file.
--mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
+ --mega-use-https Use HTTPS for transfers
--mega-user string User name
--netstorage-account string Set the NetStorage account name
--netstorage-host string Domain+path of NetStorage host to connect to
@@ -15049,6 +15262,7 @@ and may be set in the config file.
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
+ --onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command (default "view")
@@ -15073,6 +15287,12 @@ and may be set in the config file.
--oos-no-check-bucket If set, don't attempt to check the bucket exists or create it
--oos-provider string Choose your Auth Provider (default "env_auth")
--oos-region string Object storage Region
+ --oos-sse-customer-algorithm string If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm
+ --oos-sse-customer-key string To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to
+ --oos-sse-customer-key-file string To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated
+ --oos-sse-customer-key-sha256 string If using SSE-C, The optional header that specifies the base64-encoded SHA256 hash of the encryption
+      --oos-sse-kms-key-id string                      if using your own master key in vault, this header specifies the
+ --oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard")
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
@@ -15141,6 +15361,7 @@ and may be set in the config file.
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
+ --s3-sts-endpoint string Endpoint for STS
--s3-upload-concurrency int Concurrency for multipart uploads (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
@@ -15206,12 +15427,13 @@ and may be set in the config file.
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
+ --smb-spn string Service principal name
--smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase
--storj-provider string Choose an authentication method (default "existing")
- --storj-satellite-address string Satellite address (default "us-central-1.storj.io")
+ --storj-satellite-address string Satellite address (default "us1.storj.io")
--sugarsync-access-key-id string Sugarsync Access Key ID
--sugarsync-app-id string Sugarsync App ID
--sugarsync-authorization string Sugarsync authorization
@@ -15846,7 +16068,7 @@ or swarm services that use it and stop them first.
- For successive sync runs, leave off the `--resync` flag.
- Consider using a [filters file](#filtering) for excluding
unnecessary files and directories from the sync.
-- Consider setting up the [--check-access](#check-access-option) feature
+- Consider setting up the [--check-access](#check-access) feature
for safety.
- On Linux, consider setting up a [crontab entry](#cron). bisync can
safely run in concurrent cron jobs thanks to lock files it maintains.
@@ -15976,9 +16198,9 @@ The base directories on the both Path1 and Path2 filesystems must exist
or bisync will fail. This is required for safety - that bisync can verify
that both paths are valid.
-When using `--resync` a newer version of a file on the Path2 filesystem
-will be overwritten by the Path1 filesystem version.
-Carefully evaluate deltas using [--dry-run](https://rclone.org/flags/#non-backend-flags).
+When using `--resync`, a newer version of a file on either the Path1 or Path2
+filesystem will overwrite the file on the other path (only the last version
+will be kept). Carefully evaluate deltas using [--dry-run](https://rclone.org/flags/#non-backend-flags).
For a resync run, one of the paths may be empty (no files in the path tree).
The resync run should result in files on both paths, else a normal non-resync
@@ -15994,14 +16216,27 @@ deleting **everything** in the other path.
Access check files are an additional safety measure against data loss.
bisync will ensure it can find matching `RCLONE_TEST` files in the same places
in the Path1 and Path2 filesystems.
+`RCLONE_TEST` files are not generated automatically.
+For `--check-access` to succeed, you must first either:
+**A)** Place one or more `RCLONE_TEST` files in the Path1 or Path2 filesystem
+and then do either a run without `--check-access` or a [--resync](#resync) to
+set matching files on both filesystems, or
+**B)** Set `--check-filename` to a filename already in use in various locations
+throughout your sync'd fileset.
Time stamps and file contents are not important, just the names and locations.
-Place one or more `RCLONE_TEST` files in the Path1 or Path2 filesystem and
-then do either a run without `--check-access` or a `--resync` to set
-matching files on both filesystems.
If you have symbolic links in your sync tree it is recommended to place
`RCLONE_TEST` files in the linked-to directory tree to protect against
bisync assuming a bunch of deleted files if the linked-to tree should not be
-accessible. Also see the `--check-filename` flag.
+accessible.
+See also the [--check-filename](#check-filename) flag.
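+
+For example, a minimal first-time setup might look like this (paths are
+illustrative):
+
+    # create the marker file on Path1, then resync to set it on both sides
+    touch /path1/RCLONE_TEST
+    rclone bisync /path1 remote:path2 --resync
+
+    # subsequent runs can then enforce the access check
+    rclone bisync /path1 remote:path2 --check-access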
+
+#### --check-filename
+
+Name of the file(s) used in access health validation.
+The default `--check-filename` is `RCLONE_TEST`.
+One or more files having this filename must exist, synchronized between your
+source and destination filesets, in order for `--check-access` to succeed.
+See [--check-access](#check-access) for additional details.
#### --max-delete
@@ -17606,7 +17841,7 @@ List the contents of a bucket
Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.
- rclone sync -i /home/local/directory remote:bucket
+ rclone sync --interactive /home/local/directory remote:bucket
## Configuration
@@ -18010,10 +18245,10 @@ $ rclone -q --s3-versions ls s3:cleanup-test
### Cleanup
If you run `rclone cleanup s3:bucket` then it will remove all pending
-multipart uploads older than 24 hours. You can use the `-i` flag to
-see exactly what it will do. If you want more control over the expiry
-date then run `rclone backend cleanup s3:bucket -o max-age=1h` to
-expire all uploads older than one hour. You can use `rclone backend
+multipart uploads older than 24 hours. You can use the `--interactive`/`-i`
+or `--dry-run` flag to see exactly what it will do. If you want more control over the
+expiry date then run `rclone backend cleanup s3:bucket -o max-age=1h`
+to expire all uploads older than one hour. You can use `rclone backend
list-multipart-uploads s3:bucket` to see the pending multipart
uploads.
@@ -19025,7 +19260,7 @@ Properties:
#### --s3-endpoint
-Endpoint of the Shared Gateway.
+Endpoint for Storj Gateway.
Properties:
@@ -19035,12 +19270,8 @@ Properties:
- Type: string
- Required: false
- Examples:
- - "gateway.eu1.storjshare.io"
- - EU1 Shared Gateway
- - "gateway.us1.storjshare.io"
- - US1 Shared Gateway
- - "gateway.ap1.storjshare.io"
- - Asia-Pacific Shared Gateway
+ - "gateway.storjshare.io"
+ - Global Hosted Gateway
#### --s3-endpoint
@@ -20518,6 +20749,20 @@ Properties:
- Type: bool
- Default: false
+#### --s3-sts-endpoint
+
+Endpoint for STS.
+
+Leave blank if using AWS to use the default endpoint for the region.
+
+Properties:
+
+- Config: sts_endpoint
+- Env Var: RCLONE_S3_STS_ENDPOINT
+- Provider: AWS
+- Type: string
+- Required: false
+
### Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
@@ -20568,9 +20813,9 @@ Usage Examples:
rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]
-This flag also obeys the filters. Test first with -i/--interactive or --dry-run flags
+This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags
- rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard
+ rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard
All the objects shown will be marked for restore, then
@@ -20647,8 +20892,8 @@ Remove unfinished multipart uploads.
This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to see what
+it would do.
rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
@@ -20669,8 +20914,8 @@ Remove old versions of files.
This command removes any old hidden versions of files
on a versions enabled bucket.
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to see what
+it would do.
rclone backend cleanup-hidden s3:bucket/path/to/dir
@@ -23061,7 +23306,7 @@ List the contents of a bucket
Sync `/home/local/directory` to the remote bucket, deleting any
excess files in the bucket.
- rclone sync -i /home/local/directory remote:bucket
+ rclone sync --interactive /home/local/directory remote:bucket
### Application Keys
@@ -24976,7 +25221,7 @@ style or chunk naming scheme is to:
- Create another directory (most probably on the same cloud storage)
and configure a new remote with desired metadata format,
hash type, chunk naming etc.
-- Now run `rclone sync -i oldchunks: newchunks:` and all your data
+- Now run `rclone sync --interactive oldchunks: newchunks:` and all your data
will be transparently converted in transfer.
This may take some time, yet chunker will try server-side
copy if possible.
@@ -25495,7 +25740,7 @@ custom salt is effectively a second password that must be memorized.
based on XSalsa20 cipher and Poly1305 for integrity.
[Names](#name-encryption) (file- and directory names) are also encrypted
by default, but this has some implications and is therefore
-possible to turned off.
+possible to turn off.
## Configuration
@@ -26097,7 +26342,7 @@ as `eremote:`.
To sync the two remotes you would do
- rclone sync -i remote:crypt remote2:crypt
+ rclone sync --interactive remote:crypt remote2:crypt
And to check the integrity you would do
@@ -27384,7 +27629,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote directory, deleting any
excess files in the directory.
- rclone sync -i /home/local/directory remote:directory
+ rclone sync --interactive /home/local/directory remote:directory
### Anonymous FTP
@@ -27941,7 +28186,7 @@ List the contents of a bucket
Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.
- rclone sync -i /home/local/directory remote:bucket
+ rclone sync --interactive /home/local/directory remote:bucket
### Service Account support
@@ -28321,6 +28566,24 @@ Properties:
- "DURABLE_REDUCED_AVAILABILITY"
- Durable reduced availability storage class
+#### --gcs-env-auth
+
+Get GCP IAM credentials from runtime (environment variables or instance metadata if no env vars).
+
+Only applies if service_account_file and service_account_credentials are blank.
+
+Properties:
+
+- Config: env_auth
+- Env Var: RCLONE_GCS_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+ - "false"
+ - Enter credentials in the next step.
+ - "true"
+ - Get GCP IAM credentials from the environment (env vars or IAM).
+
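+As a minimal sketch, a remote relying entirely on ambient credentials
+could be configured like this (the remote name is illustrative):
+
+    [gcs]
+    type = google cloud storage
+    env_auth = true
+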
### Advanced options
Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
@@ -29417,6 +29680,10 @@ as malware or spam and cannot be downloaded" with the error code
indicate you acknowledge the risks of downloading the file and rclone
will download it anyway.
+Note that if you are using a service account it will need Manager
+permission (not Content Manager) for this flag to work. If the SA
+does not have the right permission, Google will just ignore the flag.
+
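+A sketch of a one-off download using the flag (paths are illustrative):
+
+    rclone copy --drive-acknowledge-abuse drive:path/to/flagged-file /tmp/recovered
+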
Properties:
- Config: acknowledge_abuse
@@ -29791,9 +30058,9 @@ This takes an optional directory to trash which make this easier to
use via the API.
rclone backend untrash drive:directory
- rclone backend -i untrash drive:directory subdir
+ rclone backend --interactive untrash drive:directory subdir
-Use the -i flag to see what would be restored before restoring it.
+Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.
Result:
@@ -29827,7 +30094,7 @@ component will be used as the file name.
If the destination is a drive backend then server-side copying will be
attempted if possible.
-Use the -i flag to see what would be copied before copying.
+Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
### exportformats
@@ -29937,9 +30204,15 @@ to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button
(near the top right corner of the right panel), then select "External"
and click on "CREATE"; on the next screen, enter an "Application name"
("rclone" is OK); enter "User Support Email" (your own email is OK);
-enter "Developer Contact Email" (your own email is OK); then click on "Save" (all other data is optional).
-Click again on "Credentials" on the left panel to go back to the
-"Credentials" screen.
+enter "Developer Contact Email" (your own email is OK); then click on
+"Save" (all other data is optional). You will also have to add some scopes,
+including `.../auth/docs` and `.../auth/drive` in order to be able to edit,
+create and delete files with rclone. You may also want to include the
+`.../auth/drive.metadata.readonly` scope. After adding scopes, click
+"Save and continue" to add test users. Be sure to add your own account to
+the test users. Once you've added yourself as a test user and saved the
+changes, click again on "Credentials" on the left panel to go back to
+the "Credentials" screen.
(PS: if you are a GSuite user, you could also select "Internal" instead
of "External" above, but this will restrict API use to Google Workspace
@@ -29952,16 +30225,14 @@ then select "OAuth client ID".
8. It will show you a client ID and client secret. Make a note of these.
- (If you selected "External" at Step 5 continue to "Publish App" in the Steps 9 and 10.
+ (If you selected "External" at Step 5 continue to Step 9.
If you chose "Internal" you don't need to publish and can skip straight to
- Step 11.)
+ Step 10 but your destination drive must be part of the same Google Workspace.)
-9. Go to "Oauth consent screen" and press "Publish App"
+9. Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm.
+ You will also want to add yourself as a test user.
-10. Click "OAuth consent screen", then click "PUBLISH APP" button and
-confirm, or add your account under "Test users".
-
-11. Provide the noted client ID and client secret to rclone.
+10. Provide the noted client ID and client secret to rclone.
Be aware that, due to the "enhanced security" recently introduced by
Google, you are theoretically expected to "submit your app for verification"
@@ -29969,7 +30240,11 @@ and then wait a few weeks(!) for their response; in practice, you can go right
ahead and use the client ID and client secret with rclone, the only issue will
be a very scary confirmation screen shown when you connect via your browser
for rclone to be able to get its token-id (but as this only happens during
-the remote configuration, it's not such a big deal).
+the remote configuration, it's not such a big deal). Keeping the application in
+"Testing" mode will also work, but the limitation is that any grants will then
+expire after a week, which can be annoying to refresh constantly; if a short
+grant time is not a problem for you, this is a workable alternative.
(Thanks to @balazer on github for these instructions.)
@@ -30093,11 +30368,11 @@ List the contents of an album
Sync `/home/local/images` to the Google Photos, removing any excess
files in the album.
- rclone sync -i /home/local/image remote:album/newAlbum
+ rclone sync --interactive /home/local/image remote:album/newAlbum
### Layout
-As Google Photos is not a general purpose cloud storage system the
+As Google Photos is not a general purpose cloud storage system, the
backend is laid out to help you navigate it.
The directories under `media` show different ways of categorizing the
@@ -30876,7 +31151,7 @@ List the contents of a directory
Sync the remote `directory` to `/home/local/directory`, deleting any excess files.
- rclone sync -i remote:directory /home/local/directory
+ rclone sync --interactive remote:directory /home/local/directory
### Setting up your own HDFS instance for testing
@@ -31560,7 +31835,7 @@ List the contents of a directory
Sync the remote `directory` to `/home/local/directory`, deleting any excess files.
- rclone sync -i remote:directory /home/local/directory
+ rclone sync --interactive remote:directory /home/local/directory
### Read only
@@ -31708,7 +31983,7 @@ List the contents of a item
Sync `/home/local/directory` to the remote item, deleting any excess
files in the item.
- rclone sync -i /home/local/directory remote:item
+ rclone sync --interactive /home/local/directory remote:item
## Notes
Because of Internet Archive's architecture, it enqueues write operations (and extra post-processing) in a per-item queue. You can check an item's queue at https://catalogd.archive.org/history/item-name-here . Because of that, uploads and deletes will not show up immediately and will take some time to become available.
@@ -32859,7 +33134,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote path, deleting any
excess files in the path.
- rclone sync -i /home/local/directory remote:directory
+ rclone sync --interactive /home/local/directory remote:directory
### Modified time
@@ -33348,6 +33623,23 @@ Properties:
- Type: bool
- Default: false
+#### --mega-use-https
+
+Use HTTPS for transfers.
+
+MEGA uses plain text HTTP connections by default.
+Some ISPs throttle HTTP connections, which causes transfers to become very slow.
+Enabling this will force MEGA to use HTTPS for all transfers.
+HTTPS is normally not necessary since all data is already encrypted anyway.
+Enabling it will increase CPU usage and add network overhead.
+
+Properties:
+
+- Config: use_https
+- Env Var: RCLONE_MEGA_USE_HTTPS
+- Type: bool
+- Default: false
+
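+A sketch of enabling it on the command line (the remote name is
+illustrative):
+
+    rclone copy --mega-use-https /local/path mega:path
+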
#### --mega-encoding
The encoding for the backend.
@@ -33773,7 +34065,7 @@ List the contents of a container
Sync `/home/local/directory` to the remote container, deleting any excess
files in the container.
- rclone sync -i /home/local/directory remote:container
+ rclone sync --interactive /home/local/directory remote:container
### --fast-list
@@ -34725,10 +35017,19 @@ OneDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
-OneDrive personal supports SHA1 type hashes. OneDrive for business and
-Sharepoint Server support
+OneDrive Personal, OneDrive for Business and Sharepoint Server support
[QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash).
+Before rclone 1.62 the default hash for OneDrive Personal was `SHA1`.
+For rclone 1.62 and above the default for all OneDrive backends is
+`QuickXorHash`.
+
+Starting from July 2023 `SHA1` support is being phased out in OneDrive
+Personal in favour of `QuickXorHash`. If necessary, the
+`--onedrive-hash-type` flag (or `hash_type` config option) can be used
+to select `SHA1` during the transition period if this is important to
+your workflow.
+
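+For example, a sketch of listing SHA1 hashes during the transition (the
+remote name is illustrative):
+
+    rclone hashsum sha1 --onedrive-hash-type sha1 onedrive:path
+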
For all types of OneDrive you can use the `--checksum` flag.
### Restricted filename characters
@@ -35074,6 +35375,48 @@ Properties:
- Type: string
- Required: false
+#### --onedrive-hash-type
+
+Specify the hash in use for the backend.
+
+This specifies the hash type in use. If set to "auto" it will use the
+default hash which is QuickXorHash.
+
+Before rclone 1.62 an SHA1 hash was used by default for OneDrive
+Personal. For 1.62 and later the default is to use a QuickXorHash for
+all OneDrive types. If an SHA1 hash is desired then set this option
+accordingly.
+
+From July 2023 QuickXorHash will be the only available hash for
+both OneDrive for Business and OneDrive Personal.
+
+This can be set to "none" to not use any hashes.
+
+If the hash requested does not exist on the object, it will be
+returned as an empty string which is treated as a missing hash by
+rclone.
+
+
+Properties:
+
+- Config: hash_type
+- Env Var: RCLONE_ONEDRIVE_HASH_TYPE
+- Type: string
+- Default: "auto"
+- Examples:
+ - "auto"
+ - Rclone chooses the best hash
+ - "quickxor"
+ - QuickXor
+ - "sha1"
+ - SHA1
+ - "sha256"
+ - SHA256
+ - "crc32"
+ - CRC32
+ - "none"
+ - None - don't use any hashes
+
#### --onedrive-encoding
The encoding for the backend.
@@ -35186,11 +35529,11 @@ OneDrive supports `rclone cleanup` which causes rclone to look through
every file under the path supplied and delete all version but the
current version. Because this involves traversing all the files, then
querying each file for versions it can be quite slow. Rclone does
-`--checkers` tests in parallel. The command also supports `-i` which
-is a great way to see what it would do.
+`--checkers` tests in parallel. The command also supports `--interactive`/`-i`
+or `--dry-run` which is a great way to see what it would do.
- rclone cleanup -i remote:path/subdir # interactively remove all old version for path/subdir
- rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir
+ rclone cleanup --interactive remote:path/subdir # interactively remove all old versions for path/subdir
+ rclone cleanup remote:path/subdir # unconditionally remove all old versions for path/subdir
**NB** Onedrive personal can't currently delete versions
@@ -35278,24 +35621,45 @@ Shared with me files is not supported by rclone [currently](https://github.com/r
1. Visit [https://onedrive.live.com](https://onedrive.live.com/)
2. Right click an item in `Shared`, then click `Add shortcut to My files` in the context menu
-
- Screenshot (Shared with me)
-
- ![make_shortcut](https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png)
-
-
+ ![make_shortcut](https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png "Screenshot (Shared with me)")
3. The shortcut will appear in `My files`, you can access it with rclone, it behaves like a normal folder/file.
-
- Screenshot (My Files)
+ ![in_my_files](https://i.imgur.com/0S8H3li.png "Screenshot (My Files)")
+ ![rclone_mount](https://i.imgur.com/2Iq66sW.png "Screenshot (rclone mount)")
- ![in_my_files](https://i.imgur.com/0S8H3li.png)
-
+### Live Photos uploaded from iOS (small video clips in .heic files)
-
- Screenshot (rclone mount)
+The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452)
+of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020.
+The usage and download of these uploaded Live Photos is unfortunately still work-in-progress
+and this introduces several issues when copying, synchronising and mounting – both in rclone and in the native OneDrive client on Windows.
- ![rclone_mount](https://i.imgur.com/2Iq66sW.png)
-
+The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface.
+Then download the photo from the web interface. You will then see that the size of the downloaded .heic file is smaller than the size displayed in the web interface.
+The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive.
+
+The different sizes will cause `rclone copy/sync` to repeatedly recopy unmodified photos something like this:
+
+ DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
+ DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
+ INFO : 20230203_123826234_iOS.heic: Copied (replaced existing)
+
+These recopies can be worked around by adding `--ignore-size`. Please note that this workaround only syncs the still picture, not the movie clip,
+and relies on modification dates being correctly updated on all files in all situations.
+
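+A sketch of the workaround (paths are illustrative):
+
+    rclone sync --ignore-size /local/photos remote:photos
+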
+The different sizes will also cause `rclone check` to report size errors something like this:
+
+ ERROR : 20230203_123826234_iOS.heic: sizes differ
+
+These check errors can be suppressed by adding `--ignore-size`.
+
+The different sizes will also cause `rclone mount` to fail downloading with an error something like this:
+
+ ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
+
+or like this when using `--cache-mode=full`:
+
+ INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
+ ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
# OpenDrive
@@ -35474,13 +35838,12 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
# Oracle Object Storage
-
[Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm)
[Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/)
-Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
-command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
+Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in
+too, e.g. `remote:bucket/path/to/dir`.
## Configuration
@@ -35556,7 +35919,7 @@ Enter a value. Press Enter to leave empty.
endpoint>
Option config_file.
-Path to OCI config file
+Full Path to OCI config file
Choose a number from below, or type in your own string value.
Press Enter for the default (~/.oci/config).
1 / oci configuration file location
@@ -35605,6 +35968,99 @@ List the contents of a bucket
rclone ls remote:bucket
rclone ls remote:bucket --max-depth 1
+### OCI Authentication Provider
+
+OCI has various authentication methods. To learn more about authentication methods please refer to the [oci authentication
+methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm) documentation.
+These choices can be specified in the rclone config file.
+
+Rclone supports the following OCI authentication providers:
+
+    User Principal
+    Instance Principal
+    Resource Principal
+    No authentication
+
+#### Authentication provider choice: User Principal
+Sample rclone config file for Authentication Provider User Principal:
+
+ [oos]
+ type = oracleobjectstorage
+ namespace = id34
+ compartment = ocid1.compartment.oc1..aaba
+ region = us-ashburn-1
+ provider = user_principal_auth
+ config_file = /home/opc/.oci/config
+ config_profile = Default
+
+Advantages:
+- One can use this method from any server within OCI or on-premises or from another cloud provider.
+
+Considerations:
+- You need to configure the user's privileges / policy to allow access to object storage.
+- Overhead of managing users and keys.
+- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
+
+#### Authentication provider choice: Instance Principal
+An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal.
+With this approach no credentials have to be stored and managed.
+
+Sample rclone configuration file for Authentication Provider Instance Principal:
+
+ [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
+ [oos]
+ type = oracleobjectstorage
+ namespace = idfn
+ compartment = ocid1.compartment.oc1..aak7a
+ region = us-ashburn-1
+ provider = instance_principal_auth
+
+Advantages:
+
+- With instance principals, you don't need to configure user credentials, transfer or save them to disk on your
+  compute instances, or rotate the credentials.
+- You don't need to deal with users and keys.
+- Greatly helps in automation as you don't have to manage access keys or user private keys, store them in a vault,
+  use KMS, etc.
+
+Considerations:
+
+- You need to configure a dynamic group with this instance as a member and add a policy allowing that dynamic
+  group to read object storage.
+- Everyone who has access to this machine can execute the CLI commands.
+- It is applicable to OCI compute instances only. It cannot be used on external instances or resources.
+
+#### Authentication provider choice: Resource Principal
+Resource principal auth is very similar to instance principal auth but used for resources that are not
+compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
+To use resource principal auth, ensure the rclone process is started with these environment variables set:
+
+ export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+ export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+ export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
+ export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
+
+Sample rclone configuration file for Authentication Provider Resource Principal:
+
+ [oos]
+ type = oracleobjectstorage
+ namespace = id34
+ compartment = ocid1.compartment.oc1..aaba
+ region = us-ashburn-1
+ provider = resource_principal_auth
+
+#### Authentication provider choice: No authentication
+Public buckets do not require any authentication mechanism to read objects.
+Sample rclone configuration file for No authentication:
+
+ [oos]
+ type = oracleobjectstorage
+ namespace = id34
+ compartment = ocid1.compartment.oc1..aaba
+ region = us-ashburn-1
+ provider = no_auth
+
+## Options
### Modified time
The modified time is stored as metadata on the object as
@@ -35759,6 +36215,24 @@ Properties:
Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
+#### --oos-storage-tier
+
+The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
+
+Properties:
+
+- Config: storage_tier
+- Env Var: RCLONE_OOS_STORAGE_TIER
+- Type: string
+- Default: "Standard"
+- Examples:
+ - "Standard"
+ - Standard storage tier, this is the default tier
+ - "InfrequentAccess"
+ - InfrequentAccess storage tier
+ - "Archive"
+ - Archive storage tier
+
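+A sketch of selecting a tier per command (the remote name is
+illustrative):
+
+    rclone copy --oos-storage-tier InfrequentAccess /local/path oos:bucket/path
+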
#### --oos-upload-cutoff
Cutoff for switching to chunked upload.
@@ -35920,6 +36394,89 @@ Properties:
- Type: bool
- Default: false
+#### --oos-sse-customer-key-file
+
+To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated
+with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+
+Properties:
+
+- Config: sse_customer_key_file
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+
+#### --oos-sse-customer-key
+
+To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to
+encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is
+needed. For more information, see Using Your Own Keys for Server-Side Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)
+
+Properties:
+
+- Config: sse_customer_key
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+
+#### --oos-sse-customer-key-sha256
+
+If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption
+key. This value is used to check the integrity of the encryption key. See Using Your Own Keys for
+Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+
+Properties:
+
+- Config: sse_customer_key_sha256
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+
+#### --oos-sse-kms-key-id
+
+If using your own master key in vault, this header specifies the
+OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call
+the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key.
+Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+
+Properties:
+
+- Config: sse_kms_key_id
+- Env Var: RCLONE_OOS_SSE_KMS_KEY_ID
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+
+#### --oos-sse-customer-algorithm
+
+If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm.
+Object Storage supports "AES256" as the encryption algorithm. For more information, see
+Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+
+Properties:
+
+- Config: sse_customer_algorithm
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+ - "AES256"
+ - AES256
+
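+As a minimal sketch, an SSE-C setup in the config file could look like
+this (values are illustrative; see the notes above on which of the
+sse_* options may be combined):
+
+    [oos]
+    type = oracleobjectstorage
+    namespace = id34
+    compartment = ocid1.compartment.oc1..aaba
+    region = us-ashburn-1
+    provider = user_principal_auth
+    sse_customer_key_file = /path/to/aes256-key.b64
+    sse_customer_algorithm = AES256
+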
## Backend commands
Here are the commands specific to the oracleobjectstorage backend.
@@ -35987,8 +36544,8 @@ Remove unfinished multipart uploads.
This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to see what
+it would do.
rclone backend cleanup oos:bucket/path/to/object
rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
@@ -36090,7 +36647,7 @@ List the contents of a bucket
Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.
- rclone sync -i /home/local/directory remote:bucket
+ rclone sync --interactive /home/local/directory remote:bucket
### --fast-list
@@ -36651,7 +37208,7 @@ List the contents of a container
Sync `/home/local/directory` to the remote container, deleting any
excess files in the container.
- rclone sync -i /home/local/directory remote:container
+ rclone sync --interactive /home/local/directory remote:container
### Configuration from an OpenStack credentials file
@@ -37717,9 +38274,10 @@ may be different for different operations, and may change over time.
This is a backend for the [Seafile](https://www.seafile.com/) storage service:
- It works with both the free community edition or the professional edition.
-- Seafile versions 6.x and 7.x are all supported.
+- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users
+- Using a Library API Token is **not** supported
## Configuration
@@ -37821,7 +38379,7 @@ List the contents of a library
Sync `/home/local/directory` to the remote library, deleting any
excess files in the library.
- rclone sync -i /home/local/directory seafile:library
+ rclone sync --interactive /home/local/directory seafile:library
### Configuration in library mode
@@ -37917,7 +38475,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote library, deleting any
excess files in the library.
- rclone sync -i /home/local/directory seafile:
+ rclone sync --interactive /home/local/directory seafile:
### --fast-list
@@ -37965,14 +38523,17 @@ that has already been shared, you will get the exact same link.
### Compatibility
-It has been actively tested using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions:
+It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions:
- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition
+- 9.0.10 community edition
Versions below 6.0 are not supported.
Versions between 6.0 and 6.3 haven't been tested and might not work properly.
+Each new version of `rclone` is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) of the seafile community server.
+
### Standard options
@@ -38201,7 +38762,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote directory, deleting any
excess files in the directory.
- rclone sync -i /home/local/directory remote:directory
+ rclone sync --interactive /home/local/directory remote:directory
Mount the remote path `/srv/www-data/` to the local path
`/mnt/www-data`
@@ -39217,6 +39778,25 @@ Properties:
- Type: string
- Default: "WORKGROUP"
+#### --smb-spn
+
+Service principal name.
+
+Rclone presents this name to the server. Some servers use this as further
+authentication, and it often needs to be set for clusters. For example:
+
+ cifs/remotehost:1020
+
+Leave blank if not sure.
+
+
+Properties:
+
+- Config: spn
+- Env Var: RCLONE_SMB_SPN
+- Type: string
+- Required: false
+
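+A sketch of setting it on the command line (values are illustrative):
+
+    rclone lsd --smb-spn cifs/remotehost:1020 smbremote:
+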
### Advanced options
Here are the Advanced options specific to smb (SMB / CIFS).
@@ -39456,14 +40036,14 @@ Choose a number from below, or type in your own value
\ "new"
provider> new
Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
-Enter a string value. Press Enter for the default ("us-central-1.storj.io").
+Enter a string value. Press Enter for the default ("us1.storj.io").
Choose a number from below, or type in your own value
- 1 / US Central 1
- \ "us-central-1.storj.io"
- 2 / Europe West 1
- \ "europe-west-1.storj.io"
- 3 / Asia East 1
- \ "asia-east-1.storj.io"
+ 1 / US1
+ \ "us1.storj.io"
+ 2 / EU1
+ \ "eu1.storj.io"
+ 3 / AP1
+ \ "ap1.storj.io"
satellite_address> 1
API Key.
Enter a string value. Press Enter for the default ("").
@@ -39475,7 +40055,7 @@ Remote config
--------------------
[remote]
type = storj
-satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777
+satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us1.storj.io:7777
api_key = your-api-key-for-your-storj-project
passphrase = your-human-readable-encryption-passphrase
access_grant = the-access-grant-generated-from-the-api-key-and-passphrase
@@ -39531,14 +40111,14 @@ Properties:
- Env Var: RCLONE_STORJ_SATELLITE_ADDRESS
- Provider: new
- Type: string
-- Default: "us-central-1.storj.io"
+- Default: "us1.storj.io"
- Examples:
- - "us-central-1.storj.io"
- - US Central 1
- - "europe-west-1.storj.io"
- - Europe West 1
- - "asia-east-1.storj.io"
- - Asia East 1
+ - "us1.storj.io"
+ - US1
+ - "eu1.storj.io"
+ - EU1
+ - "ap1.storj.io"
+ - AP1
#### --storj-api-key
@@ -39662,7 +40242,7 @@ Use the `size` command to print the total size of objects in a bucket or a folde
Use the `sync` command to sync the source to the destination,
changing the destination only, deleting any excess files.
- rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/
+ rclone sync --interactive --progress /home/local/directory/ remote:bucket/path/to/dir/
The `--progress` flag is for displaying progress information.
Remove it if you don't need this information.
@@ -39672,15 +40252,15 @@ to see exactly what would be copied and deleted.
The sync can be done also from Storj to the local file system.
- rclone sync -i --progress remote:bucket/path/to/dir/ /home/local/directory/
+ rclone sync --interactive --progress remote:bucket/path/to/dir/ /home/local/directory/
Or between two Storj buckets.
- rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
+ rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
Or even between another cloud storage and Storj.
- rclone sync -i --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
+ rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
## Limitations
@@ -40867,7 +41447,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote path, deleting any
excess files in the path.
- rclone sync -i /home/local/directory remote:directory
+ rclone sync --interactive /home/local/directory remote:directory
Yandex paths may be as deep as required, e.g. `remote:directory/subdirectory`.
@@ -41115,7 +41695,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote path, deleting any
excess files in the path.
- rclone sync -i /home/local/directory remote:directory
+ rclone sync --interactive /home/local/directory remote:directory
Zoho paths may be as deep as required, eg `remote:directory/subdirectory`.
@@ -41269,7 +41849,7 @@ The client id and client secret can now be used with rclone.
Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`, so
- rclone sync -i /home/source /tmp/destination
+ rclone sync --interactive /home/source /tmp/destination
Will sync `/home/source` to `/tmp/destination`.
@@ -41884,6 +42464,131 @@ Options:
# Changelog
+## v1.62.0 - 2023-03-14
+
+[See commits](https://github.com/rclone/rclone/compare/v1.61.0...v1.62.0)
+
+* New Features
+ * accounting: Make checkers show what they are doing (Nick Craig-Wood)
+ * authorize: Add support for custom templates (Hunter Wittenborn)
+ * build
+ * Update to go1.20 (Nick Craig-Wood, Anagh Kumar Baranwal)
+ * Add winget releaser workflow (Ryan Caezar Itang)
+ * Add dependabot (Ryan Caezar Itang)
+ * doc updates (albertony, Bryan Kaplan, Gerard Bosch, IMTheNachoMan, Justin Winokur, Manoj Ghosh, Nick Craig-Wood, Ole Frost, Peter Brunner, piyushgarg, Ryan Caezar Itang, Simmon Li, ToBeFree)
+ * filter: Emit INFO message when can't work out directory filters (Nick Craig-Wood)
+ * fs
+ * Added multiple ca certificate support. (alankrit)
+ * Add `--max-delete-size` a delete size threshold (Leandro Sacchet)
+ * fspath: Allow the symbols `@` and `+` in remote names (albertony)
+ * lib/terminal: Enable windows console virtual terminal sequences processing (ANSI/VT100 colors) (albertony)
+ * move: If `--check-first` and `--order-by` are set then delete with perfect ordering (Nick Craig-Wood)
+ * serve http: Support `--auth-proxy` (Matthias Baur)
+* Bug Fixes
+ * accounting
+ * Avoid negative ETA values for very slow speeds (albertony)
+ * Limit length of ETA string (albertony)
+ * Show human readable elapsed time when longer than a day (albertony)
+ * all: Apply codeql fixes (Aaron Gokaslan)
+ * build
+ * Fix condition for manual workflow run (albertony)
+ * Fix building for ARMv5 and ARMv6 (albertony)
+ * selfupdate: Consider ARM version
+ * install.sh: fix ARMv6 download
+ * version: Report ARM version
+ * deletefile: Return error code 4 if file does not exist (Nick Craig-Wood)
+ * docker: Fix volume plugin does not remount volume on docker restart (logopk)
+ * fs: Fix race conditions in `--max-delete` and `--max-delete-size` (Nick Craig-Wood)
+ * lib/oauthutil: Handle fatal errors better (Alex Chen)
+ * mount2: Fix `--allow-non-empty` (Nick Craig-Wood)
+ * operations: Fix concurrency: use `--checkers` unless transferring files (Nick Craig-Wood)
+ * serve ftp: Fix timestamps older than 1 year in listings (Nick Craig-Wood)
+ * sync: Fix concurrency: use `--checkers` unless transferring files (Nick Craig-Wood)
+ * tree
+ * Fix nil pointer exception on stat failure (Nick Craig-Wood)
+ * Fix colored output on windows (albertony)
+ * Fix display of files with illegal Windows file system names (Nick Craig-Wood)
+* Mount
+ * Fix creating and renaming files on case insensitive backends (Nick Craig-Wood)
+ * Do not treat `\\?\` prefixed paths as network share paths on windows (albertony)
+ * Fix check for empty mount point on Linux (Nick Craig-Wood)
+ * Fix `--allow-non-empty` (Nick Craig-Wood)
+ * Avoid incorrect or premature overlap check on windows (albertony)
+ * Update to fuse3 after bazil.org/fuse update (Nick Craig-Wood)
+* VFS
+ * Make uploaded files retain modtime with non-modtime backends (Nick Craig-Wood)
+ * Fix incorrect modtime on fs which don't support setting modtime (Nick Craig-Wood)
+ * Fix rename of directory containing files to be uploaded (Nick Craig-Wood)
+* Local
+ * Fix `%!w()` in "failed to read directory" error (Marks Polakovs)
+ * Fix exclusion of dangling symlinks with -L/--copy-links (Nick Craig-Wood)
+* Crypt
+ * Obey `--ignore-checksum` (Nick Craig-Wood)
+ * Fix for unencrypted directory names on case insensitive remotes (Ole Frost)
+* Azure Blob
+ * Remove workarounds for SDK bugs after v0.6.1 update (Nick Craig-Wood)
+* B2
+ * Fix uploading files bigger than 1TiB (Nick Craig-Wood)
+* Drive
+ * Note that `--drive-acknowledge-abuse` needs SA Manager permission (Nick Craig-Wood)
+ * Make `--drive-stop-on-upload-limit` respond to storageQuotaExceeded (Ninh Pham)
+* FTP
+ * Retry 426 errors (Nick Craig-Wood)
+ * Retry errors when initiating downloads (Nick Craig-Wood)
+ * Revert to upstream `github.com/jlaffaye/ftp` now fix is merged (Nick Craig-Wood)
+* Google Cloud Storage
+ * Add `--gcs-env-auth` to pick up IAM credentials from env/instance (Peter Brunner)
+* Mega
+ * Add `--mega-use-https` flag (NodudeWasTaken)
+* Onedrive
+ * Default onedrive personal to QuickXorHash as Microsoft is removing SHA1 (Nick Craig-Wood)
+ * Add `--onedrive-hash-type` to change the hash in use (Nick Craig-Wood)
+ * Improve speed of QuickXorHash (LXY)
+* Oracle Object Storage
+ * Speed up operations by using S3 pacer and setting minsleep to 10ms (Manoj Ghosh)
+ * Expose the `storage_tier` option in config (Manoj Ghosh)
+ * Bring your own encryption keys (Manoj Ghosh)
+* S3
+ * Check multipart upload ETag when `--s3-no-head` is in use (Nick Craig-Wood)
+ * Add `--s3-sts-endpoint` to specify STS endpoint (Nick Craig-Wood)
+ * Fix incorrect tier support for StorJ and IDrive when pointing at a file (Ole Frost)
+ * Fix AWS STS failing if `--s3-endpoint` is set (Nick Craig-Wood)
+ * Make purge remove directory markers too (Nick Craig-Wood)
+* Seafile
+ * Renew library password (Fred)
+* SFTP
+ * Fix uploads being 65% slower than they should be with crypt (Nick Craig-Wood)
+* Smb
+ * Allow SPN (service principal name) to be configured (Nick Craig-Wood)
+ * Check smb connection is closed (happyxhw)
+* Storj
+ * Implement `rclone link` (Kaloyan Raev)
+ * Implement `rclone purge` (Kaloyan Raev)
+ * Update satellite urls and labels (Kaloyan Raev)
+* WebDAV
+ * Fix interop with davrods server (Nick Craig-Wood)
+
+## v1.61.1 - 2022-12-23
+
+[See commits](https://github.com/rclone/rclone/compare/v1.61.0...v1.61.1)
+
+* Bug Fixes
+ * docs:
+ * Show only significant parts of version number in version introduced label (albertony)
+ * Fix unescaped HTML (Nick Craig-Wood)
+ * lib/http: Shutdown all servers on exit to remove unix socket (Nick Craig-Wood)
+ * rc: Fix `--rc-addr` flag (which is an alternate for `--url`) (Anagh Kumar Baranwal)
+ * serve restic
+ * Don't serve via http if serving via `--stdio` (Nick Craig-Wood)
+ * Fix immediate exit when not using stdio (Nick Craig-Wood)
+ * serve webdav
+ * Fix `--baseurl` handling after `lib/http` refactor (Nick Craig-Wood)
+ * Fix running duplicate Serve call (Nick Craig-Wood)
+* Azure Blob
+ * Fix "409 Public access is not permitted on this storage account" (Nick Craig-Wood)
+* S3
+ * storj: Update endpoints (Kaloyan Raev)
+
## v1.61.0 - 2022-12-20
[See commits](https://github.com/rclone/rclone/compare/v1.60.0...v1.61.0)
@@ -46247,7 +46952,7 @@ The syncs would be incremental (on a file by file basis).
e.g.
- rclone sync -i drive:Folder s3:bucket
+ rclone sync --interactive drive:Folder s3:bucket
### Using rclone from multiple locations at the same time ###
@@ -46256,8 +46961,8 @@ You can use rclone from multiple places at the same time if you choose
different subdirectory for the output, e.g.
```
-Server A> rclone sync -i /tmp/whatever remote:ServerA
-Server B> rclone sync -i /tmp/whatever remote:ServerB
+Server A> rclone sync --interactive /tmp/whatever remote:ServerA
+Server B> rclone sync --interactive /tmp/whatever remote:ServerB
```
If you sync to the same directory then you should use rclone copy
@@ -47146,6 +47851,28 @@ put them back in again.` >}}
* vanplus <60313789+vanplus@users.noreply.github.com>
* Jack <16779171+jkpe@users.noreply.github.com>
* Abdullah Saglam
+ * Marks Polakovs
+ * piyushgarg
+ * Kaloyan Raev
+ * IMTheNachoMan
+ * alankrit
+ * Bryan Kaplan <#@bryankaplan.com>
+ * LXY <767763591@qq.com>
+ * Simmon Li (he/him)
+ * happyxhw <44490504+happyxhw@users.noreply.github.com>
+ * Simmon Li (he/him)
+ * Matthias Baur
+ * Hunter Wittenborn
+ * logopk
+ * Gerard Bosch <30733556+gerardbosch@users.noreply.github.com>
+ * ToBeFree
+ * NodudeWasTaken <75137537+NodudeWasTaken@users.noreply.github.com>
+ * Peter Brunner
+ * Ninh Pham
+ * Ryan Caezar Itang
+ * Peter Brunner
+ * Leandro Sacchet
+ * dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
# Contact the rclone project #
diff --git a/MANUAL.txt b/MANUAL.txt
index 81320892b..2a6915092 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Dec 20, 2022
+Mar 14, 2023
Rclone syncs your files to cloud storage
@@ -320,6 +320,13 @@ does not launch a GUI by default, it runs in the CMD Window.
If you are planning to use the rclone mount feature then you will need
to install the third party utility WinFsp also.
+Windows package manager (Winget)
+
+Winget comes pre-installed with the latest versions of Windows. If not,
+update the App Installer package from the Microsoft store.
+
+ winget install Rclone.Rclone
+
Chocolatey package manager
Make sure you have Choco installed
@@ -339,6 +346,17 @@ developers so it may be out of date. Its current version is as below.
[Chocolatey package]
+Scoop package manager
+
+Make sure you have Scoop installed
+
+ scoop install rclone
+
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date. Its current version is as below.
+
+[Scoop package]
+
Package manager installation
Many Linux, Windows, macOS and other OS distributions package and
@@ -781,7 +799,7 @@ storage system in the config file then the sub path, e.g.
You can define as many storage paths as you like in the config file.
-Please use the -i / --interactive flag while learning rclone to avoid
+Please use the --interactive/-i flag while learning rclone to avoid
accidental data loss.
Subcommands
@@ -790,7 +808,7 @@ rclone uses a system of subcommands. For example
rclone ls remote:path # lists a remote
rclone copy /local/path remote:path # copies /local/path to the remote
- rclone sync -i /local/path remote:path # syncs /local/path to the remote
+ rclone sync --interactive /local/path remote:path # syncs /local/path to the remote
rclone config
@@ -921,7 +939,7 @@ use the copy command instead.
Important: Since this can cause data loss, test first with the --dry-run
or the --interactive/-i flag.
- rclone sync -i SOURCE remote:DESTINATION
+ rclone sync --interactive SOURCE remote:DESTINATION
Note that files in the destination won't be deleted if there were any
errors at any point. Duplicate objects (files with the same name, on
@@ -1758,15 +1776,20 @@ Synopsis
Remote authorization. Used to authorize a remote or headless rclone from
a machine with a browser - use as instructed by rclone config.
-Use the --auth-no-open-browser to prevent rclone to open auth link in
+Use --auth-no-open-browser to prevent rclone from opening the auth link in the
default browser automatically.
+Use --template to generate HTML output via a custom Go template. If a
+blank string is provided as an argument to this flag, the default
+template is used.
+
rclone authorize [flags]
Options
--auth-no-open-browser Do not automatically open auth link in default browser
-h, --help help for authorize
+ --template string The path to a custom Go template for generating HTML responses
See the global flags page for global options not listed here.
@@ -3545,7 +3568,7 @@ group will be taken from the current user, and the built-in group
customized with FUSE options "UserName" and "GroupName", e.g.
-o UserName=user123 -o GroupName="Authenticated Users". The permissions
on each entry will be set according to options --dir-perms and
---file-perms, which takes a value in traditional numeric notation.
+--file-perms, which take a value in traditional Unix numeric notation.
The default permissions corresponds to
--file-perms 0666 --dir-perms 0777, i.e. read and write permissions to
@@ -3553,32 +3576,51 @@ everyone. This means you will not be able to start any programs from the
mount. To be able to do that you must add execute permissions, e.g.
--file-perms 0777 --dir-perms 0777 to add it to everyone. If the program
needs to write files, chances are you will have to enable VFS File
-Caching as well (see also limitations).
+Caching as well (see also limitations). Note that the default write
+permission has some restrictions for accounts other than the owner,
+specifically it lacks the "write extended attributes" permission, as
+explained next.
-Note that the mapping of permissions is not always trivial, and the
-result you see in Windows Explorer may not be exactly like you expected.
-For example, when setting a value that includes write access, this will
-be mapped to individual permissions "write attributes", "write data" and
-"append data", but not "write extended attributes". Windows will then
-show this as basic permission "Special" instead of "Write", because
-"Write" includes the "write extended attributes" permission.
+The mapping of permissions is not always trivial, and the result you see
+in Windows Explorer may not be exactly like you expected. For example,
+when setting a value that includes write access for the group or others
+scope, this will be mapped to individual permissions "write attributes",
+"write data" and "append data", but not "write extended attributes".
+Windows will then show this as basic permission "Special" instead of
+"Write", because "Write" also covers the "write extended attributes"
+permission. When setting digit 0 for group or others, to indicate no
+permissions, they will still get individual permissions "read
+attributes", "read extended attributes" and "read permissions". This is
+done for compatibility reasons, e.g. to allow users without additional
+permissions to be able to read basic metadata about files like in Unix.
+
+WinFsp 2021 (version 1.9) introduced a new FUSE option "FileSecurity",
+that allows the complete specification of file security descriptors
+using SDDL. With this you get detailed control of the resulting
+permissions, compared to use of the POSIX permissions described above,
+and no additional permissions will be added automatically for
+compatibility with Unix. Some example use cases follow.
If you set POSIX permissions for only allowing access to the owner,
using --file-perms 0600 --dir-perms 0700, the user group and the
built-in "Everyone" group will still be given some special permissions,
-such as "read attributes" and "read permissions", in Windows. This is
-done for compatibility reasons, e.g. to allow users without additional
-permissions to be able to read basic metadata about files like in UNIX.
-One case that may arise is that other programs (incorrectly) interprets
-this as the file being accessible by everyone. For example an SSH client
-may warn about "unprotected private key file".
+as described above. Some programs may then (incorrectly) interpret this
+as the file being accessible by everyone, for example an SSH client may
+warn about "unprotected private key file". You can work around this by
+specifying -o FileSecurity="D:P(A;;FA;;;OW)", which sets file all access
+(FA) to the owner (OW), and nothing else.
-WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity",
-that allows the complete specification of file security descriptors
-using SDDL. With this you can work around issues such as the mentioned
-"unprotected private key file" by specifying
--o FileSecurity="D:P(A;;FA;;;OW)", for file all access (FA) to the owner
-(OW).
+When setting write permissions then, except for the owner, this does not
+include the "write extended attributes" permission, as mentioned above.
+This may prevent applications from writing to files, giving permission
+denied error instead. To set working write permissions for the built-in
+"Everyone" group, similar to what it gets by default but with the
+addition of the "write extended attributes", you can specify
+-o FileSecurity="D:P(A;;FRFW;;;WD)", which sets file read (FR) and file
+write (FW) to everyone (WD). If file execute (FX) is also needed, then
+change to -o FileSecurity="D:P(A;;FRFWFX;;;WD)", or set file all access
+(FA) to get full access permissions, including delete, with
+-o FileSecurity="D:P(A;;FA;;;WD)".
Windows caveats
@@ -3604,14 +3646,56 @@ command-line utility PsExec, from Microsoft's Sysinternals suite, which
has option -s to start processes as the SYSTEM account. Another
alternative is to run the mount command from a Windows Scheduled Task,
or a Windows Service, configured to run as the SYSTEM account. A third
-alternative is to use the WinFsp.Launcher infrastructure). Note that
-when running rclone as another user, it will not use the configuration
-file from your profile unless you tell it to with the --config option.
-Read more in the install documentation.
+alternative is to use the WinFsp.Launcher infrastructure). Read more in
+the install documentation. Note that when running rclone as another
+user, it will not use the configuration file from your profile unless
+you tell it to with the --config option. Note also that it is now the
+SYSTEM account that will have the owner permissions, and other accounts
+will have permissions according to the group or others scopes. As
+mentioned above, these will then not get the "write extended attributes"
+permission, and this may prevent writing to files. You can work around
+this with the FileSecurity option, see example above.
Note that mapping to a directory path, instead of a drive letter, does
not suffer from the same limitations.
+Mounting on macOS
+
+Mounting on macOS can be done either via macFUSE (also known as osxfuse)
+or FUSE-T. macFUSE is a traditional FUSE driver utilizing a macOS kernel
+extension (kext). FUSE-T is an alternative FUSE system which "mounts"
+via an NFSv4 local server.
+
+FUSE-T Limitations, Caveats, and Notes
+
+There are some limitations, caveats, and notes about how it works. These
+are current as of FUSE-T version 1.0.14.
+
+ModTime update on read
+
+As per the FUSE-T wiki:
+
+ File access and modification times cannot be set separately as it
+ seems to be an issue with the NFS client which always modifies both.
+ Can be reproduced with 'touch -m' and 'touch -a' commands
+
+This means that viewing files with various tools, notably macOS Finder,
+will cause rclone to update the modification time of the file. This may
+make rclone upload a full new copy of the file.
+
+Unicode Normalization
+
+Rclone includes flags for unicode normalization with macFUSE that should
+be updated for FUSE-T. See this forum post and FUSE-T issue #16. The
+following flag should be added to the rclone mount command.
+
+ -o modules=iconv,from_code=UTF-8,to_code=UTF-8
+
+Read Only mounts
+
+When mounting with --read-only, attempts to write to files will fail
+silently as opposed to with a clear warning as in macFUSE.
+
Limitations
Without the use of --vfs-cache-mode this can only write files
@@ -6443,11 +6527,83 @@ result is accurate. However, this is very inefficient and may cost lots
of API calls resulting in extra charges. Use it as a last resort and
only with caching.
+Auth Proxy
+
+If you supply the parameter --auth-proxy /path/to/program then rclone
+will use that program to generate backends on the fly which then are
+used to authenticate incoming requests. This uses a simple JSON based
+protocol with input on STDIN and output on STDOUT.
+
+PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together,
+if --auth-proxy is set the authorized keys option will be ignored.
+
+There is an example program bin/test_proxy.py in the rclone source code.
+
+The program's job is to take a user and pass on the input and turn those
+into the config for a backend on STDOUT in JSON format. This config will
+have any default parameters for the backend added, but it won't use
+configuration from environment variables or command line options - it is
+the job of the proxy program to make a complete config.
+
+The config generated must have this extra parameter:
+
+- _root - root to use for the backend
+
+And it may have this parameter:
+
+- _obscure - comma separated strings for parameters to obscure
+
+If password authentication was used by the client, input to the proxy
+process (on STDIN) would look similar to this:
+
+ {
+ "user": "me",
+ "pass": "mypassword"
+ }
+
+If public-key authentication was used by the client, input to the proxy
+process (on STDIN) would look similar to this:
+
+ {
+ "user": "me",
+ "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
+ }
+
+And as an example return this on STDOUT
+
+ {
+ "type": "sftp",
+ "_root": "",
+ "_obscure": "pass",
+ "user": "me",
+ "pass": "mypassword",
+ "host": "sftp.example.com"
+ }
+
+This would mean that an SFTP backend would be created on the fly for the
+user and pass/public_key returned in the output to the host given. Note
+that since _obscure is set to pass, rclone will obscure the pass
+parameter before creating the backend (which is required for sftp
+backends).
+
+The program can manipulate the supplied user in any way. For example,
+to proxy to many different sftp backends, you could make the user be
+user@example.com, then set the host to example.com and the user to user
+in the output. For security you'd probably want to restrict the host to
+a limited list.
+
+Note that an internal cache is keyed on user, so only use that for
+configuration; don't use pass or public_key. This also means that if a
+user's password or public-key is changed the cache will need to expire
+(which takes 5 mins) before it takes effect.
+
+This can be used to build general purpose proxies to any kind of backend
+that rclone supports.
+
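+As an illustration, here is a minimal proxy program in the spirit of
+bin/test_proxy.py. This is a sketch only: the fixed sftp host and the
+password-only handling are assumptions for the example, not
+requirements of the protocol.
+
+    #!/usr/bin/env python3
+    # Minimal auth proxy sketch: read the login JSON from STDIN and
+    # write a complete backend config to STDOUT.
+    import json
+    import sys
+
+    request = json.load(sys.stdin)      # e.g. {"user": "me", "pass": "secret"}
+    config = {
+        "type": "sftp",
+        "_root": "",                    # root to use for the backend
+        "_obscure": "pass",             # rclone will obscure this parameter
+        "user": request["user"],
+        "pass": request.get("pass", ""),
+        "host": "sftp.example.com",     # assumed fixed host
+    }
+    json.dump(config, sys.stdout)
+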
rclone serve http remote:path [flags]
Options
--addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+ --auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
--cert string TLS PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
@@ -7194,6 +7350,32 @@ If this flag is set to "auto" then rclone will choose the first
supported hash on the backend or you can use a named hash such as "MD5"
or "SHA-1". Use the hashsum command to see the full list.
+Access WebDAV on Windows
+
+A WebDAV shared folder can be mapped as a drive on Windows, however the
+default settings prevent it. Windows will fail to connect to the server
+using insecure Basic authentication. It will not even display any login
+dialog. Windows requires an SSL / HTTPS connection to be used with
+Basic. If you try to connect via the Add Network Location Wizard you
+will get the following error: "The folder you entered does not appear
+to be valid. Please choose another". However, you still can connect if
+you set the following registry value on the client machine to 2:
+
+    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel
+
+BasicAuthLevel can be set to the following values:
+
+- 0 - Basic authentication disabled
+- 1 - Basic authentication enabled for SSL connections only
+- 2 - Basic authentication enabled for SSL connections and for non-SSL
+  connections
+
+If required, increase the FileSizeLimitInBytes value (under the same
+registry key) to a higher value. Navigate to the Services interface,
+then restart the WebClient service.
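+
+For example, from an elevated command prompt, the change and the
+service restart might look like this (a sketch; verify the key path on
+your system first):
+
+    reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v BasicAuthLevel /t REG_DWORD /d 2 /f
+    net stop WebClient
+    net start WebClient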
+
+Access Office applications on WebDAV
+
+Navigate to the following registry key and create a new DWORD
+BasicAuthLevel with value 2:
+
+    HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet
+
+The value has the same meaning as above:
+
+- 0 - Basic authentication disabled
+- 1 - Basic authentication enabled for SSL connections only
+- 2 - Basic authentication enabled for SSL and for non-SSL connections
+
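+A minimal example (16.0 is illustrative; use the version matching your
+Office installation):
+
+    reg add "HKCU\Software\Microsoft\Office\16.0\Common\Internet" /v BasicAuthLevel /t REG_DWORD /d 2 /f
+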
+https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint
+
Server options
Use --addr to specify which IP address and port the server should listen
@@ -7993,7 +8175,7 @@ unless --no-create or --recursive is provided.
If --recursive is used then recursively sets the modification time on
all existing files that are found under the path. Filters are supported,
-and you can test with the --dry-run or the --interactive flag.
+and you can test with the --dry-run or the --interactive/-i flag.
If --timestamp is used then sets the modification time to that time
instead of the current time. Times may be specified as one of:
@@ -8277,8 +8459,8 @@ Will get their own names
Valid remote names
Remote names are case sensitive, and must adhere to the following rules:
-- May contain number, letter, _, -, . and space. - May not start with -
-or space. - May not end with space.
+- May contain number, letter, _, -, ., +, @ and space.
+- May not start with - or space.
+- May not end with space.
Starting with rclone version 1.61, any Unicode numbers and letters are
allowed, while in older versions it was limited to plain ASCII (0-9,
@@ -8333,11 +8515,11 @@ current directory prefix.
So to sync a directory called sync:me to a remote called remote: use
- rclone sync -i ./sync:me remote:path
+ rclone sync --interactive ./sync:me remote:path
or
- rclone sync -i /full/path/to/sync:me remote:path
+ rclone sync --interactive /full/path/to/sync:me remote:path
Server Side Copy
@@ -8368,8 +8550,8 @@ same.
This can be used when scripting to make aged backups efficiently, e.g.
- rclone sync -i remote:current-backup remote:previous-backup
- rclone sync -i /path/to/files remote:current-backup
+ rclone sync --interactive remote:current-backup remote:previous-backup
+ rclone sync --interactive /path/to/files remote:current-backup
Metadata support
@@ -8575,7 +8757,7 @@ a filter rule.
For example
- rclone sync -i /path/to/local remote:current --backup-dir remote:old
+ rclone sync --interactive /path/to/local remote:current --backup-dir remote:old
will sync /path/to/local to remote:current, but any files which would
have been updated or deleted will be stored in remote:old.
@@ -8742,6 +8924,12 @@ with checking.
It can also be useful to ensure perfect ordering when using --order-by.
+If both --check-first and --order-by are set when doing rclone move then
+rclone will use the transfer thread to delete source files which don't
+need transferring. This will enable perfect ordering of the transfers
+and deletes, but will cause the transfer stats to show more items than
+expected.
+
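+For example, a move with sequenced deletes might look like this (the
+paths and ordering are illustrative):
+
+    rclone move --check-first --order-by size,desc /path/to/src remote:dst
+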
Using this flag can use more memory as it effectively sets --max-backlog
to infinite. This means that all the info on the objects to transfer is
held in memory before the transfers start.
@@ -9041,7 +9229,7 @@ workaround for those with care.
Add an HTTP header for all download transactions. The flag can be
repeated to add multiple headers.
- rclone sync -i s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"
+ rclone sync --interactive s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"
See the GitHub issue here for currently supported backends.
@@ -9050,7 +9238,7 @@ See the GitHub issue here for currently supported backends.
Add an HTTP header for all upload transactions. The flag can be repeated
to add multiple headers.
- rclone sync -i ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"
+ rclone sync --interactive ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"
See the GitHub issue here for currently supported backends.
@@ -9160,7 +9348,7 @@ This can be useful as an additional layer of protection for immutable or
append-only data sets (notably backup archives), where modification
implies corruption and should not be propagated.
--i / --interactive
+-i, --interactive
This flag can be used to tell rclone that you wish a manual confirmation
before destructive operations.
@@ -9170,7 +9358,7 @@ especially with rclone sync.
For example
- $ rclone delete -i /tmp/dir
+ $ rclone delete --interactive /tmp/dir
rclone: delete "important-file.txt"?
y) Yes, this is OK (default)
n) No, skip this
@@ -9281,6 +9469,14 @@ This tells rclone not to delete more than N files. If that limit is
exceeded then a fatal error will be generated and rclone will stop the
operation in progress.
+--max-delete-size=SIZE
+
+Rclone will stop deleting files when the total size of deletions has
+reached the size specified. It defaults to off.
+
+If that limit is exceeded then a fatal error will be generated and
+rclone will stop the operation in progress.
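+
+For example, to abort a sync if it would delete more than 1 GiB in
+total (an illustrative threshold):
+
+    rclone sync --interactive --max-delete-size 1G /path/to/src remote:dst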
+
--max-depth=N
This modifies the recursion depth for all the commands except purge.
@@ -9319,7 +9515,7 @@ When the limit is reached all transfers will stop immediately.
Rclone will exit with exit code 8 if the transfer limit is reached.
---metadata / -M
+-M, --metadata
Setting this flag enables rclone to copy the metadata from the source to
the destination. For local backends this is ownership, permissions,
@@ -9732,7 +9928,7 @@ with --backup-dir. See --backup-dir for more info.
For example
- rclone copy -i /path/to/local/file remote:current --suffix .bak
+ rclone copy --interactive /path/to/local/file remote:current --suffix .bak
will copy /path/to/local to remote:current, but any files which would
have been updated or deleted will have .bak added.
@@ -9741,7 +9937,7 @@ If using rclone sync with --suffix and without --backup-dir then it is
recommended to put a filter rule in excluding the suffix otherwise the
sync will delete the backup files.
- rclone sync -i /path/to/local/file remote:current --suffix .bak --exclude "*.bak"
+ rclone sync --interactive /path/to/local/file remote:current --suffix .bak --exclude "*.bak"
--suffix-keep-extension
@@ -10041,10 +10237,10 @@ these options. For example this can be very useful with the HTTP or
WebDAV backends. Rclone HTTP servers have their own set of configuration
for SSL/TLS which you can find in their documentation.
---ca-cert string
+--ca-cert stringArray
-This loads the PEM encoded certificate authority certificate and uses it
-to verify the certificates of the servers rclone connects to.
+This loads the PEM encoded certificate authority certificates and uses
+them to verify the certificates of the servers rclone connects to.
If you have generated certificates signed with a local CA then you will
need this flag to connect to servers using those certificates.
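
For example, to trust two locally generated CAs in a single command
(the file names are illustrative):

    rclone ls remote: --ca-cert ca1.pem --ca-cert ca2.pem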
@@ -11306,7 +11502,7 @@ Important this flag is dangerous to your data - use with --dry-run and
In conjunction with rclone sync, --delete-excluded deletes any files on
the destination which are excluded from the command.
-E.g. the scope of rclone sync -i A: B: can be restricted:
+E.g. the scope of rclone sync --interactive A: B: can be restricted:
rclone --min-size 50k --delete-excluded sync A: B:
@@ -13534,7 +13730,7 @@ Here is an overview of the major features of each cloud storage system.
Mega - - No Yes - -
Memory MD5 R/W No No - -
Microsoft Azure Blob Storage MD5 R/W No No R/W -
- Microsoft OneDrive SHA1 ⁵ R/W Yes No R -
+ Microsoft OneDrive QuickXorHash ⁵ R/W Yes No R -
OpenDrive MD5 R/W Yes Partial ⁸ - -
OpenStack Swift MD5 R/W No No R/W -
Oracle Object Storage MD5 R/W No No R/W -
@@ -13566,8 +13762,7 @@ or sha1sum as well as echo are in the remote's PATH.
⁴ WebDAV supports modtimes when used with Owncloud and Nextcloud only.
-⁵ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for
-business and SharePoint server support Microsoft's own QuickXorHash.
+⁵ QuickXorHash is Microsoft's own hash.
⁶ Mail.ru uses its own modified SHA1 hash
@@ -14029,7 +14224,7 @@ upon backend-specific capabilities.
Sia No No No No No No Yes No No Yes
SMB No No Yes Yes No No Yes No No Yes
SugarSync Yes Yes Yes Yes No No Yes Yes No Yes
- Storj Yes † Yes Yes No No Yes Yes No No No
+ Storj Yes ☨ Yes Yes No No Yes Yes Yes No No
Uptobox No Yes Yes Yes No No No No No No
WebDAV Yes Yes Yes Yes No No Yes ‡ No Yes Yes
Yandex Disk Yes Yes Yes Yes Yes No Yes Yes Yes Yes
@@ -14041,9 +14236,12 @@ Purge
This deletes a directory quicker than just deleting all the files in the
directory.
-† Note Swift and Storj implement this in order to delete directory
-markers but they don't actually have a quicker way of deleting files
-other than deleting them individually.
+† Note Swift implements this in order to delete directory markers, but
+it doesn't actually have a quicker way of deleting files other than
+deleting them individually.
+
+☨ Storj implements this efficiently only for entire buckets. If purging
+a directory inside a bucket, files are deleted individually.
‡ StreamUpload is not supported with Nextcloud
@@ -14134,7 +14332,7 @@ These flags are available for every command.
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
--bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
- --ca-cert string CA certificate used to verify servers
+ --ca-cert stringArray CA certificate used to verify servers
--cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
--check-first Do all the checks before starting transfers
--checkers int Number of checkers to run in parallel (default 8)
@@ -14196,6 +14394,7 @@ These flags are available for every command.
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
@@ -14284,7 +14483,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.61.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.62.0")
-v, --verbose count Print lots more stuff (repeat for more)
Backend Flags
@@ -14499,6 +14698,7 @@ and may be set in the config file.
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
+ --gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
@@ -14587,6 +14787,7 @@ and may be set in the config file.
--mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
+ --mega-use-https Use HTTPS for transfers
--mega-user string User name
--netstorage-account string Set the NetStorage account name
--netstorage-host string Domain+path of NetStorage host to connect to
@@ -14602,6 +14803,7 @@ and may be set in the config file.
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
+ --onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command (default "view")
@@ -14626,6 +14828,12 @@ and may be set in the config file.
--oos-no-check-bucket If set, don't attempt to check the bucket exists or create it
--oos-provider string Choose your Auth Provider (default "env_auth")
--oos-region string Object storage Region
+ --oos-sse-customer-algorithm string If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm
+ --oos-sse-customer-key string To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to
+ --oos-sse-customer-key-file string To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated
+ --oos-sse-customer-key-sha256 string If using SSE-C, The optional header that specifies the base64-encoded SHA256 hash of the encryption
+ --oos-sse-kms-key-id string if using using your own master key in vault, this header specifies the
+ --oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard")
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
@@ -14694,6 +14902,7 @@ and may be set in the config file.
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
+ --s3-sts-endpoint string Endpoint for STS
--s3-upload-concurrency int Concurrency for multipart uploads (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
@@ -14759,12 +14968,13 @@ and may be set in the config file.
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
+ --smb-spn string Service principal name
--smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase
--storj-provider string Choose an authentication method (default "existing")
- --storj-satellite-address string Satellite address (default "us-central-1.storj.io")
+ --storj-satellite-address string Satellite address (default "us1.storj.io")
--sugarsync-access-key-id string Sugarsync Access Key ID
--sugarsync-app-id string Sugarsync App ID
--sugarsync-authorization string Sugarsync authorization
@@ -15470,9 +15680,9 @@ The base directories on the both Path1 and Path2 filesystems must exist
or bisync will fail. This is required for safety - that bisync can
verify that both paths are valid.
-When using --resync a newer version of a file on the Path2 filesystem
-will be overwritten by the Path1 filesystem version. Carefully evaluate
-deltas using --dry-run.
+When using --resync, a newer version of a file on either the Path1 or
+Path2 filesystem will overwrite the file on the other path (only the
+last version will be kept). Carefully evaluate deltas using --dry-run.
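+
+For example, the resync can be previewed first like this (the paths
+are illustrative):
+
+    rclone bisync /path1 remote:path2 --resync --dry-run
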
For a resync run, one of the paths may be empty (no files in the path
tree). The resync run should result in files on both paths, else a
@@ -15488,16 +15698,27 @@ deleting everything in the other path.
Access check files are an additional safety measure against data loss.
bisync will ensure it can find matching RCLONE_TEST files in the same
-places in the Path1 and Path2 filesystems. Time stamps and file contents
-are not important, just the names and locations. Place one or more
-RCLONE_TEST files in the Path1 or Path2 filesystem and then do either a
-run without --check-access or a --resync to set matching files on both
-filesystems. If you have symbolic links in your sync tree it is
+places in the Path1 and Path2 filesystems. RCLONE_TEST files are not
+generated automatically. For --check-access to succeed, you must first
+either: A) Place one or more RCLONE_TEST files in the Path1 or Path2
+filesystem and then do either a run without --check-access or a --resync
+to set matching files on both filesystems, or B) Set --check-filename to
+a filename already in use in various locations throughout your sync'd
+fileset. Time stamps and file contents are not important, just the names
+and locations. If you have symbolic links in your sync tree it is
recommended to place RCLONE_TEST files in the linked-to directory tree
to protect against bisync assuming a bunch of deleted files if the
-linked-to tree should not be accessible. Also see the --check-filename
+linked-to tree should not be accessible. See also the --check-filename
flag.
+--check-filename
+
+Name of the file(s) used in access health validation. The default
+--check-filename is RCLONE_TEST. One or more files having this filename
+must exist, synchronized between your source and destination filesets,
+in order for --check-access to succeed. See --check-access for
+additional details.
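+
+For example (the check filename .rclone-check is illustrative):
+
+    rclone bisync /path1 remote:path2 --check-access --check-filename .rclone-check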
+
--max-delete
As a safety check, if greater than the --max-delete percent of files
@@ -17109,7 +17330,7 @@ List the contents of a bucket
Sync /home/local/directory to the remote bucket, deleting any excess
files in the bucket.
- rclone sync -i /home/local/directory remote:bucket
+ rclone sync --interactive /home/local/directory remote:bucket
Configuration
@@ -17501,10 +17722,11 @@ Clean up all the old versions and show that they've gone.
Cleanup
If you run rclone cleanup s3:bucket then it will remove all pending
-multipart uploads older than 24 hours. You can use the -i flag to see
-exactly what it will do. If you want more control over the expiry date
-then run rclone backend cleanup s3:bucket -o max-age=1h to expire all
-uploads older than one hour. You can use
+multipart uploads older than 24 hours. You can use the --interactive/-i
+or --dry-run flag to see exactly what it will do. If you want more
+control over the expiry date then run
+rclone backend cleanup s3:bucket -o max-age=1h to expire all uploads
+older than one hour. You can use
rclone backend list-multipart-uploads s3:bucket to see the pending
multipart uploads.
@@ -18527,7 +18749,7 @@ Properties:
--s3-endpoint
-Endpoint of the Shared Gateway.
+Endpoint for Storj Gateway.
Properties:
@@ -18537,12 +18759,8 @@ Properties:
- Type: string
- Required: false
- Examples:
- - "gateway.eu1.storjshare.io"
- - EU1 Shared Gateway
- - "gateway.us1.storjshare.io"
- - US1 Shared Gateway
- - "gateway.ap1.storjshare.io"
- - Asia-Pacific Shared Gateway
+ - "gateway.storjshare.io"
+ - Global Hosted Gateway
--s3-endpoint
@@ -20031,6 +20249,20 @@ Properties:
- Type: bool
- Default: false
+--s3-sts-endpoint
+
+Endpoint for STS.
+
+Leave blank if using AWS to use the default endpoint for the region.
+
+Properties:
+
+- Config: sts_endpoint
+- Env Var: RCLONE_S3_STS_ENDPOINT
+- Provider: AWS
+- Type: string
+- Required: false
+
Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case
@@ -20098,10 +20330,10 @@ Usage Examples:
rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]
-This flag also obeys the filters. Test first with -i/--interactive or
+This flag also obeys the filters. Test first with --interactive/-i or
--dry-run flags
- rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard
+ rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard
All the objects shown will be marked for restore, then
@@ -20173,8 +20405,8 @@ Remove unfinished multipart uploads.
This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to
+see what it would do.
rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
@@ -20194,8 +20426,8 @@ Remove old versions of files.
This command removes any old hidden versions of files on a versions
enabled bucket.
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to
+see what it would do.
rclone backend cleanup-hidden s3:bucket/path/to/dir
@@ -22503,7 +22735,7 @@ List the contents of a bucket
Sync /home/local/directory to the remote bucket, deleting any excess
files in the bucket.
- rclone sync -i /home/local/directory remote:bucket
+ rclone sync --interactive /home/local/directory remote:bucket
Application Keys
@@ -24382,9 +24614,9 @@ transaction style or chunk naming scheme is to:
- Create another directory (most probably on the same cloud storage)
and configure a new remote with desired metadata format, hash type,
chunk naming etc.
-- Now run rclone sync -i oldchunks: newchunks: and all your data will
- be transparently converted in transfer. This may take some time, yet
- chunker will try server-side copy if possible.
+- Now run rclone sync --interactive oldchunks: newchunks: and all your
+ data will be transparently converted in transfer. This may take some
+ time, yet chunker will try server-side copy if possible.
- After checking data integrity you may remove configuration section
of the old remote.
@@ -24902,7 +25134,7 @@ password that must be memorized.
File content encryption is performed using NaCl SecretBox, based on
XSalsa20 cipher and Poly1305 for integrity. Names (file- and directory
names) are also encrypted by default, but this has some implications and
-is therefore possible to turned off.
+can therefore be turned off.
Configuration
@@ -25483,7 +25715,7 @@ path remote2:crypt using the same passwords as eremote:.
To sync the two remotes you would do
- rclone sync -i remote:crypt remote2:crypt
+ rclone sync --interactive remote:crypt remote2:crypt
And to check the integrity you would do
@@ -26742,7 +26974,7 @@ List the contents of a directory
Sync /home/local/directory to the remote directory, deleting any excess
files in the directory.
- rclone sync -i /home/local/directory remote:directory
+ rclone sync --interactive /home/local/directory remote:directory
Anonymous FTP
@@ -27291,7 +27523,7 @@ List the contents of a bucket
Sync /home/local/directory to the remote bucket, deleting any excess
files in the bucket.
- rclone sync -i /home/local/directory remote:bucket
+ rclone sync --interactive /home/local/directory remote:bucket
Service Account support
@@ -27668,6 +27900,27 @@ Properties:
- "DURABLE_REDUCED_AVAILABILITY"
- Durable reduced availability storage class
+--gcs-env-auth
+
+Get GCP IAM credentials from runtime (environment variables or instance
+meta data if no env vars).
+
+Only applies if service_account_file and service_account_credentials
+are blank.
+
+Properties:
+
+- Config: env_auth
+- Env Var: RCLONE_GCS_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+ - "false"
+ - Enter credentials in the next step.
+ - "true"
+ - Get GCP IAM credentials from the environment (env vars or
+ IAM).
+
Advanced options
Here are the Advanced options specific to google cloud storage (Google
@@ -28799,6 +29052,10 @@ as malware or spam and cannot be downloaded" with the error code
you acknowledge the risks of downloading the file and rclone will
download it anyway.
+Note that if you are using a service account it will need Manager
+permission (not Content Manager) for this flag to work. If the SA does
+not have the right permission, Google will just ignore the flag.
+
Properties:
- Config: acknowledge_abuse
@@ -29164,9 +29421,10 @@ This takes an optional directory to trash which make this easier to use
via the API.
rclone backend untrash drive:directory
- rclone backend -i untrash drive:directory subdir
+ rclone backend --interactive untrash drive:directory subdir
-Use the -i flag to see what would be restored before restoring it.
+Use the --interactive/-i or --dry-run flag to see what would be restored
+before restoring it.
Result:
@@ -29199,7 +29457,8 @@ be used as the file name.
If the destination is a drive backend then server-side copying will be
attempted if possible.
-Use the -i flag to see what would be copied before copying.
+Use the --interactive/-i or --dry-run flag to see what would be copied
+before copying.
exportformats
@@ -29310,9 +29569,15 @@ Here is how to create your own Google Drive client ID for rclone:
"External" and click on "CREATE"; on the next screen, enter an
"Application name" ("rclone" is OK); enter "User Support Email"
(your own email is OK); enter "Developer Contact Email" (your own
- email is OK); then click on "Save" (all other data is optional).
- Click again on "Credentials" on the left panel to go back to the
- "Credentials" screen.
+ email is OK); then click on "Save" (all other data is optional). You
+ will also have to add some scopes, including .../auth/docs and
+ .../auth/drive in order to be able to edit, create and delete files
+ with RClone. You may also want to include the
+ ../auth/drive.metadata.readonly scope. After adding scopes, click
+ "Save and continue" to add test users. Be sure to add your own
+ account to the test users. Once you've added yourself as a test user
+ and saved the changes, click again on "Credentials" on the left
+ panel to go back to the "Credentials" screen.
(PS: if you are a GSuite user, you could also select "Internal"
instead of "External" above, but this will restrict API use to
@@ -29327,16 +29592,15 @@ Here is how to create your own Google Drive client ID for rclone:
8. It will show you a client ID and client secret. Make a note of
these.
- (If you selected "External" at Step 5 continue to "Publish App" in
- the Steps 9 and 10. If you chose "Internal" you don't need to
- publish and can skip straight to Step 11.)
+ (If you selected "External" at Step 5 continue to Step 9. If you
+ chose "Internal" you don't need to publish and can skip straight to
+ Step 10 but your destination drive must be part of the same Google
+ Workspace.)
-9. Go to "Oauth consent screen" and press "Publish App"
+9. Go to "Oauth consent screen" and then click "PUBLISH APP" button and
+ confirm. You will also want to add yourself as a test user.
-10. Click "OAuth consent screen", then click "PUBLISH APP" button and
- confirm, or add your account under "Test users".
-
-11. Provide the noted client ID and client secret to rclone.
+10. Provide the noted client ID and client secret to rclone.
Be aware that, due to the "enhanced security" recently introduced by
Google, you are theoretically expected to "submit your app for
@@ -29345,7 +29609,11 @@ practice, you can go right ahead and use the client ID and client secret
with rclone, the only issue will be a very scary confirmation screen
shown when you connect via your browser for rclone to be able to get its
token-id (but as this only happens during the remote configuration, it's
-not such a big deal).
+not such a big deal). Keeping the application in "Testing" will work as
+well, but the limitation is that any grants will expire after a week and
+must be refreshed, which can be annoying; if a short grant time is not a
+problem for you, testing mode is sufficient.
(Thanks to @balazer on github for these instructions.)
@@ -29466,11 +29734,11 @@ List the contents of an album
Sync /home/local/images to the Google Photos, removing any excess files
in the album.
- rclone sync -i /home/local/image remote:album/newAlbum
+ rclone sync --interactive /home/local/image remote:album/newAlbum
Layout
-As Google Photos is not a general purpose cloud storage system the
+As Google Photos is not a general purpose cloud storage system, the
backend is laid out to help you navigate it.
The directories under media show different ways of categorizing the
@@ -30230,7 +30498,7 @@ List the contents of a directory
Sync the remote directory to /home/local/directory, deleting any excess
files.
- rclone sync -i remote:directory /home/local/directory
+ rclone sync --interactive remote:directory /home/local/directory
Setting up your own HDFS instance for testing
@@ -30921,7 +31189,7 @@ List the contents of a directory
Sync the remote directory to /home/local/directory, deleting any excess
files.
- rclone sync -i remote:directory /home/local/directory
+ rclone sync --interactive remote:directory /home/local/directory
Read only
@@ -31069,7 +31337,7 @@ List the contents of a item
Sync /home/local/directory to the remote item, deleting any excess files
in the item.
- rclone sync -i /home/local/directory remote:item
+ rclone sync --interactive /home/local/directory remote:item
Notes
@@ -32273,7 +32541,7 @@ List the contents of a directory
Sync /home/local/directory to the remote path, deleting any excess files
in the path.
- rclone sync -i /home/local/directory remote:directory
+ rclone sync --interactive /home/local/directory remote:directory
Modified time
@@ -32756,6 +33024,23 @@ Properties:
- Type: bool
- Default: false
+--mega-use-https
+
+Use HTTPS for transfers.
+
+MEGA uses plain text HTTP connections by default. Some ISPs throttle
+HTTP connections, which causes transfers to become very slow. Enabling
+this will force MEGA to use HTTPS for all transfers. HTTPS is normally
+not necessary since all data is already encrypted anyway. Enabling it
+will increase CPU usage and add network overhead.
+
+Properties:
+
+- Config: use_https
+- Env Var: RCLONE_MEGA_USE_HTTPS
+- Type: bool
+- Default: false
+
--mega-encoding
The encoding for the backend.
@@ -33216,7 +33501,7 @@ List the contents of a container
Sync /home/local/directory to the remote container, deleting any excess
files in the container.
- rclone sync -i /home/local/directory remote:container
+ rclone sync --interactive /home/local/directory remote:container
--fast-list
@@ -34194,8 +34479,18 @@ OneDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
-OneDrive personal supports SHA1 type hashes. OneDrive for business and
-Sharepoint Server support QuickXorHash.
+OneDrive Personal, OneDrive for Business and Sharepoint Server support
+QuickXorHash.
+
+Before rclone 1.62 the default hash for Onedrive Personal was SHA1. For
+rclone 1.62 and above the default for all Onedrive backends is
+QuickXorHash.
+
+Starting from July 2023 SHA1 support is being phased out in Onedrive
+Personal in favour of QuickXorHash. If necessary the
+--onedrive-hash-type flag (or hash_type config option) can be used to
+select SHA1 during the transition period if this is important for your
+workflow.
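+
+For example, to keep comparing by SHA1 during the transition (the
+remote name is illustrative):
+
+    rclone check /local/path onedrive:path --onedrive-hash-type sha1
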
For all types of OneDrive you can use the --checksum flag.
@@ -34550,6 +34845,46 @@ Properties:
- Type: string
- Required: false
+--onedrive-hash-type
+
+Specify the hash in use for the backend.
+
+This specifies the hash type in use. If set to "auto" it will use the
+default hash, which is QuickXorHash.
+
+Before rclone 1.62 an SHA1 hash was used by default for Onedrive
+Personal. For 1.62 and later the default is to use a QuickXorHash for
+all onedrive types. If an SHA1 hash is desired then set this option
+accordingly.
+
+From July 2023 QuickXorHash will be the only available hash for both
+OneDrive for Business and OneDrive Personal.
+
+This can be set to "none" to not use any hashes.
+
+If the hash requested does not exist on the object, it will be returned
+as an empty string which is treated as a missing hash by rclone.
+
+Properties:
+
+- Config: hash_type
+- Env Var: RCLONE_ONEDRIVE_HASH_TYPE
+- Type: string
+- Default: "auto"
+- Examples:
+ - "auto"
+ - Rclone chooses the best hash
+ - "quickxor"
+ - QuickXor
+ - "sha1"
+ - SHA1
+ - "sha256"
+ - SHA256
+ - "crc32"
+ - CRC32
+ - "none"
+ - None - don't use any hashes
+
--onedrive-encoding
The encoding for the backend.
@@ -34671,11 +35006,11 @@ OneDrive supports rclone cleanup which causes rclone to look through
every file under the path supplied and delete all version but the
current version. Because this involves traversing all the files, then
querying each file for versions it can be quite slow. Rclone does
---checkers tests in parallel. The command also supports -i which is a
-great way to see what it would do.
+--checkers tests in parallel. The command also supports --interactive/-i
+or --dry-run which is a great way to see what it would do.
- rclone cleanup -i remote:path/subdir # interactively remove all old version for path/subdir
- rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir
+ rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir
+ rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir
NB Onedrive personal can't currently delete versions
@@ -34767,24 +35102,55 @@ Shared with me files is not supported by rclone currently, but there is
a workaround:
1. Visit https://onedrive.live.com
-
2. Right click an item in Shared, then click Add shortcut to My files in
- the context
-
- Screenshot (Shared with me)
-
- [make_shortcut]
-
+ the context [make_shortcut]
3. The shortcut will appear in My files, you can access it with rclone,
- it behaves like a normal folder/file.
+ it behaves like a normal folder/file. [in_my_files] [rclone_mount]
- Screenshot (My Files)
+Live Photos uploaded from iOS (small video clips in .heic files)
- [in_my_files]
+The iOS OneDrive app introduced upload and storage of Live Photos in
+2020. The usage and download of these uploaded Live Photos is
+unfortunately still work-in-progress and this introduces several issues
+when copying, synchronising and mounting – both in rclone and in the
+native OneDrive client on Windows.
-Screenshot (rclone mount)
+The root cause can easily be seen if you locate one of your Live Photos
+in the OneDrive web interface. Then download the photo from the web
+interface. You will then see that the size of the downloaded .heic file
+is smaller than the size displayed in the web interface. The downloaded
+file is smaller because it only contains a single frame (still photo)
+extracted from the Live Photo (movie) stored in OneDrive.
+
+The different sizes will cause rclone copy/sync to repeatedly recopy
+unmodified photos something like this:
+
+ DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
+ DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
+ INFO : 20230203_123826234_iOS.heic: Copied (replaced existing)
+
+These recopies can be worked around by adding --ignore-size. Please note
+that this workaround only syncs the still picture, not the movie clip,
+and relies on modification dates being correctly updated on all files in
+all situations.
+
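+For example, a copy that tolerates the size mismatch might look like
+this (the paths are illustrative):
+
+    rclone copy --ignore-size onedrive:Pictures /home/local/Pictures
+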
+The different sizes will also cause rclone check to report size errors
+something like this:
+
+ ERROR : 20230203_123826234_iOS.heic: sizes differ
+
+These check errors can be suppressed by adding --ignore-size.
+
+The different sizes will also cause rclone mount to fail downloading
+with an error something like this:
+
+ ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
+
+or like this when using --cache-mode=full:
+
+ INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
+ ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
-[rclone_mount]
OpenDrive
Paths are specified as remote:path
@@ -35037,7 +35403,7 @@ This will guide you through an interactive setup process:
endpoint>
Option config_file.
- Path to OCI config file
+ Full Path to OCI config file
Choose a number from below, or type in your own string value.
Press Enter for the default (~/.oci/config).
1 / oci configuration file location
@@ -35085,6 +35451,111 @@ List the contents of a bucket
rclone ls remote:bucket
rclone ls remote:bucket --max-depth 1
+OCI Authentication Provider
+
+OCI has various authentication methods. To learn more about them,
+please refer to the OCI authentication methods documentation. These
+choices can be specified in the rclone config file.
+
+Rclone supports the following OCI authentication providers:
+
+ User Principal
+ Instance Principal
+ Resource Principal
+ No authentication
+
+Authentication provider choice: User Principal
+
+Sample rclone config file for Authentication Provider User Principal:
+
+ [oos]
+ type = oracleobjectstorage
+ namespace = id34
+ compartment = ocid1.compartment.oc1..aaba
+ region = us-ashburn-1
+ provider = user_principal_auth
+ config_file = /home/opc/.oci/config
+ config_profile = Default
+
+Advantages:
+
+- One can use this method from any server within OCI, on-premises, or
+  from another cloud provider.
+
+Considerations:
+
+- You need to configure the user's privileges / policy to allow access
+  to object storage.
+- Overhead of managing users and keys.
+- If the user is deleted, the config file will no longer work and may
+  cause automation regressions that use the user's credentials.
+
+Authentication provider choice: Instance Principal
+
+An OCI compute instance can be authorized to use rclone by using its
+identity and certificates as an instance principal. With this approach
+no credentials have to be stored and managed.
+
+Sample rclone configuration file for Authentication Provider Instance
+Principal:
+
+ [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
+ [oos]
+ type = oracleobjectstorage
+ namespace = idfn
+ compartment = ocid1.compartment.oc1..aak7a
+ region = us-ashburn-1
+ provider = instance_principal_auth
+
+Advantages:
+
+- With instance principals, you don't need to configure user
+  credentials, transfer or save them to disk on your compute
+  instances, or rotate the credentials.
+- You don’t need to deal with users and keys.
+- Greatly helps in automation as you don't have to manage access keys,
+ user private keys, storing them in vault, using kms etc.
+
+Considerations:
+
+- You need to configure a dynamic group having this instance as member
+ and add policy to read object storage to that dynamic group.
+- Everyone who has access to this machine can execute the CLI
+ commands.
+- It is applicable to OCI compute instances only. It cannot be used
+  on external instances or resources.
+
+Authentication provider choice: Resource Principal
+
+Resource principal auth is very similar to instance principal auth but
+used for resources that are not compute instances such as serverless
+functions. To use resource principal auth, ensure the rclone process is
+started with these environment variables set in its environment.
+
+ export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+ export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+ export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
+ export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
+
+Sample rclone configuration file for Authentication Provider Resource
+Principal:
+
+ [oos]
+ type = oracleobjectstorage
+ namespace = id34
+ compartment = ocid1.compartment.oc1..aaba
+ region = us-ashburn-1
+ provider = resource_principal_auth
+
+Authentication provider choice: No authentication
+
+Public buckets do not require any authentication mechanism to read
+objects. Sample rclone configuration file for No authentication:
+
+ [oos]
+ type = oracleobjectstorage
+ namespace = id34
+ compartment = ocid1.compartment.oc1..aaba
+ region = us-ashburn-1
+ provider = no_auth
+
Modified time
The modified time is stored as metadata on the object as opc-meta-mtime
@@ -35246,6 +35717,25 @@ Advanced options
Here are the Advanced options specific to oracleobjectstorage (Oracle
Cloud Infrastructure Object Storage).
+--oos-storage-tier
+
+The storage class to use when storing new objects in storage.
+https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
+
+Properties:
+
+- Config: storage_tier
+- Env Var: RCLONE_OOS_STORAGE_TIER
+- Type: string
+- Default: "Standard"
+- Examples:
+ - "Standard"
+ - Standard storage tier, this is the default tier
+ - "InfrequentAccess"
+ - InfrequentAccess storage tier
+ - "Archive"
+ - Archive storage tier
+
--oos-upload-cutoff
Cutoff for switching to chunked upload.
@@ -35406,6 +35896,99 @@ Properties:
- Type: bool
- Default: false
+--oos-sse-customer-key-file
+
+To use SSE-C, a file containing the base64-encoded string of the AES-256
+encryption key associated with the object. Please note only one of
+sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+
+Properties:
+
+- Config: sse_customer_key_file
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+
+--oos-sse-customer-key
+
+To use SSE-C, the optional header that specifies the base64-encoded
+256-bit encryption key to use to encrypt or decrypt the data. Please
+note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id
+is needed. For more information, see Using Your Own Keys for Server-Side
+Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)
+
+Properties:
+
+- Config: sse_customer_key
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+
+--oos-sse-customer-key-sha256
+
+If using SSE-C, the optional header that specifies the base64-encoded
+SHA256 hash of the encryption key. This value is used to check the
+integrity of the encryption key. See Using Your Own Keys for Server-Side
+Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+
+Properties:
+
+- Config: sse_customer_key_sha256
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+
+--oos-sse-kms-key-id
+
+If using your own master key in vault, this header specifies the
+OCID
+(https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm)
+of a master encryption key used to call the Key Management service to
+generate a data encryption key or to encrypt or decrypt a data
+encryption key. Please note only one of
+sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+
+Properties:
+
+- Config: sse_kms_key_id
+- Env Var: RCLONE_OOS_SSE_KMS_KEY_ID
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+
+--oos-sse-customer-algorithm
+
+If using SSE-C, the optional header that specifies "AES256" as the
+encryption algorithm. Object Storage supports "AES256" as the encryption
+algorithm. For more information, see Using Your Own Keys for Server-Side
+Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+
+Properties:
+
+- Config: sse_customer_algorithm
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+ - "AES256"
+ - AES256
+
Backend commands
Here are the commands specific to the oracleobjectstorage backend.
@@ -35471,8 +36054,8 @@ Remove unfinished multipart uploads.
This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to
+see what it would do.
rclone backend cleanup oos:bucket/path/to/object
rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
@@ -35569,7 +36152,7 @@ List the contents of a bucket
Sync /home/local/directory to the remote bucket, deleting any excess
files in the bucket.
- rclone sync -i /home/local/directory remote:bucket
+ rclone sync --interactive /home/local/directory remote:bucket
--fast-list
@@ -36117,7 +36700,7 @@ List the contents of a container
Sync /home/local/directory to the remote container, deleting any excess
files in the container.
- rclone sync -i /home/local/directory remote:container
+ rclone sync --interactive /home/local/directory remote:container
Configuration from an OpenStack credentials file
@@ -37166,8 +37749,9 @@ Seafile
This is a backend for the Seafile storage service: - It works with both
the free community edition or the professional edition. - Seafile
-versions 6.x and 7.x are all supported. - Encrypted libraries are also
-supported. - It supports 2FA enabled users
+versions 6.x, 7.x, 8.x and 9.x are all supported. - Encrypted libraries
+are also supported. - It supports 2FA enabled users. - Using a Library
+API Token is not supported.
Configuration
@@ -37273,7 +37857,7 @@ List the contents of a library
Sync /home/local/directory to the remote library, deleting any excess
files in the library.
- rclone sync -i /home/local/directory seafile:library
+ rclone sync --interactive /home/local/directory seafile:library
Configuration in library mode
@@ -37370,7 +37954,7 @@ List the contents of a directory
Sync /home/local/directory to the remote library, deleting any excess
files in the library.
- rclone sync -i /home/local/directory seafile:
+ rclone sync --interactive /home/local/directory seafile:
--fast-list
@@ -37411,13 +37995,16 @@ get the exact same link.
Compatibility
-It has been actively tested using the seafile docker image of these
+It has been actively developed using the seafile docker image of these
versions: - 6.3.4 community edition - 7.0.5 community edition - 7.1.3
-community edition
+community edition - 9.0.10 community edition
Versions below 6.0 are not supported. Versions between 6.0 and 6.3
haven't been tested and might not work properly.
+Each new version of rclone is automatically tested against the latest
+docker image of the seafile community server.
+
Standard options
Here are the Standard options specific to seafile (seafile).
@@ -37638,7 +38225,7 @@ List the contents of a directory
Sync /home/local/directory to the remote directory, deleting any excess
files in the directory.
- rclone sync -i /home/local/directory remote:directory
+ rclone sync --interactive /home/local/directory remote:directory
Mount the remote path /srv/www-data/ to the local path /mnt/www-data
@@ -38650,6 +39237,25 @@ Properties:
- Type: string
- Default: "WORKGROUP"
+--smb-spn
+
+Service principal name.
+
+Rclone presents this name to the server. Some servers use this as
+further authentication, and it often needs to be set for clusters. For
+example:
+
+ cifs/remotehost:1020
+
+Leave blank if not sure.
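+
+A usage sketch (the SPN value is illustrative):
+
+    rclone lsd remote: --smb-spn cifs/remotehost:1020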
+
+Properties:
+
+- Config: spn
+- Env Var: RCLONE_SMB_SPN
+- Type: string
+- Required: false
+
Advanced options
Here are the Advanced options specific to smb (SMB / CIFS).
@@ -38882,14 +39488,14 @@ Setup with API key and passphrase
\ "new"
provider> new
Satellite Address. Custom satellite address should match the format: `@:`.
- Enter a string value. Press Enter for the default ("us-central-1.storj.io").
+ Enter a string value. Press Enter for the default ("us1.storj.io").
Choose a number from below, or type in your own value
- 1 / US Central 1
- \ "us-central-1.storj.io"
- 2 / Europe West 1
- \ "europe-west-1.storj.io"
- 3 / Asia East 1
- \ "asia-east-1.storj.io"
+ 1 / US1
+ \ "us1.storj.io"
+ 2 / EU1
+ \ "eu1.storj.io"
+ 3 / AP1
+ \ "ap1.storj.io"
satellite_address> 1
API Key.
Enter a string value. Press Enter for the default ("").
@@ -38901,7 +39507,7 @@ Setup with API key and passphrase
--------------------
[remote]
type = storj
- satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777
+ satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us1.storj.io:7777
api_key = your-api-key-for-your-storj-project
passphrase = your-human-readable-encryption-passphrase
access_grant = the-access-grant-generated-from-the-api-key-and-passphrase
@@ -38958,14 +39564,14 @@ Properties:
- Env Var: RCLONE_STORJ_SATELLITE_ADDRESS
- Provider: new
- Type: string
-- Default: "us-central-1.storj.io"
+- Default: "us1.storj.io"
- Examples:
- - "us-central-1.storj.io"
- - US Central 1
- - "europe-west-1.storj.io"
- - Europe West 1
- - "asia-east-1.storj.io"
- - Asia East 1
+ - "us1.storj.io"
+ - US1
+ - "eu1.storj.io"
+ - EU1
+ - "ap1.storj.io"
+ - AP1
--storj-api-key
@@ -39090,7 +39696,7 @@ Sync two Locations
Use the sync command to sync the source to the destination, changing the
destination only, deleting any excess files.
- rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/
+ rclone sync --interactive --progress /home/local/directory/ remote:bucket/path/to/dir/
The --progress flag is for displaying progress information. Remove it if
you don't need this information.
@@ -39100,15 +39706,15 @@ see exactly what would be copied and deleted.
The sync can be done also from Storj to the local file system.
- rclone sync -i --progress remote:bucket/path/to/dir/ /home/local/directory/
+ rclone sync --interactive --progress remote:bucket/path/to/dir/ /home/local/directory/
Or between two Storj buckets.
- rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
+ rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
Or even between another cloud storage and Storj.
- rclone sync -i --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
+ rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
Limitations
@@ -40361,7 +40967,7 @@ List the contents of a directory
Sync /home/local/directory to the remote path, deleting any excess files
in the path.
- rclone sync -i /home/local/directory remote:directory
+ rclone sync --interactive /home/local/directory remote:directory
Yandex paths may be as deep as required, e.g.
remote:directory/subdirectory.
@@ -40603,7 +41209,7 @@ List the contents of a directory
Sync /home/local/directory to the remote path, deleting any excess files
in the path.
- rclone sync -i /home/local/directory remote:directory
+ rclone sync --interactive /home/local/directory remote:directory
Zoho paths may be as deep as required, eg remote:directory/subdirectory.
@@ -40758,7 +41364,7 @@ Local Filesystem
Local paths are specified as normal filesystem paths, e.g.
/path/to/wherever, so
- rclone sync -i /home/source /tmp/destination
+ rclone sync --interactive /home/source /tmp/destination
Will sync /home/source to /tmp/destination.
@@ -41358,6 +41964,174 @@ Options:
Changelog
+v1.62.0 - 2023-03-14
+
+See commits
+
+- New Features
+ - accounting: Make checkers show what they are doing (Nick
+ Craig-Wood)
+ - authorize: Add support for custom templates (Hunter Wittenborn)
+ - build
+ - Update to go1.20 (Nick Craig-Wood, Anagh Kumar Baranwal)
+ - Add winget releaser workflow (Ryan Caezar Itang)
+ - Add dependabot (Ryan Caezar Itang)
+ - doc updates (albertony, Bryan Kaplan, Gerard Bosch,
+ IMTheNachoMan, Justin Winokur, Manoj Ghosh, Nick Craig-Wood, Ole
+ Frost, Peter Brunner, piyushgarg, Ryan Caezar Itang, Simmon Li,
+ ToBeFree)
+ - filter: Emit INFO message when can't work out directory filters
+ (Nick Craig-Wood)
+ - fs
+ - Added multiple ca certificate support. (alankrit)
+ - Add --max-delete-size a delete size threshold (Leandro
+ Sacchet)
+ - fspath: Allow the symbols @ and + in remote names (albertony)
+ - lib/terminal: Enable windows console virtual terminal sequences
+ processing (ANSI/VT100 colors) (albertony)
+ - move: If --check-first and --order-by are set then delete with
+ perfect ordering (Nick Craig-Wood)
+ - serve http: Support --auth-proxy (Matthias Baur)
+- Bug Fixes
+ - accounting
+ - Avoid negative ETA values for very slow speeds (albertony)
+ - Limit length of ETA string (albertony)
+ - Show human readable elapsed time when longer than a day
+ (albertony)
+ - all: Apply codeql fixes (Aaron Gokaslan)
+ - build
+ - Fix condition for manual workflow run (albertony)
+ - Fix building for ARMv5 and ARMv6 (albertony)
+ - selfupdate: Consider ARM version
+ - install.sh: fix ARMv6 download
+ - version: Report ARM version
+ - deletefile: Return error code 4 if file does not exist (Nick
+ Craig-Wood)
+ - docker: Fix volume plugin does not remount volume on docker
+ restart (logopk)
+ - fs: Fix race conditions in --max-delete and --max-delete-size
+ (Nick Craig-Wood)
+ - lib/oauthutil: Handle fatal errors better (Alex Chen)
+ - mount2: Fix --allow-non-empty (Nick Craig-Wood)
+ - operations: Fix concurrency: use --checkers unless transferring
+ files (Nick Craig-Wood)
+ - serve ftp: Fix timestamps older than 1 year in listings (Nick
+ Craig-Wood)
+ - sync: Fix concurrency: use --checkers unless transferring files
+ (Nick Craig-Wood)
+ - tree
+ - Fix nil pointer exception on stat failure (Nick Craig-Wood)
+ - Fix colored output on windows (albertony)
+ - Fix display of files with illegal Windows file system names
+ (Nick Craig-Wood)
+- Mount
+ - Fix creating and renaming files on case insensitive backends
+ (Nick Craig-Wood)
+ - Do not treat \\?\ prefixed paths as network share paths on
+ windows (albertony)
+ - Fix check for empty mount point on Linux (Nick Craig-Wood)
+ - Fix --allow-non-empty (Nick Craig-Wood)
+ - Avoid incorrect or premature overlap check on windows
+ (albertony)
+ - Update to fuse3 after bazil.org/fuse update (Nick Craig-Wood)
+- VFS
+ - Make uploaded files retain modtime with non-modtime backends
+ (Nick Craig-Wood)
+ - Fix incorrect modtime on fs which don't support setting modtime
+ (Nick Craig-Wood)
+ - Fix rename of directory containing files to be uploaded (Nick
+ Craig-Wood)
+- Local
+ - Fix %!w() in "failed to read directory" error (Marks
+ Polakovs)
+ - Fix exclusion of dangling symlinks with -L/--copy-links (Nick
+ Craig-Wood)
+- Crypt
+ - Obey --ignore-checksum (Nick Craig-Wood)
+ - Fix for unencrypted directory names on case insensitive remotes
+ (Ole Frost)
+- Azure Blob
+ - Remove workarounds for SDK bugs after v0.6.1 update (Nick
+ Craig-Wood)
+- B2
+ - Fix uploading files bigger than 1TiB (Nick Craig-Wood)
+- Drive
+ - Note that --drive-acknowledge-abuse needs SA Manager permission
+ (Nick Craig-Wood)
+    - Make --drive-stop-on-upload-limit respond to
+      storageQuotaExceeded (Ninh Pham)
+- FTP
+ - Retry 426 errors (Nick Craig-Wood)
+ - Retry errors when initiating downloads (Nick Craig-Wood)
+ - Revert to upstream github.com/jlaffaye/ftp now fix is merged
+ (Nick Craig-Wood)
+- Google Cloud Storage
+ - Add --gcs-env-auth to pick up IAM credentials from env/instance
+ (Peter Brunner)
+- Mega
+ - Add --mega-use-https flag (NodudeWasTaken)
+- Onedrive
+ - Default onedrive personal to QuickXorHash as Microsoft is
+ removing SHA1 (Nick Craig-Wood)
+ - Add --onedrive-hash-type to change the hash in use (Nick
+ Craig-Wood)
+ - Improve speed of QuickXorHash (LXY)
+- Oracle Object Storage
+ - Speed up operations by using S3 pacer and setting minsleep to
+ 10ms (Manoj Ghosh)
+ - Expose the storage_tier option in config (Manoj Ghosh)
+ - Bring your own encryption keys (Manoj Ghosh)
+- S3
+ - Check multipart upload ETag when --s3-no-head is in use (Nick
+ Craig-Wood)
+ - Add --s3-sts-endpoint to specify STS endpoint (Nick Craig-Wood)
+ - Fix incorrect tier support for StorJ and IDrive when pointing at
+ a file (Ole Frost)
+ - Fix AWS STS failing if --s3-endpoint is set (Nick Craig-Wood)
+ - Make purge remove directory markers too (Nick Craig-Wood)
+- Seafile
+ - Renew library password (Fred)
+- SFTP
+ - Fix uploads being 65% slower than they should be with crypt
+ (Nick Craig-Wood)
+- Smb
+ - Allow SPN (service principal name) to be configured (Nick
+ Craig-Wood)
+ - Check smb connection is closed (happyxhw)
+- Storj
+ - Implement rclone link (Kaloyan Raev)
+ - Implement rclone purge (Kaloyan Raev)
+ - Update satellite urls and labels (Kaloyan Raev)
+- WebDAV
+ - Fix interop with davrods server (Nick Craig-Wood)
+
+v1.61.1 - 2022-12-23
+
+See commits
+
+- Bug Fixes
+ - docs:
+ - Show only significant parts of version number in version
+ introduced label (albertony)
+ - Fix unescaped HTML (Nick Craig-Wood)
+ - lib/http: Shutdown all servers on exit to remove unix socket
+ (Nick Craig-Wood)
+ - rc: Fix --rc-addr flag (which is an alternate for --url) (Anagh
+ Kumar Baranwal)
+ - serve restic
+ - Don't serve via http if serving via --stdio (Nick
+ Craig-Wood)
+ - Fix immediate exit when not using stdio (Nick Craig-Wood)
+ - serve webdav
+ - Fix --baseurl handling after lib/http refactor (Nick
+ Craig-Wood)
+ - Fix running duplicate Serve call (Nick Craig-Wood)
+- Azure Blob
+ - Fix "409 Public access is not permitted on this storage account"
+ (Nick Craig-Wood)
+- S3
+ - storj: Update endpoints (Kaloyan Raev)
+
v1.61.0 - 2022-12-20
See commits
@@ -47032,15 +47806,15 @@ The syncs would be incremental (on a file by file basis).
e.g.
- rclone sync -i drive:Folder s3:bucket
+ rclone sync --interactive drive:Folder s3:bucket
Using rclone from multiple locations at the same time
You can use rclone from multiple places at the same time if you choose
different subdirectory for the output, e.g.
- Server A> rclone sync -i /tmp/whatever remote:ServerA
- Server B> rclone sync -i /tmp/whatever remote:ServerB
+ Server A> rclone sync --interactive /tmp/whatever remote:ServerA
+ Server B> rclone sync --interactive /tmp/whatever remote:ServerB
If you sync to the same directory then you should use rclone copy
otherwise the two instances of rclone may delete each other's files,
@@ -47926,6 +48700,28 @@ email addresses removed from here need to be addeed to bin/.ignore-emails to mak
- vanplus 60313789+vanplus@users.noreply.github.com
- Jack 16779171+jkpe@users.noreply.github.com
- Abdullah Saglam abdullah.saglam@stonebranch.com
+- Marks Polakovs github@markspolakovs.me
+- piyushgarg piyushgarg80@gmail.com
+- Kaloyan Raev kaloyan-raev@users.noreply.github.com
+- IMTheNachoMan imthenachoman@gmail.com
+- alankrit alankrit@google.com
+- Bryan Kaplan <#@bryankaplan.com>
+- LXY 767763591@qq.com
+- Simmon Li (he/him) li.simmon@gmail.com
+- happyxhw 44490504+happyxhw@users.noreply.github.com
+- Simmon Li (he/him) hello@crespire.dev
+- Matthias Baur baurmatt@users.noreply.github.com
+- Hunter Wittenborn hunter@hunterwittenborn.com
+- logopk peter@kreuser.name
+- Gerard Bosch 30733556+gerardbosch@users.noreply.github.com
+- ToBeFree github@tfrei.de
+- NodudeWasTaken 75137537+NodudeWasTaken@users.noreply.github.com
+- Peter Brunner peter@lugoues.net
+- Ninh Pham dongian.rapclubkhtn@gmail.com
+- Ryan Caezar Itang sitiom@proton.me
+- Peter Brunner peter@psykhe.com
+- Leandro Sacchet leandro.sacchet@animati.com.br
+- dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Contact the rclone project
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index 264111ef1..f212bab14 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -5,6 +5,110 @@ description: "Rclone Changelog"
# Changelog
+## v1.62.0 - 2023-03-14
+
+[See commits](https://github.com/rclone/rclone/compare/v1.61.0...v1.62.0)
+
+* New Features
+ * accounting: Make checkers show what they are doing (Nick Craig-Wood)
+ * authorize: Add support for custom templates (Hunter Wittenborn)
+ * build
+ * Update to go1.20 (Nick Craig-Wood, Anagh Kumar Baranwal)
+ * Add winget releaser workflow (Ryan Caezar Itang)
+ * Add dependabot (Ryan Caezar Itang)
+ * doc updates (albertony, Bryan Kaplan, Gerard Bosch, IMTheNachoMan, Justin Winokur, Manoj Ghosh, Nick Craig-Wood, Ole Frost, Peter Brunner, piyushgarg, Ryan Caezar Itang, Simmon Li, ToBeFree)
+ * filter: Emit INFO message when can't work out directory filters (Nick Craig-Wood)
+ * fs
+ * Added multiple ca certificate support. (alankrit)
+ * Add `--max-delete-size` a delete size threshold (Leandro Sacchet)
+ * fspath: Allow the symbols `@` and `+` in remote names (albertony)
+ * lib/terminal: Enable windows console virtual terminal sequences processing (ANSI/VT100 colors) (albertony)
+ * move: If `--check-first` and `--order-by` are set then delete with perfect ordering (Nick Craig-Wood)
+ * serve http: Support `--auth-proxy` (Matthias Baur)
+* Bug Fixes
+ * accounting
+ * Avoid negative ETA values for very slow speeds (albertony)
+ * Limit length of ETA string (albertony)
+ * Show human readable elapsed time when longer than a day (albertony)
+ * all: Apply codeql fixes (Aaron Gokaslan)
+ * build
+ * Fix condition for manual workflow run (albertony)
+ * Fix building for ARMv5 and ARMv6 (albertony)
+ * selfupdate: Consider ARM version
+ * install.sh: fix ARMv6 download
+ * version: Report ARM version
+ * deletefile: Return error code 4 if file does not exist (Nick Craig-Wood)
+ * docker: Fix volume plugin does not remount volume on docker restart (logopk)
+ * fs: Fix race conditions in `--max-delete` and `--max-delete-size` (Nick Craig-Wood)
+ * lib/oauthutil: Handle fatal errors better (Alex Chen)
+ * mount2: Fix `--allow-non-empty` (Nick Craig-Wood)
+ * operations: Fix concurrency: use `--checkers` unless transferring files (Nick Craig-Wood)
+ * serve ftp: Fix timestamps older than 1 year in listings (Nick Craig-Wood)
+ * sync: Fix concurrency: use `--checkers` unless transferring files (Nick Craig-Wood)
+ * tree
+ * Fix nil pointer exception on stat failure (Nick Craig-Wood)
+ * Fix colored output on windows (albertony)
+ * Fix display of files with illegal Windows file system names (Nick Craig-Wood)
+* Mount
+ * Fix creating and renaming files on case insensitive backends (Nick Craig-Wood)
+ * Do not treat `\\?\` prefixed paths as network share paths on windows (albertony)
+ * Fix check for empty mount point on Linux (Nick Craig-Wood)
+ * Fix `--allow-non-empty` (Nick Craig-Wood)
+ * Avoid incorrect or premature overlap check on windows (albertony)
+ * Update to fuse3 after bazil.org/fuse update (Nick Craig-Wood)
+* VFS
+ * Make uploaded files retain modtime with non-modtime backends (Nick Craig-Wood)
+ * Fix incorrect modtime on fs which don't support setting modtime (Nick Craig-Wood)
+ * Fix rename of directory containing files to be uploaded (Nick Craig-Wood)
+* Local
+ * Fix `%!w()` in "failed to read directory" error (Marks Polakovs)
+ * Fix exclusion of dangling symlinks with -L/--copy-links (Nick Craig-Wood)
+* Crypt
+ * Obey `--ignore-checksum` (Nick Craig-Wood)
+ * Fix for unencrypted directory names on case insensitive remotes (Ole Frost)
+* Azure Blob
+ * Remove workarounds for SDK bugs after v0.6.1 update (Nick Craig-Wood)
+* B2
+ * Fix uploading files bigger than 1TiB (Nick Craig-Wood)
+* Drive
+ * Note that `--drive-acknowledge-abuse` needs SA Manager permission (Nick Craig-Wood)
+    * Make `--drive-stop-on-upload-limit` respond to storageQuotaExceeded (Ninh Pham)
+* FTP
+ * Retry 426 errors (Nick Craig-Wood)
+ * Retry errors when initiating downloads (Nick Craig-Wood)
+ * Revert to upstream `github.com/jlaffaye/ftp` now fix is merged (Nick Craig-Wood)
+* Google Cloud Storage
+ * Add `--gcs-env-auth` to pick up IAM credentials from env/instance (Peter Brunner)
+* Mega
+ * Add `--mega-use-https` flag (NodudeWasTaken)
+* Onedrive
+ * Default onedrive personal to QuickXorHash as Microsoft is removing SHA1 (Nick Craig-Wood)
+ * Add `--onedrive-hash-type` to change the hash in use (Nick Craig-Wood)
+ * Improve speed of QuickXorHash (LXY)
+* Oracle Object Storage
+ * Speed up operations by using S3 pacer and setting minsleep to 10ms (Manoj Ghosh)
+ * Expose the `storage_tier` option in config (Manoj Ghosh)
+ * Bring your own encryption keys (Manoj Ghosh)
+* S3
+ * Check multipart upload ETag when `--s3-no-head` is in use (Nick Craig-Wood)
+ * Add `--s3-sts-endpoint` to specify STS endpoint (Nick Craig-Wood)
+ * Fix incorrect tier support for StorJ and IDrive when pointing at a file (Ole Frost)
+ * Fix AWS STS failing if `--s3-endpoint` is set (Nick Craig-Wood)
+ * Make purge remove directory markers too (Nick Craig-Wood)
+* Seafile
+ * Renew library password (Fred)
+* SFTP
+ * Fix uploads being 65% slower than they should be with crypt (Nick Craig-Wood)
+* Smb
+ * Allow SPN (service principal name) to be configured (Nick Craig-Wood)
+ * Check smb connection is closed (happyxhw)
+* Storj
+ * Implement `rclone link` (Kaloyan Raev)
+ * Implement `rclone purge` (Kaloyan Raev)
+ * Update satellite urls and labels (Kaloyan Raev)
+* WebDAV
+ * Fix interop with davrods server (Nick Craig-Wood)
+
## v1.61.1 - 2022-12-23
[See commits](https://github.com/rclone/rclone/compare/v1.61.0...v1.61.1)
diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md
index c976cec68..95b124822 100644
--- a/docs/content/commands/rclone_authorize.md
+++ b/docs/content/commands/rclone_authorize.md
@@ -20,9 +20,7 @@ rclone config.
Use --auth-no-open-browser to prevent rclone to open auth
link in default browser automatically.
-Use --template to generate HTML output via a custom Go
-template. If a blank string is provided as an argument to
-this flag, the default template is used.
+Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
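+
+For example (a sketch; the remote type and template path are placeholders):
+
+    rclone authorize "drive" --template /path/to/template.html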
```
rclone authorize [flags]
@@ -33,7 +31,7 @@ rclone authorize [flags]
```
--auth-no-open-browser Do not automatically open auth link in default browser
-h, --help help for authorize
- --template string Use a custom Go template for generating HTML responses
+ --template string The path to a custom Go template for generating HTML responses
```
See the [global flags page](/flags/) for global options not listed here.
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index 8e23443db..322a3426c 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -170,38 +170,59 @@ group "Everyone" will be used to represent others. The user/group can be customi
with FUSE options "UserName" and "GroupName",
e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`.
The permissions on each entry will be set according to [options](#options)
-`--dir-perms` and `--file-perms`, which takes a value in traditional
+`--dir-perms` and `--file-perms`, which take a value in traditional Unix
[numeric notation](https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation).
The default permissions corresponds to `--file-perms 0666 --dir-perms 0777`,
i.e. read and write permissions to everyone. This means you will not be able
to start any programs from the mount. To be able to do that you must add
execute permissions, e.g. `--file-perms 0777 --dir-perms 0777` to add it
-to everyone. If the program needs to write files, chances are you will have
-to enable [VFS File Caching](#vfs-file-caching) as well (see also [limitations](#limitations)).
+to everyone. If the program needs to write files, chances are you will
+have to enable [VFS File Caching](#vfs-file-caching) as well (see also
+[limitations](#limitations)). Note that the default write permission has
+some restrictions for accounts other than the owner, specifically it lacks
+the "write extended attributes" permission, as explained next.
-Note that the mapping of permissions is not always trivial, and the result
-you see in Windows Explorer may not be exactly like you expected.
-For example, when setting a value that includes write access, this will be
-mapped to individual permissions "write attributes", "write data" and "append data",
-but not "write extended attributes". Windows will then show this as basic
-permission "Special" instead of "Write", because "Write" includes the
-"write extended attributes" permission.
+The mapping of permissions is not always trivial, and the result you see in
+Windows Explorer may not be exactly like you expected. For example, when setting
+a value that includes write access for the group or others scope, this will be
+mapped to individual permissions "write attributes", "write data" and
+"append data", but not "write extended attributes". Windows will then show this
+as basic permission "Special" instead of "Write", because "Write" also covers
+the "write extended attributes" permission. When setting digit 0 for group or
+others, to indicate no permissions, they will still get individual permissions
+"read attributes", "read extended attributes" and "read permissions". This is
+done for compatibility reasons, e.g. to allow users without additional
+permissions to be able to read basic metadata about files like in Unix.
-If you set POSIX permissions for only allowing access to the owner, using
-`--file-perms 0600 --dir-perms 0700`, the user group and the built-in "Everyone"
-group will still be given some special permissions, such as "read attributes"
-and "read permissions", in Windows. This is done for compatibility reasons,
-e.g. to allow users without additional permissions to be able to read basic
-metadata about files like in UNIX. One case that may arise is that other programs
-(incorrectly) interprets this as the file being accessible by everyone. For example
-an SSH client may warn about "unprotected private key file".
-
-WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity",
+WinFsp 2021 (version 1.9) introduced a new FUSE option "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
-With this you can work around issues such as the mentioned "unprotected private key file"
-by specifying `-o FileSecurity="D:P(A;;FA;;;OW)"`, for file all access (FA) to the owner (OW).
+With this you get detailed control of the resulting permissions, compared
+to using the POSIX permissions described above, and no additional permissions
+will be added automatically for compatibility with Unix. Some example use
+cases follow.
+
+If you set POSIX permissions for only allowing access to the owner,
+using `--file-perms 0600 --dir-perms 0700`, the user group and the built-in
+"Everyone" group will still be given some special permissions, as described
+above. Some programs may then (incorrectly) interpret this as the file being
+accessible by everyone, for example an SSH client may warn about "unprotected
+private key file". You can work around this by specifying
+`-o FileSecurity="D:P(A;;FA;;;OW)"`, which sets file all access (FA) to the
+owner (OW), and nothing else.
+
+When setting write permissions then, except for the owner, this does not
+include the "write extended attributes" permission, as mentioned above.
+This may prevent applications from writing to files, giving permission denied
+error instead. To set working write permissions for the built-in "Everyone"
+group, similar to what it gets by default but with the addition of the
+"write extended attributes", you can specify
+`-o FileSecurity="D:P(A;;FRFW;;;WD)"`, which sets file read (FR) and file
+write (FW) to everyone (WD). If file execute (FX) is also needed, then change
+to `-o FileSecurity="D:P(A;;FRFWFX;;;WD)"`, or set file all access (FA) to
+get full access permissions, including delete, with
+`-o FileSecurity="D:P(A;;FA;;;WD)"`.
### Windows caveats
@@ -230,10 +251,16 @@ processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture)).
+Read more in the [install documentation](https://rclone.org/install/).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [`--config`](https://rclone.org/docs/#config-config-file) option.
-Read more in the [install documentation](https://rclone.org/install/).
+Note also that it is now the SYSTEM account that will have the owner
+permissions, and other accounts will have permissions according to the
+group or others scopes. As mentioned above, these will then not get the
+"write extended attributes" permission, and this may prevent writing to
+files. You can work around this with the FileSecurity option, see
+example above.
Note that mapping to a directory path, instead of a drive letter,
does not suffer from the same limitations.
diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md
index 4cfa095f5..0986a9302 100644
--- a/docs/content/commands/rclone_serve_http.md
+++ b/docs/content/commands/rclone_serve_http.md
@@ -516,7 +516,7 @@ password or public-key is changed the cache will need to expire (which takes 5 m
before it takes effect.
This can be used to build general purpose proxies to any kind of
-backend that rclone supports.
+backend that rclone supports.
```
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
index 88e6973db..55bd16069 100644
--- a/docs/content/commands/rclone_serve_webdav.md
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -28,6 +28,30 @@ supported hash on the backend or you can use a named hash such as
"MD5" or "SHA-1". Use the [hashsum](/commands/rclone_hashsum/) command
to see the full list.
+## Access WebDAV on Windows
+A WebDAV shared folder can be mapped as a drive on Windows; however, the default settings prevent it.
+Windows will fail to connect to the server using insecure Basic authentication.
+It will not even display any login dialog. Windows requires an SSL / HTTPS connection to be used with Basic authentication.
+If you try to connect via the Add Network Location Wizard you will get the following error:
+"The folder you entered does not appear to be valid. Please choose another".
+However, you still can connect if you set the following registry key on a client machine:
+HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel to 2.
+The BasicAuthLevel can be set to the following values:
+ 0 - Basic authentication disabled
+ 1 - Basic authentication enabled for SSL connections only
+ 2 - Basic authentication enabled for SSL connections and for non-SSL connections
+If required, increase the FileSizeLimitInBytes to a higher value.
+Navigate to the Services interface, then restart the WebClient service.
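+
+For example (a sketch; assumes an elevated command prompt and the standard
+Windows reg and net tools), the key can be set and the service restarted
+like this:
+
+    reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v BasicAuthLevel /t REG_DWORD /d 2 /f
+    net stop WebClient
+    net start WebClient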
+
+## Access Office applications on WebDAV
+Navigate to the following registry key: HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet
+Then create a new DWORD BasicAuthLevel with value 2.
+ 0 - Basic authentication disabled
+ 1 - Basic authentication enabled for SSL connections only
+ 2 - Basic authentication enabled for SSL and for non-SSL connections
+
+https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint
+
## Server options
diff --git a/docs/content/crypt.md b/docs/content/crypt.md
index dc45e56aa..6bc73dfe9 100644
--- a/docs/content/crypt.md
+++ b/docs/content/crypt.md
@@ -455,7 +455,6 @@ Properties:
- "off"
- Don't encrypt the file names.
- Adds a ".bin" extension only.
- - May cause problems on [case insensitive](/overview/#case-insensitive) [storage systems](/overview/#features) like OneDrive, Dropbox, Windows, OSX and SMB.
#### --crypt-directory-name-encryption
@@ -474,7 +473,6 @@ Properties:
- Encrypt directory names.
- "false"
- Don't encrypt directory names, leave them intact.
- - May cause problems on [case insensitive](/overview/#case-insensitive) [storage systems](/overview/#features) like OneDrive, Dropbox, Windows, OSX and SMB.
#### --crypt-password
diff --git a/docs/content/drive.md b/docs/content/drive.md
index fdf62f680..7975cf6b9 100644
--- a/docs/content/drive.md
+++ b/docs/content/drive.md
@@ -988,6 +988,10 @@ as malware or spam and cannot be downloaded" with the error code
indicate you acknowledge the risks of downloading the file and rclone
will download it anyway.
+Note that if you are using a service account it will need Manager
+permission (not Content Manager) for this flag to work. If the SA
+does not have the right permission, Google will just ignore the flag.
+
Properties:
- Config: acknowledge_abuse
@@ -1362,9 +1366,9 @@ This takes an optional directory to trash which make this easier to
use via the API.
rclone backend untrash drive:directory
- rclone backend -i untrash drive:directory subdir
+ rclone backend --interactive untrash drive:directory subdir
-Use the -i flag to see what would be restored before restoring it.
+Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.
Result:
@@ -1398,7 +1402,7 @@ component will be used as the file name.
If the destination is a drive backend then server-side copying will be
attempted if possible.
-Use the -i flag to see what would be copied before copying.
+Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
### exportformats
diff --git a/docs/content/flags.md b/docs/content/flags.md
index fa0045834..51e9f931f 100644
--- a/docs/content/flags.md
+++ b/docs/content/flags.md
@@ -20,7 +20,7 @@ These flags are available for every command.
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
--bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
- --ca-cert string CA certificate used to verify servers
+ --ca-cert stringArray CA certificate used to verify servers
--cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
--check-first Do all the checks before starting transfers
--checkers int Number of checkers to run in parallel (default 8)
@@ -82,6 +82,7 @@ These flags are available for every command.
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
@@ -170,7 +171,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.61.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.62.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -387,6 +388,7 @@ and may be set in the config file.
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
+ --gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
@@ -475,6 +477,7 @@ and may be set in the config file.
--mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
+ --mega-use-https Use HTTPS for transfers
--mega-user string User name
--netstorage-account string Set the NetStorage account name
--netstorage-host string Domain+path of NetStorage host to connect to
@@ -490,6 +493,7 @@ and may be set in the config file.
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
+ --onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command (default "view")
@@ -514,6 +518,12 @@ and may be set in the config file.
--oos-no-check-bucket If set, don't attempt to check the bucket exists or create it
--oos-provider string Choose your Auth Provider (default "env_auth")
--oos-region string Object storage Region
+ --oos-sse-customer-algorithm string If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm
+ --oos-sse-customer-key string To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to
+ --oos-sse-customer-key-file string To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated
+      --oos-sse-customer-key-sha256 string          If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption
+      --oos-sse-kms-key-id string                   If using your own master key in vault, this header specifies the
+ --oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard")
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
@@ -582,6 +592,7 @@ and may be set in the config file.
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
+ --s3-sts-endpoint string Endpoint for STS
--s3-upload-concurrency int Concurrency for multipart uploads (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
@@ -647,6 +658,7 @@ and may be set in the config file.
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
+ --smb-spn string Service principal name
--smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md
index 704112a0e..092549456 100644
--- a/docs/content/googlecloudstorage.md
+++ b/docs/content/googlecloudstorage.md
@@ -552,6 +552,24 @@ Properties:
- "DURABLE_REDUCED_AVAILABILITY"
- Durable reduced availability storage class
+#### --gcs-env-auth
+
+Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars).
+
+Only applies if service_account_file and service_account_credentials are blank.
+
+Properties:
+
+- Config: env_auth
+- Env Var: RCLONE_GCS_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+ - "false"
+ - Enter credentials in the next step.
+ - "true"
+ - Get GCP IAM credentials from the environment (env vars or IAM).
+
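+For example (a sketch; the remote name is a placeholder), credentials can
+be picked up from the environment for a single run with either the flag or
+the documented environment variable:
+
+    rclone lsd mygcs: --gcs-env-auth
+    RCLONE_GCS_ENV_AUTH=true rclone lsd mygcs:
+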
### Advanced options
Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
diff --git a/docs/content/mega.md b/docs/content/mega.md
index 94563b470..8326f1fcb 100644
--- a/docs/content/mega.md
+++ b/docs/content/mega.md
@@ -252,6 +252,23 @@ Properties:
- Type: bool
- Default: false
+#### --mega-use-https
+
+Use HTTPS for transfers.
+
+MEGA uses plain text HTTP connections by default.
+Some ISPs throttle HTTP connections, which causes transfers to become very slow.
+Enabling this will force MEGA to use HTTPS for all transfers.
+HTTPS is normally not necessary since all data is already encrypted anyway.
+Enabling it will increase CPU usage and add network overhead.
+
+Properties:
+
+- Config: use_https
+- Env Var: RCLONE_MEGA_USE_HTTPS
+- Type: bool
+- Default: false
+
#### --mega-encoding
The encoding for the backend.
diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md
index 99a6e8053..8e65338e7 100644
--- a/docs/content/onedrive.md
+++ b/docs/content/onedrive.md
@@ -526,6 +526,48 @@ Properties:
- Type: string
- Required: false
+#### --onedrive-hash-type
+
+Specify the hash in use for the backend.
+
+This specifies the hash type in use. If set to "auto" it will use the
+default hash which is QuickXorHash.
+
+Before rclone 1.62 an SHA1 hash was used by default for OneDrive
+Personal. For 1.62 and later the default is to use a QuickXorHash for
+all OneDrive types. If an SHA1 hash is desired then set this option
+accordingly.
+
+From July 2023 QuickXorHash will be the only available hash for
+both OneDrive for Business and OneDrive Personal.
+
+This can be set to "none" to not use any hashes.
+
+If the hash requested does not exist on the object, it will be
+returned as an empty string which is treated as a missing hash by
+rclone.
+
+
+Properties:
+
+- Config: hash_type
+- Env Var: RCLONE_ONEDRIVE_HASH_TYPE
+- Type: string
+- Default: "auto"
+- Examples:
+ - "auto"
+ - Rclone chooses the best hash
+ - "quickxor"
+ - QuickXor
+ - "sha1"
+ - SHA1
+ - "sha256"
+ - SHA256
+ - "crc32"
+ - CRC32
+ - "none"
+ - None - don't use any hashes
+
#### --onedrive-encoding
The encoding for the backend.
diff --git a/docs/content/rc.md b/docs/content/rc.md
index 3e1d1c65f..1cb4dd55b 100644
--- a/docs/content/rc.md
+++ b/docs/content/rc.md
@@ -1458,6 +1458,8 @@ This takes the following parameters:
- remote - a path within that remote e.g. "dir"
- each part in body represents a file to be uploaded
+See the [uploadfile](/commands/rclone_uploadfile/) command for more information on the above.
+
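+As a hedged illustration (assuming a local server started with
+`rclone rcd --rc-no-auth`, and placeholder paths), a file can be uploaded
+with curl, where each multipart form part becomes one uploaded file:
+
+    curl -F file=@/tmp/one.txt "http://localhost:5572/operations/uploadfile?fs=remote:&remote=dir"
+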
**Authentication is required for this call.**
### options/blocks: List all the option blocks {#options-blocks}
diff --git a/docs/content/s3.md b/docs/content/s3.md
index 7bfd75241..c6c5f6edb 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -1474,7 +1474,7 @@ Properties:
#### --s3-endpoint
-Endpoint of the Shared Gateway.
+Endpoint for Storj Gateway.
Properties:
@@ -1484,12 +1484,8 @@ Properties:
- Type: string
- Required: false
- Examples:
- - "gateway.eu1.storjshare.io"
- - EU1 Shared Gateway
- - "gateway.us1.storjshare.io"
- - US1 Shared Gateway
- - "gateway.ap1.storjshare.io"
- - Asia-Pacific Shared Gateway
+ - "gateway.storjshare.io"
+ - Global Hosted Gateway
#### --s3-endpoint
@@ -2967,6 +2963,20 @@ Properties:
- Type: bool
- Default: false
+#### --s3-sts-endpoint
+
+Endpoint for STS.
+
+Leave blank if using AWS to use the default endpoint for the region.
+
+Properties:
+
+- Config: sts_endpoint
+- Env Var: RCLONE_S3_STS_ENDPOINT
+- Provider: AWS
+- Type: string
+- Required: false
+
### Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
@@ -3017,9 +3027,9 @@ Usage Examples:
rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]
-This flag also obeys the filters. Test first with -i/--interactive or --dry-run flags
+This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags
- rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard
+ rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard
All the objects shown will be marked for restore, then
@@ -3096,8 +3106,8 @@ Remove unfinished multipart uploads.
This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to see what
+it would do.
rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
@@ -3118,8 +3128,8 @@ Remove old versions of files.
This command removes any old hidden versions of files
on a versions enabled bucket.
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to see what
+it would do.
rclone backend cleanup-hidden s3:bucket/path/to/dir
diff --git a/docs/content/smb.md b/docs/content/smb.md
index 2cf26803b..b55cdba5a 100644
--- a/docs/content/smb.md
+++ b/docs/content/smb.md
@@ -171,6 +171,25 @@ Properties:
- Type: string
- Default: "WORKGROUP"
+#### --smb-spn
+
+Service principal name.
+
+Rclone presents this name to the server. Some servers use this as further
+authentication, and it often needs to be set for clusters. For example:
+
+ cifs/remotehost:1020
+
+Leave blank if not sure.
+
+
+Properties:
+
+- Config: spn
+- Env Var: RCLONE_SMB_SPN
+- Type: string
+- Required: false
+
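+For example (a sketch; the remote name and SPN value are placeholders), the
+SPN can also be supplied on the command line for a single run:
+
+    rclone lsd mysmb: --smb-spn cifs/remotehost:1020
+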
### Advanced options
Here are the Advanced options specific to smb (SMB / CIFS).
diff --git a/rclone.1 b/rclone.1
index 2b14cd4ff..ca3b34396 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
-.TH "rclone" "1" "Dec 20, 2022" "User Manual" ""
+.TH "rclone" "1" "Mar 14, 2023" "User Manual" ""
.hy
.SH Rclone syncs your files to cloud storage
.PP
@@ -517,6 +517,19 @@ If you are planning to use the rclone
mount (https://rclone.org/commands/rclone_mount/) feature then you will
need to install the third party utility WinFsp (https://winfsp.dev/)
also.
+.SS Windows package manager (Winget)
+.PP
+Winget (https://learn.microsoft.com/en-us/windows/package-manager/)
+comes pre-installed with the latest versions of Windows.
+If not, update the App
+Installer (https://www.microsoft.com/p/app-installer/9nblggh4nns1)
+package from the Microsoft store.
+.IP
+.nf
+\f[C]
+winget install Rclone.Rclone
+\f[R]
+.fi
.SS Chocolatey package manager
.PP
Make sure you have Choco (https://chocolatey.org/) installed
@@ -546,6 +559,22 @@ Its current version is as below.
.PP
[IMAGE: Chocolatey
package (https://repology.org/badge/version-for-repo/chocolatey/rclone.svg)] (https://repology.org/project/rclone/versions)
+.SS Scoop package manager
+.PP
+Make sure you have Scoop (https://scoop.sh/) installed
+.IP
+.nf
+\f[C]
+scoop install rclone
+\f[R]
+.fi
+.PP
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date.
+Its current version is as below.
+.PP
+[IMAGE: Scoop
+package (https://repology.org/badge/version-for-repo/scoop/rclone.svg)] (https://repology.org/project/rclone/versions)
.SS Package manager installation
.PP
Many Linux, Windows, macOS and other OS distributions package and
@@ -1145,8 +1174,8 @@ storage system in the config file then the sub path, e.g.
.PP
You can define as many storage paths as you like in the config file.
.PP
-Please use the \f[C]-i\f[R] / \f[C]--interactive\f[R] flag while
-learning rclone to avoid accidental data loss.
+Please use the \f[C]--interactive\f[R]/\f[C]-i\f[R] flag while learning
+rclone to avoid accidental data loss.
.SS Subcommands
.PP
rclone uses a system of subcommands.
@@ -1156,7 +1185,7 @@ For example
\f[C]
rclone ls remote:path # lists a remote
rclone copy /local/path remote:path # copies /local/path to the remote
-rclone sync -i /local/path remote:path # syncs /local/path to the remote
+rclone sync --interactive /local/path remote:path # syncs /local/path to the remote
\f[R]
.fi
.SH rclone config
@@ -1356,7 +1385,7 @@ copy (https://rclone.org/commands/rclone_copy/) command instead.
.IP
.nf
\f[C]
-rclone sync -i SOURCE remote:DESTINATION
+rclone sync --interactive SOURCE remote:DESTINATION
\f[R]
.fi
.PP
@@ -2456,8 +2485,12 @@ Remote authorization.
Used to authorize a remote or headless rclone from a machine with a
browser - use as instructed by rclone config.
.PP
-Use the --auth-no-open-browser to prevent rclone to open auth link in
+Use --auth-no-open-browser to prevent rclone from opening the auth link in the
default browser automatically.
+.PP
+Use --template to generate HTML output via a custom Go template.
+If a blank string is provided as an argument to this flag, the default
+template is used.
.IP
.nf
\f[C]
@@ -2470,6 +2503,7 @@ rclone authorize [flags]
\f[C]
--auth-no-open-browser Do not automatically open auth link in default browser
-h, --help help for authorize
+ --template string The path to a custom Go template for generating HTML responses
\f[R]
.fi
.PP
@@ -4843,7 +4877,7 @@ and \[dq]GroupName\[dq], e.g.
\f[C]-o UserName=user123 -o GroupName=\[dq]Authenticated Users\[dq]\f[R].
The permissions on each entry will be set according to options
\f[C]--dir-perms\f[R] and \f[C]--file-perms\f[R], which takes a value in
-traditional numeric
+traditional Unix numeric
notation (https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation).
.PP
The default permissions corresponds to
@@ -4854,38 +4888,61 @@ To be able to do that you must add execute permissions, e.g.
\f[C]--file-perms 0777 --dir-perms 0777\f[R] to add it to everyone.
If the program needs to write files, chances are you will have to enable
VFS File Caching as well (see also limitations).
+Note that the default write permission has some restrictions for
+accounts other than the owner, specifically it lacks the \[dq]write
+extended attributes\[dq] permission, as explained next.
.PP
-Note that the mapping of permissions is not always trivial, and the
-result you see in Windows Explorer may not be exactly like you expected.
-For example, when setting a value that includes write access, this will
-be mapped to individual permissions \[dq]write attributes\[dq],
-\[dq]write data\[dq] and \[dq]append data\[dq], but not \[dq]write
-extended attributes\[dq].
+The mapping of permissions is not always trivial, and the result you see
+in Windows Explorer may not be exactly like you expected.
+For example, when setting a value that includes write access for the
+group or others scope, this will be mapped to individual permissions
+\[dq]write attributes\[dq], \[dq]write data\[dq] and \[dq]append
+data\[dq], but not \[dq]write extended attributes\[dq].
Windows will then show this as basic permission \[dq]Special\[dq]
-instead of \[dq]Write\[dq], because \[dq]Write\[dq] includes the
+instead of \[dq]Write\[dq], because \[dq]Write\[dq] also covers the
\[dq]write extended attributes\[dq] permission.
+When setting digit 0 for group or others, to indicate no permissions,
+they will still get individual permissions \[dq]read attributes\[dq],
+\[dq]read extended attributes\[dq] and \[dq]read permissions\[dq].
+This is done for compatibility reasons, e.g.
+to allow users without additional permissions to be able to read basic
+metadata about files like in Unix.
+.PP
+WinFsp 2021 (version 1.9) introduced a new FUSE option
+\[dq]FileSecurity\[dq], that allows the complete specification of file
+security descriptors using
+SDDL (https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
+With this you get detailed control of the resulting permissions,
+compared to using the POSIX permissions described above, and no
+additional permissions will be added automatically for compatibility
+with Unix.
+Some example use cases follow.
.PP
If you set POSIX permissions for only allowing access to the owner,
using \f[C]--file-perms 0600 --dir-perms 0700\f[R], the user group and
the built-in \[dq]Everyone\[dq] group will still be given some special
-permissions, such as \[dq]read attributes\[dq] and \[dq]read
-permissions\[dq], in Windows.
-This is done for compatibility reasons, e.g.
-to allow users without additional permissions to be able to read basic
-metadata about files like in UNIX.
-One case that may arise is that other programs (incorrectly) interprets
-this as the file being accessible by everyone.
-For example an SSH client may warn about \[dq]unprotected private key
-file\[dq].
+permissions, as described above.
+Some programs may then (incorrectly) interpret this as the file being
+accessible by everyone, for example an SSH client may warn about
+\[dq]unprotected private key file\[dq].
+You can work around this by specifying
+\f[C]-o FileSecurity=\[dq]D:P(A;;FA;;;OW)\[dq]\f[R], which sets file all
+access (FA) to the owner (OW), and nothing else.
.PP
-WinFsp 2021 (version 1.9) introduces a new FUSE option
-\[dq]FileSecurity\[dq], that allows the complete specification of file
-security descriptors using
-SDDL (https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
-With this you can work around issues such as the mentioned
-\[dq]unprotected private key file\[dq] by specifying
-\f[C]-o FileSecurity=\[dq]D:P(A;;FA;;;OW)\[dq]\f[R], for file all access
-(FA) to the owner (OW).
+When setting write permissions then, except for the owner, this does not
+include the \[dq]write extended attributes\[dq] permission, as mentioned
+above.
+This may prevent applications from writing to files, giving permission
+denied error instead.
+To set working write permissions for the built-in \[dq]Everyone\[dq]
+group, similar to what it gets by default but with the addition of the
+\[dq]write extended attributes\[dq], you can specify
+\f[C]-o FileSecurity=\[dq]D:P(A;;FRFW;;;WD)\[dq]\f[R], which sets file
+read (FR) and file write (FW) to everyone (WD).
+If file execute (FX) is also needed, then change to
+\f[C]-o FileSecurity=\[dq]D:P(A;;FRFWFX;;;WD)\[dq]\f[R], or set file all
+access (FA) to get full access permissions, including delete, with
+\f[C]-o FileSecurity=\[dq]D:P(A;;FA;;;WD)\[dq]\f[R].
.SS Windows caveats
.PP
Drives created as Administrator are not visible to other accounts, not
@@ -4918,13 +4975,68 @@ Another alternative is to run the mount command from a Windows Scheduled
Task, or a Windows Service, configured to run as the SYSTEM account.
A third alternative is to use the WinFsp.Launcher
infrastructure (https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture)).
+Read more in the install documentation (https://rclone.org/install/).
Note that when running rclone as another user, it will not use the
configuration file from your profile unless you tell it to with the
\f[C]--config\f[R] (https://rclone.org/docs/#config-config-file) option.
-Read more in the install documentation (https://rclone.org/install/).
+Note also that it is now the SYSTEM account that will have the owner
+permissions, and other accounts will have permissions according to the
+group or others scopes.
+As mentioned above, these will then not get the \[dq]write extended
+attributes\[dq] permission, and this may prevent writing to files.
+You can work around this with the FileSecurity option, see example
+above.
.PP
Note that mapping to a directory path, instead of a drive letter, does
not suffer from the same limitations.
+.SS Mounting on macOS
+.PP
+Mounting on macOS can be done either via
+macFUSE (https://osxfuse.github.io/) (also known as osxfuse) or
+FUSE-T (https://www.fuse-t.org/).
+macFUSE is a traditional FUSE driver utilizing a macOS kernel extension
+(kext).
+FUSE-T is an alternative FUSE system which \[dq]mounts\[dq] via an NFSv4
+local server.
+.SS FUSE-T Limitations, Caveats, and Notes
+.PP
+There are some limitations, caveats, and notes about how it works.
+These are current as of FUSE-T version 1.0.14.
+.SS ModTime update on read
+.PP
+As per the FUSE-T
+wiki (https://github.com/macos-fuse-t/fuse-t/wiki#caveats):
+.RS
+.PP
+File access and modification times cannot be set separately as it seems
+to be an issue with the NFS client which always modifies both.
+Can be reproduced with \[aq]touch -m\[aq] and \[aq]touch -a\[aq]
+commands
+.RE
+.PP
+This means that viewing files with various tools, notably macOS Finder,
+will cause rclone to update the modification time of the file.
+This may make rclone upload a full new copy of the file.
+.SS Unicode Normalization
+.PP
+Rclone includes flags for unicode normalization with macFUSE that should
+be updated for FUSE-T.
+See this forum
+post (https://forum.rclone.org/t/some-unicode-forms-break-mount-on-macos-with-fuse-t/36403)
+and FUSE-T issue #16 (https://github.com/macos-fuse-t/fuse-t/issues/16).
+The following flag should be added to the \f[C]rclone mount\f[R]
+command.
+.IP
+.nf
+\f[C]
+-o modules=iconv,from_code=UTF-8,to_code=UTF-8
+\f[R]
+.fi
+.SS Read Only mounts
+.PP
+When mounting with \f[C]--read-only\f[R], attempts to write to files
+will fail \f[I]silently\f[R] as opposed to with a clear warning as in
+macFUSE.
.SS Limitations
.PP
Without the use of \f[C]--vfs-cache-mode\f[R] this can only write files
@@ -8499,6 +8611,99 @@ filters so that the result is accurate.
However, this is very inefficient and may cost lots of API calls
resulting in extra charges.
Use it as a last resort and only with caching.
+.SS Auth Proxy
+.PP
+If you supply the parameter \f[C]--auth-proxy /path/to/program\f[R] then
+rclone will use that program to generate backends on the fly which then
+are used to authenticate incoming requests.
+This uses a simple JSON based protocol with input on STDIN and output on
+STDOUT.
+.PP
+\f[B]PLEASE NOTE:\f[R] \f[C]--auth-proxy\f[R] and
+\f[C]--authorized-keys\f[R] cannot be used together, if
+\f[C]--auth-proxy\f[R] is set the authorized keys option will be
+ignored.
+.PP
+There is an example program
+bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py)
+in the rclone source code.
+.PP
+The program\[aq]s job is to take a \f[C]user\f[R] and \f[C]pass\f[R] on
+the input and turn those into the config for a backend on STDOUT in JSON
+format.
+This config will have any default parameters for the backend added, but
+it won\[aq]t use configuration from environment variables or command
+line options - it is the job of the proxy program to make a complete
+config.
+.PP
+The generated config must have this extra parameter - \f[C]_root\f[R] -
+root to use for the backend
+.PP
+And it may have this parameter - \f[C]_obscure\f[R] - comma separated
+strings for parameters to obscure
+.PP
+If password authentication was used by the client, input to the proxy
+process (on STDIN) would look similar to this:
+.IP
+.nf
+\f[C]
+{
+ \[dq]user\[dq]: \[dq]me\[dq],
+ \[dq]pass\[dq]: \[dq]mypassword\[dq]
+}
+\f[R]
+.fi
+.PP
+If public-key authentication was used by the client, input to the proxy
+process (on STDIN) would look similar to this:
+.IP
+.nf
+\f[C]
+{
+ \[dq]user\[dq]: \[dq]me\[dq],
+ \[dq]public_key\[dq]: \[dq]AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf\[dq]
+}
+\f[R]
+.fi
+.PP
+And as an example return this on STDOUT
+.IP
+.nf
+\f[C]
+{
+ \[dq]type\[dq]: \[dq]sftp\[dq],
+ \[dq]_root\[dq]: \[dq]\[dq],
+ \[dq]_obscure\[dq]: \[dq]pass\[dq],
+ \[dq]user\[dq]: \[dq]me\[dq],
+ \[dq]pass\[dq]: \[dq]mypassword\[dq],
+ \[dq]host\[dq]: \[dq]sftp.example.com\[dq]
+}
+\f[R]
+.fi
+.PP
+This would mean that an SFTP backend would be created on the fly for the
+\f[C]user\f[R] and \f[C]pass\f[R]/\f[C]public_key\f[R] returned in the
+output to the host given.
+Note that since \f[C]_obscure\f[R] is set to \f[C]pass\f[R], rclone will
+obscure the \f[C]pass\f[R] parameter before creating the backend (which
+is required for sftp backends).
+.PP
+The program can manipulate the supplied \f[C]user\f[R] in any way.
+For example, to proxy to many different sftp backends, you could make
+the \f[C]user\f[R] be \f[C]user\[at]example.com\f[R] and then set the
+\f[C]host\f[R] to \f[C]example.com\f[R] in the output and the user to
+\f[C]user\f[R].
+For security you\[aq]d probably want to restrict the \f[C]host\f[R] to a
+limited list.
+.PP
+Note that an internal cache is keyed on \f[C]user\f[R] so only use that
+for configuration, don\[aq]t use \f[C]pass\f[R] or \f[C]public_key\f[R].
+This also means that if a user\[aq]s password or public-key is changed
+the cache will need to expire (which takes 5 mins) before it takes
+effect.
+.PP
+This can be used to build general purpose proxies to any kind of backend
+that rclone supports.
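+.PP
+As a minimal sketch of such a proxy program (assuming Python; the host
+below is a placeholder, and a real example ships as bin/test_proxy.py):
+.IP
+.nf
+\f[C]
+#!/usr/bin/env python3
+# Hypothetical auth proxy: read the authentication request JSON on
+# STDIN and write an sftp backend config to STDOUT, as described above.
+import json
+import sys
+
+request = json.load(sys.stdin)
+config = {
+    \[dq]type\[dq]: \[dq]sftp\[dq],
+    \[dq]_root\[dq]: \[dq]\[dq],
+    # rclone obscures the pass parameter before creating the backend
+    \[dq]_obscure\[dq]: \[dq]pass\[dq],
+    \[dq]user\[dq]: request[\[dq]user\[dq]],
+    \[dq]pass\[dq]: request.get(\[dq]pass\[dq], \[dq]\[dq]),
+    \[dq]host\[dq]: \[dq]sftp.example.com\[dq],  # placeholder host
+}
+json.dump(config, sys.stdout)
+\f[R]
+.fi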
.IP
.nf
\f[C]
@@ -8510,6 +8715,7 @@ rclone serve http remote:path [flags]
.nf
\f[C]
--addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+ --auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
--cert string TLS PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
@@ -9403,6 +9609,35 @@ supported hash on the backend or you can use a named hash such as
\[dq]MD5\[dq] or \[dq]SHA-1\[dq].
Use the hashsum (https://rclone.org/commands/rclone_hashsum/) command to
see the full list.
+.SS Access WebDAV on Windows
+.PP
+A WebDAV shared folder can be mapped as a drive on Windows; however, the
+default settings prevent it.
+Windows will fail to connect to the server using insecure Basic
+authentication.
+It will not even display any login dialog.
+Windows requires an SSL / HTTPS connection to be used with Basic
+authentication.
+If you try to connect via the Add Network Location Wizard you will get the
+following error: \[dq]The folder you entered does not appear to be
+valid.
+Please choose another\[dq].
+However, you still can connect if you set the following registry key on
+a client machine:
+HKEY_LOCAL_MACHINE\[rs]SYSTEM\[rs]CurrentControlSet\[rs]Services\[rs]WebClient\[rs]Parameters\[rs]BasicAuthLevel
+to 2.
+The BasicAuthLevel can be set to the following values:
+.IP
+.nf
+\f[C]
+0 - Basic authentication disabled
+1 - Basic authentication enabled for SSL connections only
+2 - Basic authentication enabled for SSL connections and for non-SSL connections
+\f[R]
+.fi
+.PP
+If required, increase the FileSizeLimitInBytes to a higher value.
+Navigate to the Services interface, then restart the WebClient service.
+.SS Access Office applications on WebDAV
+.PP
+Navigate to the following registry key:
+HKEY_CURRENT_USER\[rs]Software\[rs]Microsoft\[rs]Office\[rs][14.0/15.0/16.0]\[rs]Common\[rs]Internet
+Then create a new DWORD BasicAuthLevel with value 2.
+.IP
+.nf
+\f[C]
+0 - Basic authentication disabled
+1 - Basic authentication enabled for SSL connections only
+2 - Basic authentication enabled for SSL and for non-SSL connections
+\f[R]
+.fi
+.PP
+https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint
.SS Server options
.PP
Use \f[C]--addr\f[R] to specify which IP address and port the server
@@ -10442,7 +10677,7 @@ unless \f[C]--no-create\f[R] or \f[C]--recursive\f[R] is provided.
If \f[C]--recursive\f[R] is used then recursively sets the modification
time on all existing files that is found under the path.
Filters are supported, and you can test with the \f[C]--dry-run\f[R] or
-the \f[C]--interactive\f[R] flag.
+the \f[C]--interactive\f[R]/\f[C]-i\f[R] flag.
.PP
If \f[C]--timestamp\f[R] is used then sets the modification time to that
time instead of the current time.
@@ -10852,8 +11087,8 @@ DEBUG : :s3: detected overridden config - adding \[dq]{YTu53}\[dq] suffix to nam
.SS Valid remote names
.PP
Remote names are case sensitive, and must adhere to the following rules:
-- May contain number, letter, \f[C]_\f[R], \f[C]-\f[R], \f[C].\f[R] and
-space.
+- May contain number, letter, \f[C]_\f[R], \f[C]-\f[R], \f[C].\f[R],
+\f[C]+\f[R], \f[C]\[at]\f[R] and space.
- May not start with \f[C]-\f[R] or space.
- May not end with space.
.PP
@@ -10931,7 +11166,7 @@ So to sync a directory called \f[C]sync:me\f[R] to a remote called
.IP
.nf
\f[C]
-rclone sync -i ./sync:me remote:path
+rclone sync --interactive ./sync:me remote:path
\f[R]
.fi
.PP
@@ -10939,7 +11174,7 @@ or
.IP
.nf
\f[C]
-rclone sync -i /full/path/to/sync:me remote:path
+rclone sync --interactive /full/path/to/sync:me remote:path
\f[R]
.fi
.SS Server Side Copy
@@ -10980,8 +11215,8 @@ This can be used when scripting to make aged backups efficiently, e.g.
.IP
.nf
\f[C]
-rclone sync -i remote:current-backup remote:previous-backup
-rclone sync -i /path/to/files remote:current-backup
+rclone sync --interactive remote:current-backup remote:previous-backup
+rclone sync --interactive /path/to/files remote:current-backup
\f[R]
.fi
.SS Metadata support
@@ -11272,7 +11507,7 @@ For example
.IP
.nf
\f[C]
-rclone sync -i /path/to/local remote:current --backup-dir remote:old
+rclone sync --interactive /path/to/local remote:current --backup-dir remote:old
\f[R]
.fi
.PP
@@ -11487,6 +11722,12 @@ with checking.
It can also be useful to ensure perfect ordering when using
\f[C]--order-by\f[R].
.PP
+If both \f[C]--check-first\f[R] and \f[C]--order-by\f[R] are set when
+doing \f[C]rclone move\f[R] then rclone will use the transfer thread to
+delete source files which don\[aq]t need transferring.
+This will enable perfect ordering of the transfers and deletes but will
+cause the transfer stats to have more items than expected.
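+.PP
+For example (a sketch with placeholder paths), to move the largest files
+first while keeping deletes perfectly ordered:
+.IP
+.nf
+\f[C]
+rclone move --check-first --order-by size,descending /path/to/src remote:dst
+\f[R]
+.fi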
+.PP
Using this flag can use more memory as it effectively sets
\f[C]--max-backlog\f[R] to infinite.
This means that all the info on the objects to transfer is held in
@@ -11846,7 +12087,7 @@ The flag can be repeated to add multiple headers.
.IP
.nf
\f[C]
-rclone sync -i s3:test/src \[ti]/dst --header-download \[dq]X-Amz-Meta-Test: Foo\[dq] --header-download \[dq]X-Amz-Meta-Test2: Bar\[dq]
+rclone sync --interactive s3:test/src \[ti]/dst --header-download \[dq]X-Amz-Meta-Test: Foo\[dq] --header-download \[dq]X-Amz-Meta-Test2: Bar\[dq]
\f[R]
.fi
.PP
@@ -11859,7 +12100,7 @@ The flag can be repeated to add multiple headers.
.IP
.nf
\f[C]
-rclone sync -i \[ti]/src s3:test/dst --header-upload \[dq]Content-Disposition: attachment; filename=\[aq]cool.html\[aq]\[dq] --header-upload \[dq]X-Amz-Meta-Test: FooBar\[dq]
+rclone sync --interactive \[ti]/src s3:test/dst --header-upload \[dq]Content-Disposition: attachment; filename=\[aq]cool.html\[aq]\[dq] --header-upload \[dq]X-Amz-Meta-Test: FooBar\[dq]
\f[R]
.fi
.PP
@@ -11982,7 +12223,7 @@ well as modification.
This can be useful as an additional layer of protection for immutable or
append-only data sets (notably backup archives), where modification
implies corruption and should not be propagated.
-.SS -i / --interactive
+.SS -i, --interactive
.PP
This flag can be used to tell rclone that you wish a manual confirmation
before destructive operations.
@@ -11994,7 +12235,7 @@ For example
.IP
.nf
\f[C]
-$ rclone delete -i /tmp/dir
+$ rclone delete --interactive /tmp/dir
rclone: delete \[dq]important-file.txt\[dq]?
y) Yes, this is OK (default)
n) No, skip this
@@ -12123,6 +12364,14 @@ possible.
This tells rclone not to delete more than N files.
If that limit is exceeded then a fatal error will be generated and
rclone will stop the operation in progress.
+.SS --max-delete-size=SIZE
+.PP
+Rclone will stop deleting files when the total size of deletions has
+reached the size specified.
+It defaults to off.
+.PP
+If that limit is exceeded then a fatal error will be generated and
+rclone will stop the operation in progress.
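+.PP
+For example, to stop a sync with a fatal error once it has deleted more
+than 10 GiB (a minimal sketch, any SizeSuffix value is accepted):
+.IP
+.nf
+\f[C]
+rclone sync --interactive --max-delete-size 10G /path/to/src remote:dst
+\f[R]
+.fi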
.SS --max-depth=N
.PP
This modifies the recursion depth for all the commands except purge.
@@ -12162,7 +12411,7 @@ Defaults to off.
When the limit is reached all transfers will stop immediately.
.PP
Rclone will exit with exit code 8 if the transfer limit is reached.
-.SS --metadata / -M
+.SS -M, --metadata
.PP
Setting this flag enables rclone to copy the metadata from the source to
the destination.
@@ -12618,7 +12867,7 @@ For example
.IP
.nf
\f[C]
-rclone copy -i /path/to/local/file remote:current --suffix .bak
+rclone copy --interactive /path/to/local/file remote:current --suffix .bak
\f[R]
.fi
.PP
@@ -12632,7 +12881,7 @@ files.
.IP
.nf
\f[C]
-rclone sync -i /path/to/local/file remote:current --suffix .bak --exclude \[dq]*.bak\[dq]
+rclone sync --interactive /path/to/local/file remote:current --suffix .bak --exclude \[dq]*.bak\[dq]
\f[R]
.fi
.SS --suffix-keep-extension
@@ -12955,10 +13204,10 @@ these options.
For example this can be very useful with the HTTP or WebDAV backends.
Rclone HTTP servers have their own set of configuration for SSL/TLS
which you can find in their documentation.
-.SS --ca-cert string
+.SS --ca-cert stringArray
.PP
-This loads the PEM encoded certificate authority certificate and uses it
-to verify the certificates of the servers rclone connects to.
+This loads the PEM encoded certificate authority certificates and uses
+them to verify the certificates of the servers rclone connects to.
.PP
If you have generated certificates signed with a local CA then you will
need this flag to connect to servers using those certificates.
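+.PP
+As the flag is now a stringArray it can be repeated to load more than
+one CA, e.g. (the paths are illustrative):
+.IP
+.nf
+\f[C]
+rclone lsd remote: --ca-cert /path/to/ca1.pem --ca-cert /path/to/ca2.pem
+\f[R]
+.fi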
@@ -14733,7 +14982,8 @@ deletes any files on the destination which are excluded from the
command.
.PP
E.g.
-the scope of \f[C]rclone sync -i A: B:\f[R] can be restricted:
+the scope of \f[C]rclone sync --interactive A: B:\f[R] can be
+restricted:
.IP
.nf
\f[C]
@@ -17782,7 +18032,7 @@ T}
T{
Microsoft OneDrive
T}@T{
-SHA1 \[u2075]
+QuickXorHash \[u2075]
T}@T{
R/W
T}@T{
@@ -18080,9 +18330,9 @@ the remote\[aq]s PATH.
\[u2074] WebDAV supports modtimes when used with Owncloud and Nextcloud
only.
.PP
-\[u2075] Microsoft OneDrive Personal supports SHA1 hashes, whereas
-OneDrive for business and SharePoint server support Microsoft\[aq]s own
-QuickXorHash (https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash).
+\[u2075]
+QuickXorHash (https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash)
+is Microsoft\[aq]s own hash.
.PP
\[u2076] Mail.ru uses its own modified SHA1 hash
.PP
@@ -19818,7 +20068,7 @@ T}
T{
Storj
T}@T{
-Yes \[dg]
+Yes \[u2628]
T}@T{
Yes
T}@T{
@@ -19832,7 +20082,7 @@ Yes
T}@T{
Yes
T}@T{
-No
+Yes
T}@T{
No
T}@T{
@@ -19959,9 +20209,12 @@ T}
This deletes a directory quicker than just deleting all the files in the
directory.
.PP
-\[dg] Note Swift and Storj implement this in order to delete directory
-markers but they don\[aq]t actually have a quicker way of deleting files
-other than deleting them individually.
+\[dg] Note Swift implements this in order to delete directory markers
+but it doesn\[aq]t actually have a quicker way of deleting files other
+than deleting them individually.
+.PP
+\[u2628] Storj implements this efficiently only for entire buckets.
+If purging a directory inside a bucket, files are deleted individually.
.PP
\[dd] StreamUpload is not supported with Nextcloud
.SS Copy
@@ -20057,7 +20310,7 @@ These flags are available for every command.
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
--bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
- --ca-cert string CA certificate used to verify servers
+ --ca-cert stringArray CA certificate used to verify servers
--cache-dir string Directory rclone will use for caching (default \[dq]$HOME/.cache/rclone\[dq])
--check-first Do all the checks before starting transfers
--checkers int Number of checkers to run in parallel (default 8)
@@ -20119,6 +20372,7 @@ These flags are available for every command.
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
@@ -20207,7 +20461,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.61.0\[dq])
+ --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.62.0\[dq])
-v, --verbose count Print lots more stuff (repeat for more)
\f[R]
.fi
@@ -20425,6 +20679,7 @@ They control the backends and may be set in the config file.
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
+ --gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don\[aq]t attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
@@ -20513,6 +20768,7 @@ They control the backends and may be set in the config file.
--mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
+ --mega-use-https Use HTTPS for transfers
--mega-user string User name
--netstorage-account string Set the NetStorage account name
--netstorage-host string Domain+path of NetStorage host to connect to
@@ -20528,6 +20784,7 @@ They control the backends and may be set in the config file.
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
+ --onedrive-hash-type string Specify the hash in use for the backend (default \[dq]auto\[dq])
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default \[dq]anonymous\[dq])
--onedrive-link-type string Set the type of the links created by the link command (default \[dq]view\[dq])
@@ -20552,6 +20809,12 @@ They control the backends and may be set in the config file.
--oos-no-check-bucket If set, don\[aq]t attempt to check the bucket exists or create it
--oos-provider string Choose your Auth Provider (default \[dq]env_auth\[dq])
--oos-region string Object storage Region
+ --oos-sse-customer-algorithm string If using SSE-C, the optional header that specifies \[dq]AES256\[dq] as the encryption algorithm
+ --oos-sse-customer-key string To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to
+ --oos-sse-customer-key-file string To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated
+      --oos-sse-customer-key-sha256 string If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption
+      --oos-sse-kms-key-id string If using your own master key in vault, this header specifies the
+ --oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default \[dq]Standard\[dq])
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
@@ -20620,6 +20883,7 @@ They control the backends and may be set in the config file.
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
+ --s3-sts-endpoint string Endpoint for STS
--s3-upload-concurrency int Concurrency for multipart uploads (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
@@ -20685,12 +20949,13 @@ They control the backends and may be set in the config file.
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
+ --smb-spn string Service principal name
--smb-user string SMB username (default \[dq]$USER\[dq])
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase
--storj-provider string Choose an authentication method (default \[dq]existing\[dq])
- --storj-satellite-address string Satellite address (default \[dq]us-central-1.storj.io\[dq])
+ --storj-satellite-address string Satellite address (default \[dq]us1.storj.io\[dq])
--sugarsync-access-key-id string Sugarsync Access Key ID
--sugarsync-app-id string Sugarsync App ID
--sugarsync-authorization string Sugarsync authorization
@@ -21635,8 +21900,9 @@ or bisync will fail.
This is required for safety - that bisync can verify that both paths are
valid.
.PP
-When using \f[C]--resync\f[R] a newer version of a file on the Path2
-filesystem will be overwritten by the Path1 filesystem version.
+When using \f[C]--resync\f[R], a newer version of a file on either the
+Path1 or Path2 filesystem will overwrite the file on the other path
+(only the last version will be kept).
Carefully evaluate deltas using
--dry-run (https://rclone.org/flags/#non-backend-flags).
.PP
@@ -21655,16 +21921,29 @@ deleting \f[B]everything\f[R] in the other path.
Access check files are an additional safety measure against data loss.
bisync will ensure it can find matching \f[C]RCLONE_TEST\f[R] files in
the same places in the Path1 and Path2 filesystems.
+\f[C]RCLONE_TEST\f[R] files are not generated automatically.
+For \f[C]--check-access\f[R] to succeed, you must first either:
+\f[B]A)\f[R] Place one or more \f[C]RCLONE_TEST\f[R] files in the Path1
+or Path2 filesystem and then do either a run without
+\f[C]--check-access\f[R] or a \f[C]--resync\f[R] to set matching files
+on both filesystems, or \f[B]B)\f[R] Set \f[C]--check-filename\f[R] to
+a filename already in use in various locations throughout your
+sync\[aq]d fileset.
+fileset.
Time stamps and file contents are not important, just the names and
locations.
-Place one or more \f[C]RCLONE_TEST\f[R] files in the Path1 or Path2
-filesystem and then do either a run without \f[C]--check-access\f[R] or
-a \f[C]--resync\f[R] to set matching files on both filesystems.
If you have symbolic links in your sync tree it is recommended to place
\f[C]RCLONE_TEST\f[R] files in the linked-to directory tree to protect
against bisync assuming a bunch of deleted files if the linked-to tree
should not be accessible.
-Also see the \f[C]--check-filename\f[R] flag.
+See also the \f[C]--check-filename\f[R] flag.
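+.PP
+For example, a minimal sketch of option \f[B]A)\f[R] above (the remote
+names and paths are illustrative):
+.IP
+.nf
+\f[C]
+rclone touch remote1:path/RCLONE_TEST
+rclone bisync remote1:path remote2:path --resync
+rclone bisync remote1:path remote2:path --check-access
+\f[R]
+.fi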
+.SS --check-filename
+.PP
+Name of the file(s) used in access health validation.
+The default \f[C]--check-filename\f[R] is \f[C]RCLONE_TEST\f[R].
+One or more files having this filename must exist, synchronized between
+your source and destination filesets, in order for
+\f[C]--check-access\f[R] to succeed.
+See \f[C]--check-access\f[R] for additional details.
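+.PP
+For example, a sketch using a custom check file name (the name is
+illustrative):
+.IP
+.nf
+\f[C]
+rclone bisync remote1:path remote2:path --check-access --check-filename .rclone-check
+\f[R]
+.fi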
.SS --max-delete
.PP
As a safety check, if greater than the \f[C]--max-delete\f[R] percent of
@@ -23835,7 +24114,7 @@ excess files in the bucket.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:bucket
+rclone sync --interactive /home/local/directory remote:bucket
\f[R]
.fi
.SS Configuration
@@ -24291,7 +24570,8 @@ $ rclone -q --s3-versions ls s3:cleanup-test
.PP
If you run \f[C]rclone cleanup s3:bucket\f[R] then it will remove all
pending multipart uploads older than 24 hours.
-You can use the \f[C]-i\f[R] flag to see exactly what it will do.
+You can use the \f[C]--interactive\f[R]/\f[C]-i\f[R] or
+\f[C]--dry-run\f[R] flag to see exactly what it will do.
If you want more control over the expiry date then run
\f[C]rclone backend cleanup s3:bucket -o max-age=1h\f[R] to expire all
uploads older than one hour.
@@ -26546,7 +26826,7 @@ EU Endpoint
.RE
.SS --s3-endpoint
.PP
-Endpoint of the Shared Gateway.
+Endpoint for Storj Gateway.
.PP
Properties:
.IP \[bu] 2
@@ -26563,22 +26843,10 @@ Required: false
Examples:
.RS 2
.IP \[bu] 2
-\[dq]gateway.eu1.storjshare.io\[dq]
+\[dq]gateway.storjshare.io\[dq]
.RS 2
.IP \[bu] 2
-EU1 Shared Gateway
-.RE
-.IP \[bu] 2
-\[dq]gateway.us1.storjshare.io\[dq]
-.RS 2
-.IP \[bu] 2
-US1 Shared Gateway
-.RE
-.IP \[bu] 2
-\[dq]gateway.ap1.storjshare.io\[dq]
-.RS 2
-.IP \[bu] 2
-Asia-Pacific Shared Gateway
+Global Hosted Gateway
.RE
.RE
.SS --s3-endpoint
@@ -29310,6 +29578,23 @@ Env Var: RCLONE_S3_NO_SYSTEM_METADATA
Type: bool
.IP \[bu] 2
Default: false
+.SS --s3-sts-endpoint
+.PP
+Endpoint for STS.
+.PP
+Leave blank if using AWS to use the default endpoint for the region.
+.PP
+Properties:
+.IP \[bu] 2
+Config: sts_endpoint
+.IP \[bu] 2
+Env Var: RCLONE_S3_STS_ENDPOINT
+.IP \[bu] 2
+Provider: AWS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
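+.PP
+For example, a config sketch pointing rclone at a regional STS endpoint
+(the endpoint value and remote name are illustrative):
+.IP
+.nf
+\f[C]
+[s3]
+type = s3
+provider = AWS
+# illustrative regional endpoint
+sts_endpoint = https://sts.eu-west-1.amazonaws.com
+\f[R]
+.fi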
.SS Metadata
.PP
User metadata is stored as x-amz-meta- keys.
@@ -29467,11 +29752,11 @@ rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]
.fi
.PP
This flag also obeys the filters.
-Test first with -i/--interactive or --dry-run flags
+Test first with --interactive/-i or --dry-run flags
.IP
.nf
\f[C]
-rclone -i backend restore --include \[dq]*.txt\[dq] s3:bucket/path -o priority=Standard
+rclone --interactive backend restore --include \[dq]*.txt\[dq] s3:bucket/path -o priority=Standard
\f[R]
.fi
.PP
@@ -29569,8 +29854,8 @@ rclone backend cleanup remote: [options] [+]
This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
.PP
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to
+see what it would do.
.IP
.nf
\f[C]
@@ -29597,8 +29882,8 @@ rclone backend cleanup-hidden remote: [options] [+]
This command removes any old hidden versions of files on a versions
enabled bucket.
.PP
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to
+see what it would do.
.IP
.nf
\f[C]
@@ -32332,7 +32617,7 @@ excess files in the bucket.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:bucket
+rclone sync --interactive /home/local/directory remote:bucket
\f[R]
.fi
.SS Application Keys
@@ -34602,8 +34887,8 @@ Create another directory (most probably on the same cloud storage) and
configure a new remote with desired metadata format, hash type, chunk
naming etc.
.IP \[bu] 2
-Now run \f[C]rclone sync -i oldchunks: newchunks:\f[R] and all your data
-will be transparently converted in transfer.
+Now run \f[C]rclone sync --interactive oldchunks: newchunks:\f[R] and
+all your data will be transparently converted in transfer.
This may take some time, yet chunker will try server-side copy if
possible.
.IP \[bu] 2
@@ -35376,7 +35661,7 @@ File content encryption is performed using NaCl
SecretBox (https://godoc.org/golang.org/x/crypto/nacl/secretbox), based
on XSalsa20 cipher and Poly1305 for integrity.
Names (file- and directory names) are also encrypted by default, but
-this has some implications and is therefore possible to turned off.
+this has some implications and it can therefore be turned off.
.SS Configuration
.PP
Here is an example of how to make a remote called \f[C]secret\f[R].
@@ -36114,7 +36399,7 @@ To sync the two remotes you would do
.IP
.nf
\f[C]
-rclone sync -i remote:crypt remote2:crypt
+rclone sync --interactive remote:crypt remote2:crypt
\f[R]
.fi
.PP
@@ -37648,7 +37933,7 @@ any excess files in the directory.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:directory
+rclone sync --interactive /home/local/directory remote:directory
\f[R]
.fi
.SS Anonymous FTP
@@ -38339,7 +38624,7 @@ excess files in the bucket.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:bucket
+rclone sync --interactive /home/local/directory remote:bucket
\f[R]
.fi
.SS Service Account support
@@ -39032,6 +39317,39 @@ Archive storage class
Durable reduced availability storage class
.RE
.RE
+.SS --gcs-env-auth
+.PP
+Get GCP IAM credentials from runtime (environment variables or instance
+meta data if no env vars).
+.PP
+Only applies if service_account_file and service_account_credentials is
+blank.
+.PP
+Properties:
+.IP \[bu] 2
+Config: env_auth
+.IP \[bu] 2
+Env Var: RCLONE_GCS_ENV_AUTH
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]false\[dq]
+.RS 2
+.IP \[bu] 2
+Enter credentials in the next step.
+.RE
+.IP \[bu] 2
+\[dq]true\[dq]
+.RS 2
+.IP \[bu] 2
+Get GCP IAM credentials from the environment (env vars or IAM).
+.RE
+.RE
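+.PP
+For example, a minimal config sketch that picks up credentials from the
+runtime (the remote name is illustrative):
+.IP
+.nf
+\f[C]
+[gcs]
+type = google cloud storage
+# leave service_account_file/credentials blank for this to apply
+env_auth = true
+\f[R]
+.fi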
.SS Advanced options
.PP
Here are the Advanced options specific to google cloud storage (Google
@@ -40594,6 +40912,11 @@ error code \[dq]cannotDownloadAbusiveFile\[dq] then supply this flag to
rclone to indicate you acknowledge the risks of downloading the file and
rclone will download it anyway.
.PP
+Note that if you are using a service account it will need Manager
+permission (not Content Manager) for this flag to work.
+If the SA does not have the right permission, Google will just ignore
+the flag.
+.PP
Properties:
.IP \[bu] 2
Config: acknowledge_abuse
@@ -41048,11 +41371,12 @@ via the API.
.nf
\f[C]
rclone backend untrash drive:directory
-rclone backend -i untrash drive:directory subdir
+rclone backend --interactive untrash drive:directory subdir
\f[R]
.fi
.PP
-Use the -i flag to see what would be restored before restoring it.
+Use the --interactive/-i or --dry-run flag to see what would be restored
+before restoring it.
.PP
Result:
.IP
@@ -41097,7 +41421,8 @@ as the file name.
If the destination is a drive backend then server-side copying will be
attempted if possible.
.PP
-Use the -i flag to see what would be copied before copying.
+Use the --interactive/-i or --dry-run flag to see what would be copied
+before copying.
.SS exportformats
.PP
Dump the export formats for debug purposes
@@ -41223,7 +41548,16 @@ enter an \[dq]Application name\[dq] (\[dq]rclone\[dq] is OK); enter
\[dq]User Support Email\[dq] (your own email is OK); enter
\[dq]Developer Contact Email\[dq] (your own email is OK); then click on
\[dq]Save\[dq] (all other data is optional).
-Click again on \[dq]Credentials\[dq] on the left panel to go back to the
+You will also have to add some scopes, including \f[C].../auth/docs\f[R]
+and \f[C].../auth/drive\f[R] in order to be able to edit, create and
+delete files with rclone.
+You may also want to include the
+\f[C].../auth/drive.metadata.readonly\f[R] scope.
+After adding scopes, click \[dq]Save and continue\[dq] to add test
+users.
+Be sure to add your own account to the test users.
+Once you\[aq]ve added yourself as a test user and saved the changes,
+click again on \[dq]Credentials\[dq] on the left panel to go back to the
\[dq]Credentials\[dq] screen.
.RS 4
.PP
@@ -41243,17 +41577,16 @@ It will show you a client ID and client secret.
Make a note of these.
.RS 4
.PP
-(If you selected \[dq]External\[dq] at Step 5 continue to \[dq]Publish
-App\[dq] in the Steps 9 and 10.
+(If you selected \[dq]External\[dq] at Step 5 continue to Step 9.
If you chose \[dq]Internal\[dq] you don\[aq]t need to publish and can
-skip straight to Step 11.)
+skip straight to Step 10 but your destination drive must be part of the
+same Google Workspace.)
.RE
.IP " 9." 4
-Go to \[dq]Oauth consent screen\[dq] and press \[dq]Publish App\[dq]
+Go to \[dq]Oauth consent screen\[dq] and then click the \[dq]PUBLISH
+APP\[dq] button and confirm.
+You will also want to add yourself as a test user.
.IP "10." 4
-Click \[dq]OAuth consent screen\[dq], then click \[dq]PUBLISH APP\[dq]
-button and confirm, or add your account under \[dq]Test users\[dq].
-.IP "11." 4
Provide the noted client ID and client secret to rclone.
.PP
Be aware that, due to the \[dq]enhanced security\[dq] recently
@@ -41264,6 +41597,11 @@ client secret with rclone, the only issue will be a very scary
confirmation screen shown when you connect via your browser for rclone
to be able to get its token-id (but as this only happens during the
remote configuration, it\[aq]s not such a big deal).
+Keeping the application in \[dq]Testing\[dq] will also work, but any
+grants will expire after a week, which can be annoying to refresh
+constantly; if that is not a problem for you, testing mode is
+sufficient.
.PP
(Thanks to \[at]balazer on github for these instructions.)
.PP
@@ -41411,12 +41749,12 @@ excess files in the album.
.IP
.nf
\f[C]
-rclone sync -i /home/local/image remote:album/newAlbum
+rclone sync --interactive /home/local/image remote:album/newAlbum
\f[R]
.fi
.SS Layout
.PP
-As Google Photos is not a general purpose cloud storage system the
+As Google Photos is not a general purpose cloud storage system, the
backend is laid out to help you navigate it.
.PP
The directories under \f[C]media\f[R] show different ways of
@@ -42321,7 +42659,7 @@ deleting any excess files.
.IP
.nf
\f[C]
-rclone sync -i remote:directory /home/local/directory
+rclone sync --interactive remote:directory /home/local/directory
\f[R]
.fi
.SS Setting up your own HDFS instance for testing
@@ -43202,7 +43540,7 @@ deleting any excess files.
.IP
.nf
\f[C]
-rclone sync -i remote:directory /home/local/directory
+rclone sync --interactive remote:directory /home/local/directory
\f[R]
.fi
.SS Read only
@@ -43382,7 +43720,7 @@ excess files in the item.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:item
+rclone sync --interactive /home/local/directory remote:item
\f[R]
.fi
.SS Notes
@@ -44981,7 +45319,7 @@ excess files in the path.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:directory
+rclone sync --interactive /home/local/directory remote:directory
\f[R]
.fi
.SS Modified time
@@ -45681,6 +46019,27 @@ Env Var: RCLONE_MEGA_HARD_DELETE
Type: bool
.IP \[bu] 2
Default: false
+.SS --mega-use-https
+.PP
+Use HTTPS for transfers.
+.PP
+MEGA uses plain text HTTP connections by default.
+Some ISPs throttle HTTP connections, which causes transfers to become
+very slow.
+Enabling this will force MEGA to use HTTPS for all transfers.
+HTTPS is normally not necessary since all data is already encrypted
+anyway.
+Enabling it will increase CPU usage and add network overhead.
+.PP
+Properties:
+.IP \[bu] 2
+Config: use_https
+.IP \[bu] 2
+Env Var: RCLONE_MEGA_USE_HTTPS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
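+.PP
+For example, to force HTTPS for a single transfer (the paths are
+illustrative):
+.IP
+.nf
+\f[C]
+rclone copy /path/to/src mega:dst --mega-use-https
+\f[R]
+.fi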
.SS --mega-encoding
.PP
The encoding for the backend.
@@ -46266,7 +46625,7 @@ any excess files in the container.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:container
+rclone sync --interactive /home/local/directory remote:container
\f[R]
.fi
.SS --fast-list
@@ -47509,10 +47868,20 @@ OneDrive allows modification times to be set on objects accurate to 1
second.
These will be used to detect whether objects need syncing or not.
.PP
-OneDrive personal supports SHA1 type hashes.
-OneDrive for business and Sharepoint Server support
+OneDrive Personal, OneDrive for Business and Sharepoint Server support
QuickXorHash (https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash).
.PP
+Before rclone 1.62 the default hash for Onedrive Personal was
+\f[C]SHA1\f[R].
+For rclone 1.62 and above the default for all Onedrive backends is
+\f[C]QuickXorHash\f[R].
+.PP
+Starting from July 2023 \f[C]SHA1\f[R] support is being phased out in
+Onedrive Personal in favour of \f[C]QuickXorHash\f[R].
+If necessary the \f[C]--onedrive-hash-type\f[R] flag (or
+\f[C]hash_type\f[R] config option) can be used to select \f[C]SHA1\f[R]
+during the transition period if this is important for your workflow.
+.PP
For all types of OneDrive you can use the \f[C]--checksum\f[R] flag.
.SS Restricted filename characters
.PP
@@ -48070,6 +48439,77 @@ Env Var: RCLONE_ONEDRIVE_LINK_PASSWORD
Type: string
.IP \[bu] 2
Required: false
+.SS --onedrive-hash-type
+.PP
+Specify the hash in use for the backend.
+.PP
+This specifies the hash type in use.
+If set to \[dq]auto\[dq] it will use the default hash, which is
+QuickXorHash.
+.PP
+Before rclone 1.62 an SHA1 hash was used by default for Onedrive
+Personal.
+For 1.62 and later the default is to use a QuickXorHash for all onedrive
+types.
+If an SHA1 hash is desired then set this option accordingly.
+.PP
+From July 2023 QuickXorHash will be the only available hash for both
+OneDrive for Business and OneDrive Personal.
+.PP
+This can be set to \[dq]none\[dq] to not use any hashes.
+.PP
+If the hash requested does not exist on the object, it will be returned
+as an empty string which is treated as a missing hash by rclone.
+.PP
+Properties:
+.IP \[bu] 2
+Config: hash_type
+.IP \[bu] 2
+Env Var: RCLONE_ONEDRIVE_HASH_TYPE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]auto\[dq]
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]auto\[dq]
+.RS 2
+.IP \[bu] 2
+Rclone chooses the best hash
+.RE
+.IP \[bu] 2
+\[dq]quickxor\[dq]
+.RS 2
+.IP \[bu] 2
+QuickXor
+.RE
+.IP \[bu] 2
+\[dq]sha1\[dq]
+.RS 2
+.IP \[bu] 2
+SHA1
+.RE
+.IP \[bu] 2
+\[dq]sha256\[dq]
+.RS 2
+.IP \[bu] 2
+SHA256
+.RE
+.IP \[bu] 2
+\[dq]crc32\[dq]
+.RS 2
+.IP \[bu] 2
+CRC32
+.RE
+.IP \[bu] 2
+\[dq]none\[dq]
+.RS 2
+.IP \[bu] 2
+None - don\[aq]t use any hashes
+.RE
+.RE
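+.PP
+For example, a sketch of listing SHA1 hashes during the transition
+period (the remote name is illustrative):
+.IP
+.nf
+\f[C]
+rclone hashsum SHA1 remote:path --onedrive-hash-type sha1
+\f[R]
+.fi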
.SS --onedrive-encoding
.PP
The encoding for the backend.
@@ -48216,13 +48656,13 @@ the current version.
Because this involves traversing all the files, then querying each file
for versions it can be quite slow.
Rclone does \f[C]--checkers\f[R] tests in parallel.
-The command also supports \f[C]-i\f[R] which is a great way to see what
-it would do.
+The command also supports \f[C]--interactive\f[R]/\f[C]-i\f[R] or
+\f[C]--dry-run\f[R] which is a great way to see what it would do.
.IP
.nf
\f[C]
-rclone cleanup -i remote:path/subdir # interactively remove all old version for path/subdir
-rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir
+rclone cleanup --interactive remote:path/subdir # interactively remove all old versions for path/subdir
+rclone cleanup remote:path/subdir # unconditionally remove all old versions for path/subdir
\f[R]
.fi
.PP
@@ -48342,24 +48782,74 @@ Visit https://onedrive.live.com (https://onedrive.live.com/)
.IP "2." 3
Right click a item in \f[C]Shared\f[R], then click
\f[C]Add shortcut to My files\f[R] in the context
-.RS 4
-.PP
-Screenshot (Shared with me)
-.PP
[IMAGE: make_shortcut (https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png)]
-.RE
.IP "3." 3
The shortcut will appear in \f[C]My files\f[R], you can access it with
rclone, it behaves like a normal folder/file.
-.RS 4
-.PP
-Screenshot (My Files)
-.PP
[IMAGE: in_my_files (https://i.imgur.com/0S8H3li.png)]
-.RE
-.PP
-Screenshot (rclone mount)
[IMAGE: rclone_mount (https://i.imgur.com/2Iq66sW.png)]
+.SS Live Photos uploaded from iOS (small video clips in .heic files)
+.PP
+The iOS OneDrive app introduced upload and
+storage (https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452)
+of Live Photos (https://support.apple.com/en-gb/HT207310) in 2020.
+The usage and download of these uploaded Live Photos is unfortunately
+still work-in-progress and this introduces several issues when copying,
+synchronising and mounting \[en] both in rclone and in the native
+OneDrive client on Windows.
+.PP
+The root cause can easily be seen if you locate one of your Live Photos
+in the OneDrive web interface.
+Then download the photo from the web interface.
+You will then see that the size of the downloaded .heic file is smaller
+than the size displayed in the web interface.
+The downloaded file is smaller because it only contains a single frame
+(still photo) extracted from the Live Photo (movie) stored in OneDrive.
+.PP
+The different sizes will cause \f[C]rclone copy/sync\f[R] to repeatedly
+recopy unmodified photos, with output something like this:
+.IP
+.nf
+\f[C]
+DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
+DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
+INFO : 20230203_123826234_iOS.heic: Copied (replaced existing)
+\f[R]
+.fi
+.PP
+These recopies can be worked around by adding \f[C]--ignore-size\f[R].
+Please note that this workaround only syncs the still picture, not the
+movie clip, and relies on modification dates being correctly updated on
+all files in all situations.
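+.PP
+For example (the remote name and paths are illustrative):
+.IP
+.nf
+\f[C]
+rclone sync --ignore-size OneDrive:Photos /path/to/local/Photos
+\f[R]
+.fi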
+.PP
+The different sizes will also cause \f[C]rclone check\f[R] to report
+size errors something like this:
+.IP
+.nf
+\f[C]
+ERROR : 20230203_123826234_iOS.heic: sizes differ
+\f[R]
+.fi
+.PP
+These check errors can be suppressed by adding \f[C]--ignore-size\f[R].
+.PP
+The different sizes will also cause \f[C]rclone mount\f[R] to fail
+downloading with an error something like this:
+.IP
+.nf
+\f[C]
+ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
+\f[R]
+.fi
+.PP
+or like this when using \f[C]--cache-mode=full\f[R]:
+.IP
+.nf
+\f[C]
+INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
+ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
+\f[R]
+.fi
.SH OpenDrive
.PP
Paths are specified as \f[C]remote:path\f[R]
@@ -48763,7 +49253,7 @@ Enter a value. Press Enter to leave empty.
endpoint>
Option config_file.
-Path to OCI config file
+Full Path to OCI config file
Choose a number from below, or type in your own string value.
Press Enter for the default (\[ti]/.oci/config).
1 / oci configuration file location
@@ -48824,6 +49314,136 @@ rclone ls remote:bucket
rclone ls remote:bucket --max-depth 1
\f[R]
.fi
+.SS OCI Authentication Provider
+.PP
+OCI has various authentication methods.
+To learn more about authentication methods please refer to the OCI
+authentication
+methods (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm)
+documentation.
+These choices can be specified in the rclone config file.
+.PP
+Rclone supports the following OCI authentication providers:
+.IP
+.nf
+\f[C]
+User Principal
+Instance Principal
+Resource Principal
+No authentication
+\f[R]
+.fi
+.SS Authentication provider choice: User Principal
+.PP
+Sample rclone config file for Authentication Provider User Principal:
+.IP
+.nf
+\f[C]
+[oos]
+type = oracleobjectstorage
+namespace = id34
+compartment = ocid1.compartment.oc1..aaba
+region = us-ashburn-1
+provider = user_principal_auth
+config_file = /home/opc/.oci/config
+config_profile = Default
+\f[R]
+.fi
+.PP
+Advantages:
+.IP \[bu] 2
+One can use this method from any server within OCI or on-premises or
+from another cloud provider.
+.PP
+Considerations:
+.IP \[bu] 2
+You need to configure the user\[aq]s privileges / policy to allow
+access to object storage.
+.IP \[bu] 2
+Overhead of managing users and keys.
+.IP \[bu] 2
+If the user is deleted, the config file will no longer work and may
+cause automation regressions that use the user\[aq]s credentials.
+.SS Authentication provider choice: Instance Principal
+.PP
+An OCI compute instance can be authorized to use rclone by using its
+identity and certificates as an instance principal.
+With this approach no credentials have to be stored and managed.
+.PP
+Sample rclone configuration file for Authentication Provider Instance
+Principal:
+.IP
+.nf
+\f[C]
+[opc\[at]rclone \[ti]]$ cat \[ti]/.config/rclone/rclone.conf
+[oos]
+type = oracleobjectstorage
+namespace = idfn
+compartment = ocid1.compartment.oc1..aak7a
+region = us-ashburn-1
+provider = instance_principal_auth
+\f[R]
+.fi
+.PP
+Advantages:
+.IP \[bu] 2
+With instance principals, you don\[aq]t need to configure user
+credentials, transfer or save them to disk in your compute instances,
+or rotate the credentials.
+.IP \[bu] 2
+You don\[cq]t need to deal with users and keys.
+.IP \[bu] 2
+Greatly helps in automation as you don\[aq]t have to manage access
+keys or user private keys, store them in a vault, use KMS, etc.
+.PP
+Considerations:
+.IP \[bu] 2
+You need to configure a dynamic group having this instance as a member
+and add a policy allowing that dynamic group to read object storage.
+.IP \[bu] 2
+Everyone who has access to this machine can execute the CLI commands.
+.IP \[bu] 2
+It is applicable to OCI compute instances only.
+It cannot be used on external instances or resources.
+.SS Authentication provider choice: Resource Principal
+.PP
+Resource principal auth is very similar to instance principal auth but
+used for resources that are not compute instances such as serverless
+functions (https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
+To use resource principal auth, ensure the rclone process is started
+with these environment variables set.
+.IP
+.nf
+\f[C]
+export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
+export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
+\f[R]
+.fi
+.PP
+Sample rclone configuration file for Authentication Provider Resource
+Principal:
+.IP
+.nf
+\f[C]
+[oos]
+type = oracleobjectstorage
+namespace = id34
+compartment = ocid1.compartment.oc1..aaba
+region = us-ashburn-1
+provider = resource_principal_auth
+\f[R]
+.fi
+.SS Authentication provider choice: No authentication
+.PP
+Public buckets do not require any authentication mechanism to read
+objects.
+Sample rclone configuration file for No authentication:
+.IP
+.nf
+\f[C]
+[oos]
+type = oracleobjectstorage
+namespace = id34
+compartment = ocid1.compartment.oc1..aaba
+region = us-ashburn-1
+provider = no_auth
+\f[R]
+.fi
.SS Modified time
.PP
The modified time is stored as metadata on the object as
@@ -49043,6 +49663,42 @@ Use the default profile
.PP
Here are the Advanced options specific to oracleobjectstorage (Oracle
Cloud Infrastructure Object Storage).
+.SS --oos-storage-tier
+.PP
+The storage class to use when storing new objects in storage.
+https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
+.PP
+Properties:
+.IP \[bu] 2
+Config: storage_tier
+.IP \[bu] 2
+Env Var: RCLONE_OOS_STORAGE_TIER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]Standard\[dq]
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]Standard\[dq]
+.RS 2
+.IP \[bu] 2
+Standard storage tier, this is the default tier
+.RE
+.IP \[bu] 2
+\[dq]InfrequentAccess\[dq]
+.RS 2
+.IP \[bu] 2
+InfrequentAccess storage tier
+.RE
+.IP \[bu] 2
+\[dq]Archive\[dq]
+.RS 2
+.IP \[bu] 2
+Archive storage tier
+.RE
+.RE
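+.PP
+For example, to upload new objects straight to the archive tier (a
+sketch, the paths are illustrative):
+.IP
+.nf
+\f[C]
+rclone copy /path/to/backup oos:bucket/backup --oos-storage-tier Archive
+\f[R]
+.fi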
.SS --oos-upload-cutoff
.PP
Cutoff for switching to chunked upload.
@@ -49224,6 +49880,150 @@ Env Var: RCLONE_OOS_NO_CHECK_BUCKET
Type: bool
.IP \[bu] 2
Default: false
+.SS --oos-sse-customer-key-file
+.PP
+To use SSE-C, a file containing the base64-encoded string of the AES-256
+encryption key associated with the object.
+Please note only one of
+sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_customer_key_file
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.RE
+.SS --oos-sse-customer-key
+.PP
+To use SSE-C, the optional header that specifies the base64-encoded
+256-bit encryption key to use to encrypt or decrypt the data.
+Please note only one of
+sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+For more information, see Using Your Own Keys for Server-Side Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_customer_key
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.RE
+.SS --oos-sse-customer-key-sha256
+.PP
+If using SSE-C, the optional header that specifies the base64-encoded
+SHA256 hash of the encryption key.
+This value is used to check the integrity of the encryption key.
+See Using Your Own Keys for Server-Side Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_customer_key_sha256
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.RE
+.SS --oos-sse-kms-key-id
+.PP
+If using your own master key in vault, this header specifies the
+OCID
+(https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm)
+of a master encryption key used to call the Key Management service to
+generate a data encryption key or to encrypt or decrypt a data
+encryption key.
+Please note only one of
+sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_kms_key_id
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_KMS_KEY_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.RE
+.SS --oos-sse-customer-algorithm
+.PP
+If using SSE-C, the optional header that specifies \[dq]AES256\[dq] as
+the encryption algorithm.
+Object Storage supports \[dq]AES256\[dq] as the encryption algorithm.
+For more information, see Using Your Own Keys for Server-Side Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_customer_algorithm
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.IP \[bu] 2
+\[dq]AES256\[dq]
+.RS 2
+.IP \[bu] 2
+AES256
+.RE
+.RE
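+.PP
+Tying the SSE-C options together, a sketch of an upload with a
+customer-supplied key (the paths are illustrative, and only one of the
+key options may be set):
+.IP
+.nf
+\f[C]
+rclone copy /path/to/file oos:bucket --oos-sse-customer-algorithm AES256 --oos-sse-customer-key-file /path/to/key.b64
+\f[R]
+.fi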
.SS Backend commands
.PP
Here are the commands specific to the oracleobjectstorage backend.
@@ -49314,8 +50114,8 @@ rclone backend cleanup remote: [options] [+]
This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
.PP
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to
+see what it would do.
.IP
.nf
\f[C]
@@ -49438,7 +50238,7 @@ excess files in the bucket.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:bucket
+rclone sync --interactive /home/local/directory remote:bucket
\f[R]
.fi
.SS --fast-list
@@ -50132,7 +50932,7 @@ any excess files in the container.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:container
+rclone sync --interactive /home/local/directory remote:container
\f[R]
.fi
.SS Configuration from an OpenStack credentials file
@@ -51491,9 +52291,10 @@ and may change over time.
This is a backend for the Seafile (https://www.seafile.com/) storage
service: - It works with both the free community edition or the
professional edition.
-- Seafile versions 6.x and 7.x are all supported.
+- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
- Encrypted libraries are also supported.
-- It supports 2FA enabled users
+- It supports 2FA enabled users
+- Using a Library API Token is \f[B]not\f[R] supported
.SS Configuration
.PP
There are two distinct modes you can setup your remote: - you point your
@@ -51623,7 +52424,7 @@ excess files in the library.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory seafile:library
+rclone sync --interactive /home/local/directory seafile:library
\f[R]
.fi
.SS Configuration in library mode
@@ -51741,7 +52542,7 @@ excess files in the library.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory seafile:
+rclone sync --interactive /home/local/directory seafile:
\f[R]
.fi
.SS --fast-list
@@ -51820,14 +52621,18 @@ If you run a link command on a file/dir that has already been shared,
you will get the exact same link.
.SS Compatibility
.PP
-It has been actively tested using the seafile docker
+It has been actively developed using the seafile docker
image (https://github.com/haiwen/seafile-docker) of these versions: -
6.3.4 community edition - 7.0.5 community edition - 7.1.3 community
-edition
+edition - 9.0.10 community edition
.PP
Versions below 6.0 are not supported.
Versions between 6.0 and 6.3 haven\[aq]t been tested and might not work
properly.
+.PP
+Each new version of \f[C]rclone\f[R] is automatically tested against the
+latest docker image (https://hub.docker.com/r/seafileltd/seafile-mc/) of
+the seafile community server.
.SS Standard options
.PP
Here are the Standard options specific to seafile (seafile).
@@ -52108,7 +52913,7 @@ any excess files in the directory.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:directory
+rclone sync --interactive /home/local/directory remote:directory
\f[R]
.fi
.PP
@@ -53383,6 +54188,32 @@ Env Var: RCLONE_SMB_DOMAIN
Type: string
.IP \[bu] 2
Default: \[dq]WORKGROUP\[dq]
+.SS --smb-spn
+.PP
+Service principal name.
+.PP
+Rclone presents this name to the server.
+Some servers use this as further authentication, and it often needs to
+be set for clusters.
+For example:
+.IP
+.nf
+\f[C]
+cifs/remotehost:1020
+\f[R]
+.fi
+.PP
+Leave blank if not sure.
+.PP
+Properties:
+.IP \[bu] 2
+Config: spn
+.IP \[bu] 2
+Env Var: RCLONE_SMB_SPN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS Advanced options
.PP
Here are the Advanced options specific to smb (SMB / CIFS).
@@ -53693,14 +54524,14 @@ Choose a number from below, or type in your own value
\[rs] \[dq]new\[dq]
provider> new
Satellite Address. Custom satellite address should match the format: \[ga]\[at]:\[ga].
-Enter a string value. Press Enter for the default (\[dq]us-central-1.storj.io\[dq]).
+Enter a string value. Press Enter for the default (\[dq]us1.storj.io\[dq]).
Choose a number from below, or type in your own value
- 1 / US Central 1
- \[rs] \[dq]us-central-1.storj.io\[dq]
- 2 / Europe West 1
- \[rs] \[dq]europe-west-1.storj.io\[dq]
- 3 / Asia East 1
- \[rs] \[dq]asia-east-1.storj.io\[dq]
+ 1 / US1
+ \[rs] \[dq]us1.storj.io\[dq]
+ 2 / EU1
+ \[rs] \[dq]eu1.storj.io\[dq]
+ 3 / AP1
+ \[rs] \[dq]ap1.storj.io\[dq]
satellite_address> 1
API Key.
Enter a string value. Press Enter for the default (\[dq]\[dq]).
@@ -53712,7 +54543,7 @@ Remote config
--------------------
[remote]
type = storj
-satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S\[at]us-central-1.tardigrade.io:7777
+satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S\[at]us1.storj.io:7777
api_key = your-api-key-for-your-storj-project
passphrase = your-human-readable-encryption-passphrase
access_grant = the-access-grant-generated-from-the-api-key-and-passphrase
@@ -53789,27 +54620,27 @@ Provider: new
.IP \[bu] 2
Type: string
.IP \[bu] 2
-Default: \[dq]us-central-1.storj.io\[dq]
+Default: \[dq]us1.storj.io\[dq]
.IP \[bu] 2
Examples:
.RS 2
.IP \[bu] 2
-\[dq]us-central-1.storj.io\[dq]
+\[dq]us1.storj.io\[dq]
.RS 2
.IP \[bu] 2
-US Central 1
+US1
.RE
.IP \[bu] 2
-\[dq]europe-west-1.storj.io\[dq]
+\[dq]eu1.storj.io\[dq]
.RS 2
.IP \[bu] 2
-Europe West 1
+EU1
.RE
.IP \[bu] 2
-\[dq]asia-east-1.storj.io\[dq]
+\[dq]ap1.storj.io\[dq]
.RS 2
.IP \[bu] 2
-Asia East 1
+AP1
.RE
.RE
.SS --storj-api-key
@@ -53994,7 +54825,7 @@ changing the destination only, deleting any excess files.
.IP
.nf
\f[C]
-rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/
+rclone sync --interactive --progress /home/local/directory/ remote:bucket/path/to/dir/
\f[R]
.fi
.PP
@@ -54008,7 +54839,7 @@ The sync can be done also from Storj to the local file system.
.IP
.nf
\f[C]
-rclone sync -i --progress remote:bucket/path/to/dir/ /home/local/directory/
+rclone sync --interactive --progress remote:bucket/path/to/dir/ /home/local/directory/
\f[R]
.fi
.PP
@@ -54016,7 +54847,7 @@ Or between two Storj buckets.
.IP
.nf
\f[C]
-rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
+rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
\f[R]
.fi
.PP
@@ -54024,7 +54855,7 @@ Or even between another cloud storage and Storj.
.IP
.nf
\f[C]
-rclone sync -i --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
+rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
\f[R]
.fi
.SS Limitations
@@ -55633,7 +56464,7 @@ excess files in the path.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:directory
+rclone sync --interactive /home/local/directory remote:directory
\f[R]
.fi
.PP
@@ -55920,7 +56751,7 @@ excess files in the path.
.IP
.nf
\f[C]
-rclone sync -i /home/local/directory remote:directory
+rclone sync --interactive /home/local/directory remote:directory
\f[R]
.fi
.PP
@@ -56116,7 +56947,7 @@ Local paths are specified as normal filesystem paths, e.g.
.IP
.nf
\f[C]
-rclone sync -i /home/source /tmp/destination
+rclone sync --interactive /home/source /tmp/destination
\f[R]
.fi
.PP
@@ -57215,6 +58046,345 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
+.SS v1.62.0 - 2023-03-14
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.61.0...v1.62.0)
+.IP \[bu] 2
+New Features
+.RS 2
+.IP \[bu] 2
+accounting: Make checkers show what they are doing (Nick Craig-Wood)
+.IP \[bu] 2
+authorize: Add support for custom templates (Hunter Wittenborn)
+.IP \[bu] 2
+build
+.RS 2
+.IP \[bu] 2
+Update to go1.20 (Nick Craig-Wood, Anagh Kumar Baranwal)
+.IP \[bu] 2
+Add winget releaser workflow (Ryan Caezar Itang)
+.IP \[bu] 2
+Add dependabot (Ryan Caezar Itang)
+.RE
+.IP \[bu] 2
+doc updates (albertony, Bryan Kaplan, Gerard Bosch, IMTheNachoMan,
+Justin Winokur, Manoj Ghosh, Nick Craig-Wood, Ole Frost, Peter Brunner,
+piyushgarg, Ryan Caezar Itang, Simmon Li, ToBeFree)
+.IP \[bu] 2
+filter: Emit INFO message when can\[aq]t work out directory filters
+(Nick Craig-Wood)
+.IP \[bu] 2
+fs
+.RS 2
+.IP \[bu] 2
+Added multiple ca certificate support.
+(alankrit)
+.IP \[bu] 2
+Add \f[C]--max-delete-size\f[R] a delete size threshold (Leandro
+Sacchet)
+.RE
+.IP \[bu] 2
+fspath: Allow the symbols \f[C]\[at]\f[R] and \f[C]+\f[R] in remote
+names (albertony)
+.IP \[bu] 2
+lib/terminal: Enable windows console virtual terminal sequences
+processing (ANSI/VT100 colors) (albertony)
+.IP \[bu] 2
+move: If \f[C]--check-first\f[R] and \f[C]--order-by\f[R] are set then
+delete with perfect ordering (Nick Craig-Wood)
+.IP \[bu] 2
+serve http: Support \f[C]--auth-proxy\f[R] (Matthias Baur)
+.RE
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+accounting
+.RS 2
+.IP \[bu] 2
+Avoid negative ETA values for very slow speeds (albertony)
+.IP \[bu] 2
+Limit length of ETA string (albertony)
+.IP \[bu] 2
+Show human readable elapsed time when longer than a day (albertony)
+.RE
+.IP \[bu] 2
+all: Apply codeql fixes (Aaron Gokaslan)
+.IP \[bu] 2
+build
+.RS 2
+.IP \[bu] 2
+Fix condition for manual workflow run (albertony)
+.IP \[bu] 2
+Fix building for ARMv5 and ARMv6 (albertony)
+.RS 2
+.IP \[bu] 2
+selfupdate: Consider ARM version
+.IP \[bu] 2
+install.sh: fix ARMv6 download
+.IP \[bu] 2
+version: Report ARM version
+.RE
+.RE
+.IP \[bu] 2
+deletefile: Return error code 4 if file does not exist (Nick Craig-Wood)
+.IP \[bu] 2
+docker: Fix volume plugin does not remount volume on docker restart
+(logopk)
+.IP \[bu] 2
+fs: Fix race conditions in \f[C]--max-delete\f[R] and
+\f[C]--max-delete-size\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+lib/oauthutil: Handle fatal errors better (Alex Chen)
+.IP \[bu] 2
+mount2: Fix \f[C]--allow-non-empty\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+operations: Fix concurrency: use \f[C]--checkers\f[R] unless
+transferring files (Nick Craig-Wood)
+.IP \[bu] 2
+serve ftp: Fix timestamps older than 1 year in listings (Nick
+Craig-Wood)
+.IP \[bu] 2
+sync: Fix concurrency: use \f[C]--checkers\f[R] unless transferring
+files (Nick Craig-Wood)
+.IP \[bu] 2
+tree
+.RS 2
+.IP \[bu] 2
+Fix nil pointer exception on stat failure (Nick Craig-Wood)
+.IP \[bu] 2
+Fix colored output on windows (albertony)
+.IP \[bu] 2
+Fix display of files with illegal Windows file system names (Nick
+Craig-Wood)
+.RE
+.RE
+.IP \[bu] 2
+Mount
+.RS 2
+.IP \[bu] 2
+Fix creating and renaming files on case insensitive backends (Nick
+Craig-Wood)
+.IP \[bu] 2
+Do not treat \f[C]\[rs]\[rs]?\[rs]\f[R] prefixed paths as network share
+paths on windows (albertony)
+.IP \[bu] 2
+Fix check for empty mount point on Linux (Nick Craig-Wood)
+.IP \[bu] 2
+Fix \f[C]--allow-non-empty\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Avoid incorrect or premature overlap check on windows (albertony)
+.IP \[bu] 2
+Update to fuse3 after bazil.org/fuse update (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Make uploaded files retain modtime with non-modtime backends (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix incorrect modtime on fs which don\[aq]t support setting modtime
+(Nick Craig-Wood)
+.IP \[bu] 2
+Fix rename of directory containing files to be uploaded (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Local
+.RS 2
+.IP \[bu] 2
+Fix \f[C]%!w()\f[R] in \[dq]failed to read directory\[dq] error
+(Marks Polakovs)
+.IP \[bu] 2
+Fix exclusion of dangling symlinks with -L/--copy-links (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Crypt
+.RS 2
+.IP \[bu] 2
+Obey \f[C]--ignore-checksum\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Fix for unencrypted directory names on case insensitive remotes (Ole
+Frost)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Remove workarounds for SDK bugs after v0.6.1 update (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Fix uploading files bigger than 1TiB (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Note that \f[C]--drive-acknowledge-abuse\f[R] needs SA Manager
+permission (Nick Craig-Wood)
+.IP \[bu] 2
+Make \f[C]--drive-stop-on-upload-limit\f[R] to respond to
+storageQuotaExceeded (Ninh Pham)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Retry 426 errors (Nick Craig-Wood)
+.IP \[bu] 2
+Retry errors when initiating downloads (Nick Craig-Wood)
+.IP \[bu] 2
+Revert to upstream \f[C]github.com/jlaffaye/ftp\f[R] now fix is merged
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Google Cloud Storage
+.RS 2
+.IP \[bu] 2
+Add \f[C]--gcs-env-auth\f[R] to pick up IAM credentials from
+env/instance (Peter Brunner)
+.RE
+.IP \[bu] 2
+Mega
+.RS 2
+.IP \[bu] 2
+Add \f[C]--mega-use-https\f[R] flag (NodudeWasTaken)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Default onedrive personal to QuickXorHash as Microsoft is removing SHA1
+(Nick Craig-Wood)
+.IP \[bu] 2
+Add \f[C]--onedrive-hash-type\f[R] to change the hash in use (Nick
+Craig-Wood)
+.IP \[bu] 2
+Improve speed of QuickXorHash (LXY)
+.RE
+.IP \[bu] 2
+Oracle Object Storage
+.RS 2
+.IP \[bu] 2
+Speed up operations by using S3 pacer and setting minsleep to 10ms
+(Manoj Ghosh)
+.IP \[bu] 2
+Expose the \f[C]storage_tier\f[R] option in config (Manoj Ghosh)
+.IP \[bu] 2
+Bring your own encryption keys (Manoj Ghosh)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Check multipart upload ETag when \f[C]--s3-no-head\f[R] is in use (Nick
+Craig-Wood)
+.IP \[bu] 2
+Add \f[C]--s3-sts-endpoint\f[R] to specify STS endpoint (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix incorrect tier support for StorJ and IDrive when pointing at a file
+(Ole Frost)
+.IP \[bu] 2
+Fix AWS STS failing if \f[C]--s3-endpoint\f[R] is set (Nick Craig-Wood)
+.IP \[bu] 2
+Make purge remove directory markers too (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Seafile
+.RS 2
+.IP \[bu] 2
+Renew library password (Fred)
+.RE
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Fix uploads being 65% slower than they should be with crypt (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Smb
+.RS 2
+.IP \[bu] 2
+Allow SPN (service principal name) to be configured (Nick Craig-Wood)
+.IP \[bu] 2
+Check smb connection is closed (happyxhw)
+.RE
+.IP \[bu] 2
+Storj
+.RS 2
+.IP \[bu] 2
+Implement \f[C]rclone link\f[R] (Kaloyan Raev)
+.IP \[bu] 2
+Implement \f[C]rclone purge\f[R] (Kaloyan Raev)
+.IP \[bu] 2
+Update satellite urls and labels (Kaloyan Raev)
+.RE
+.IP \[bu] 2
+WebDAV
+.RS 2
+.IP \[bu] 2
+Fix interop with davrods server (Nick Craig-Wood)
+.RE
+.SS v1.61.1 - 2022-12-23
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.61.0...v1.61.1)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+docs:
+.RS 2
+.IP \[bu] 2
+Show only significant parts of version number in version introduced
+label (albertony)
+.IP \[bu] 2
+Fix unescaped HTML (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+lib/http: Shutdown all servers on exit to remove unix socket (Nick
+Craig-Wood)
+.IP \[bu] 2
+rc: Fix \f[C]--rc-addr\f[R] flag (which is an alternate for
+\f[C]--url\f[R]) (Anagh Kumar Baranwal)
+.IP \[bu] 2
+serve restic
+.RS 2
+.IP \[bu] 2
+Don\[aq]t serve via http if serving via \f[C]--stdio\f[R] (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix immediate exit when not using stdio (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+serve webdav
+.RS 2
+.IP \[bu] 2
+Fix \f[C]--baseurl\f[R] handling after \f[C]lib/http\f[R] refactor (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix running duplicate Serve call (Nick Craig-Wood)
+.RE
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Fix \[dq]409 Public access is not permitted on this storage account\[dq]
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+storj: Update endpoints (Kaloyan Raev)
+.RE
.SS v1.61.0 - 2022-12-20
.PP
See commits (https://github.com/rclone/rclone/compare/v1.60.0...v1.61.0)
@@ -68523,7 +69693,7 @@ e.g.
.IP
.nf
\f[C]
-rclone sync -i drive:Folder s3:bucket
+rclone sync --interactive drive:Folder s3:bucket
\f[R]
.fi
.SS Using rclone from multiple locations at the same time
@@ -68533,8 +69703,8 @@ different subdirectory for the output, e.g.
.IP
.nf
\f[C]
-Server A> rclone sync -i /tmp/whatever remote:ServerA
-Server B> rclone sync -i /tmp/whatever remote:ServerB
+Server A> rclone sync --interactive /tmp/whatever remote:ServerA
+Server B> rclone sync --interactive /tmp/whatever remote:ServerB
\f[R]
.fi
.PP
@@ -70131,6 +71301,50 @@ vanplus <60313789+vanplus@users.noreply.github.com>
Jack <16779171+jkpe@users.noreply.github.com>
.IP \[bu] 2
Abdullah Saglam
+.IP \[bu] 2
+Marks Polakovs
+.IP \[bu] 2
+piyushgarg
+.IP \[bu] 2
+Kaloyan Raev
+.IP \[bu] 2
+IMTheNachoMan
+.IP \[bu] 2
+alankrit
+.IP \[bu] 2
+Bryan Kaplan <#\[at]bryankaplan.com>
+.IP \[bu] 2
+LXY <767763591@qq.com>
+.IP \[bu] 2
+Simmon Li (he/him)
+.IP \[bu] 2
+happyxhw <44490504+happyxhw@users.noreply.github.com>
+.IP \[bu] 2
+Simmon Li (he/him)
+.IP \[bu] 2
+Matthias Baur
+.IP \[bu] 2
+Hunter Wittenborn
+.IP \[bu] 2
+logopk
+.IP \[bu] 2
+Gerard Bosch <30733556+gerardbosch@users.noreply.github.com>
+.IP \[bu] 2
+ToBeFree
+.IP \[bu] 2
+NodudeWasTaken <75137537+NodudeWasTaken@users.noreply.github.com>
+.IP \[bu] 2
+Peter Brunner
+.IP \[bu] 2
+Ninh Pham
+.IP \[bu] 2
+Ryan Caezar Itang
+.IP \[bu] 2
+Peter Brunner
+.IP \[bu] 2
+Leandro Sacchet
+.IP \[bu] 2
+dependabot[bot] <49699333+dependabot[bot]\[at]users.noreply.github.com>
.SH Contact the rclone project
.SS Forum
.PP