mirror of https://github.com/rclone/rclone.git
synced 2025-01-08 23:40:29 +01:00

docs: cleanup backend hashes sections

parent 98a96596df
commit a7faf05393
@@ -127,13 +127,13 @@ To copy a local directory to an Amazon Drive directory called backup

     rclone copy /home/source remote:backup

-### Modified time and MD5SUMs
+### Modification times and hashes

 Amazon Drive doesn't allow modification times to be changed via
 the API so these won't be accurate or used for syncing.

-It does store MD5SUMs so for a more accurate sync, you can use the
-`--checksum` flag.
+It does support the MD5 hash algorithm, so for a more accurate sync,
+you can use the `--checksum` flag.

 ### Restricted filename characters

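As an aside to the hunk above: the `--checksum` comparison relies on MD5 digests. A minimal illustration using the Python standard library (the file content here is invented, purely for demonstration):

```python
import hashlib

# Illustrative only: an MD5 digest like the one rclone compares when
# `--checksum` is used instead of modification times.
digest = hashlib.md5(b"hello").hexdigest()
print(digest)  # 5d41402abc4b2a76b9719d911017c592
```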
@@ -75,10 +75,10 @@ This remote supports `--fast-list` which allows you to use fewer
 transactions in exchange for more memory. See the [rclone
 docs](/docs/#fast-list) for more details.

-### Modified time
+### Modification times and hashes

-The modified time is stored as metadata on the object with the `mtime`
-key. It is stored using RFC3339 Format time with nanosecond
+The modification time is stored as metadata on the object with the
+`mtime` key. It is stored using RFC3339 Format time with nanosecond
 precision. The metadata is supplied during directory listings so
 there is no performance overhead to using it.

@@ -88,6 +88,10 @@ flag. Note that rclone can't set `LastModified`, so using the
 `--update` flag when syncing is recommended if using
 `--use-server-modtime`.

+MD5 hashes are stored with blobs. However blobs that were uploaded in
+chunks only have an MD5 if the source remote was capable of MD5
+hashes, e.g. the local disk.
+
 ### Performance

 When uploading large files, increasing the value of
@@ -116,12 +120,6 @@ These only get replaced if they are the last character in the name:
 Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
 as they can't be used in JSON strings.

-### Hashes
-
-MD5 hashes are stored with blobs. However blobs that were uploaded in
-chunks only have an MD5 if the source remote was capable of MD5
-hashes, e.g. the local disk.
-
 ### Authentication {#authentication}

 There are a number of ways of supplying credentials for Azure Blob
@@ -96,9 +96,9 @@ This remote supports `--fast-list` which allows you to use fewer
 transactions in exchange for more memory. See the [rclone
 docs](/docs/#fast-list) for more details.

-### Modified time
+### Modification times

-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
 `X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01
 in the Backblaze standard. Other tools should be able to use this as
 a modified time.
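The millisecond timestamp format described in the hunk above can be sketched as follows (illustrative Python; the helper name is ours, not part of rclone or the B2 API):

```python
import os

# Illustrative helper: the value stored in
# X-Bz-Info-src_last_modified_millis is the file's modification time
# expressed as integer milliseconds since 1970-01-01 (the Unix epoch).
def last_modified_millis(path: str) -> int:
    return int(os.stat(path).st_mtime * 1000)
```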
@@ -298,7 +298,7 @@ while `--ignore-checksum` controls whether checksums are considered during the c
 if there ARE diffs.
 * Unless `--ignore-listing-checksum` is passed, bisync currently computes hashes for one path
 *even when there's no common hash with the other path*
-(for example, a [crypt](/crypt/#modified-time-and-hashes) remote.)
+(for example, a [crypt](/crypt/#modification-times-and-hashes) remote.)
 * If both paths support checksums and have a common hash,
 AND `--ignore-listing-checksum` was not specified when creating the listings,
 `--check-sync=only` can be used to compare Path1 vs. Path2 checksums (as of the time the previous listings were created.)
@@ -402,7 +402,7 @@ Alternately, a `--resync` may be used (Path1 versions will be pushed
 to Path2). Consider the situation carefully and perhaps use `--dry-run`
 before you commit to the changes.

-### Modification time
+### Modification times

 Bisync relies on file timestamps to identify changed files and will
 _refuse_ to operate if backend lacks the modification time support.
@@ -199,7 +199,7 @@ d) Delete this remote
 y/e/d> y
 ```

-### Modified time and hashes
+### Modification times and hashes

 Box allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
@@ -244,7 +244,7 @@ revert (sometimes silently) to time/size comparison if compatible hashsums
 between source and target are not found.


-### Modified time
+### Modification times

 Chunker stores modification times using the wrapped remote so support
 depends on that. For a small non-chunked file the chunker overlay simply
@@ -405,7 +405,7 @@ Example:
 `1/12/qgm4avr35m5loi1th53ato71v0`


-### Modified time and hashes
+### Modification times and hashes

 Crypt stores modification times using the underlying remote so support
 depends on that.
@@ -361,10 +361,14 @@ large folder (10600 directories, 39000 files):
 - without `--fast-list`: 22:05 min
 - with `--fast-list`: 58s

-### Modified time
+### Modification times and hashes

 Google drive stores modification times accurate to 1 ms.

+Hash algorithms MD5, SHA1 and SHA256 are supported. Note, however,
+that a small fraction of files uploaded may not have SHA1 or SHA256
+hashes especially if they were uploaded before 2018.
+
 ### Restricted filename characters

 Only Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8),
@@ -1528,9 +1532,10 @@ Waiting a moderate period of time between attempts (estimated to be
 approximately 1 hour) and/or not using --fast-list both seem to be
 effective in preventing the problem.

-### Hashes
+### SHA1 or SHA256 hashes may be missing

-We need to say that all files have MD5 hashes, but a small fraction of files uploaded may not have SHA1 or SHA256 hashes especially if they were uploaded before 2018.
+All files have MD5 hashes, but a small fraction of files uploaded may
+not have SHA1 or SHA256 hashes especially if they were uploaded before 2018.

 ## Making your own client_id

@@ -97,7 +97,7 @@ You can then use team folders like this `remote:/TeamFolder` and
 A leading `/` for a Dropbox personal account will do nothing, but it
 will take an extra HTTP transaction so it should be avoided.

-### Modified time and Hashes
+### Modification times and hashes

 Dropbox supports modified times, but the only way to set a
 modification time is to re-upload the file.
@@ -76,11 +76,11 @@ To copy a local directory to a 1Fichier directory called backup

     rclone copy /home/source remote:backup

-### Modified time and hashes ###
+### Modification times and hashes

 1Fichier does not support modification times. It supports the Whirlpool hash algorithm.

-### Duplicated files ###
+### Duplicated files

 1Fichier can have two files with exactly the same name and path (unlike a
 normal file system).
@@ -101,7 +101,7 @@ To copy a local directory to an Enterprise File Fabric directory called backup

     rclone copy /home/source remote:backup

-### Modified time and hashes
+### Modification times and hashes

 The Enterprise File Fabric allows modification times to be set on
 files accurate to 1 second. These will be used to detect whether
@@ -486,7 +486,7 @@ at present.

 The `ftp_proxy` environment variable is not currently supported.

-#### Modified time
+### Modification times

 File modification time (timestamps) is supported to 1 second resolution
 for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server.
@@ -247,7 +247,7 @@ Eg `--header-upload "Content-Type text/potato"`
 Note that the last of these is for setting custom metadata in the form
 `--header-upload "x-goog-meta-key: value"`

-### Modification time
+### Modification times

 Google Cloud Storage stores md5sum natively.
 Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time
@@ -428,7 +428,7 @@ if you uploaded an image to `upload` then uploaded the same image to
 what it was uploaded with initially, not what you uploaded it with to
 `album`. In practise this shouldn't cause too many problems.

-### Modified time
+### Modification times

 The date shown of media in Google Photos is the creation date as
 determined by the EXIF information, or the upload date if that is not
@@ -126,7 +126,7 @@ username = root
 You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use volumes, so all data
 uploaded will be lost.)

-### Modified time
+### Modification times

 Time accurate to 1 second is stored.

@@ -123,7 +123,7 @@ Using

 the process is very similar to the process of initial setup exemplified before.

-### Modified time and hashes
+### Modification times and hashes

 HiDrive allows modification times to be set on objects accurate to 1 second.

@@ -105,7 +105,7 @@ Sync the remote `directory` to `/home/local/directory`, deleting any excess file

 This remote is read only - you can't upload files to an HTTP server.

-### Modified time
+### Modification times

 Most HTTP servers store time accurate to 1 second.

@@ -245,7 +245,7 @@ Note also that with rclone version 1.58 and newer, information about
 [MIME types](/overview/#mime-type) and metadata item [utime](#metadata)
 are not available when using `--fast-list`.

-### Modified time and hashes
+### Modification times and hashes

 Jottacloud allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
@@ -19,10 +19,10 @@ For consistencies sake one can also configure a remote of type
 rclone remote paths, e.g. `remote:path/to/wherever`, but it is probably
 easier not to.

-### Modified time ###
+### Modification times

-Rclone reads and writes the modified time using an accuracy determined by
-the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second
+Rclone reads and writes the modification times using an accuracy determined
+by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second
 on OS X.

 ### Filenames ###
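The OS-dependent precision discussed in the hunk above is directly visible through `st_mtime_ns`, which reports the modification time in integer nanoseconds (illustrative sketch; actual precision depends on the filesystem):

```python
import os
import tempfile

# Illustrative sketch: set a nanosecond-resolution mtime and read it
# back. Filesystems with coarser timestamps will truncate the value.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
target_ns = 1_000_000_000_123_456_789
os.utime(path, ns=(target_ns, target_ns))
print(os.stat(path).st_mtime_ns)
os.remove(path)
```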
@@ -123,17 +123,15 @@ excess files in the path.

     rclone sync --interactive /home/local/directory remote:directory

-### Modified time
+### Modification times and hashes

 Files support a modification time attribute with up to 1 second precision.
 Directories do not have a modification time, which is shown as "Jan 1 1970".

-### Hash checksums
-
-Hash sums use a custom Mail.ru algorithm based on SHA1.
+File hashes are supported, with a custom Mail.ru algorithm based on SHA1.
 If file size is less than or equal to the SHA1 block size (20 bytes),
 its hash is simply its data right-padded with zero bytes.
-Hash sum of a larger file is computed as a SHA1 sum of the file data
+Hashes of a larger file is computed as a SHA1 of the file data
 bytes concatenated with a decimal representation of the data length.

 ### Emptying Trash
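Read literally, the Mail.ru hashing scheme described in the hunk above can be sketched like this (an illustration of the documented description, not rclone's actual mrhash implementation; the function name is ours):

```python
import hashlib

def mailru_style_hash(data: bytes) -> str:
    """Sketch of the documented scheme: data of 20 bytes or fewer is
    the hash itself, right-padded with zero bytes; larger data hashes
    to SHA1 of the bytes followed by the decimal data length."""
    if len(data) <= 20:
        return data.ljust(20, b"\x00").hex()
    return hashlib.sha1(data + str(len(data)).encode("ascii")).hexdigest()
```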
@@ -82,7 +82,7 @@ To copy a local directory to an Mega directory called backup

     rclone copy /home/source remote:backup

-### Modified time and hashes
+### Modification times and hashes

 Mega does not support modification times or hashes yet.

@@ -54,7 +54,7 @@ testing or with an rclone server or rclone mount, e.g.
     rclone serve webdav :memory:
     rclone serve sftp :memory:

-### Modified time and hashes
+### Modification times and hashes

 The memory backend supports MD5 hashes and modification times accurate to 1 nS.

@@ -162,7 +162,7 @@ You may try to [verify you account](https://docs.microsoft.com/en-us/azure/activ
 Note: If you have a special region, you may need a different host in step 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).


-### Modification time and hashes
+### Modification times and hashes

 OneDrive allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
@@ -64,12 +64,14 @@ To copy a local directory to an OpenDrive directory called backup

     rclone copy /home/source remote:backup

-### Modified time and MD5SUMs
+### Modification times and hashes

 OpenDrive allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
 not.

+The MD5 hash algorithm is supported.
+
 ### Restricted filename characters

 | Character | Value | Replacement |
@@ -154,6 +154,7 @@ Rclone supports the following OCI authentication provider.
 No authentication

+### User Principal

 Sample rclone config file for Authentication Provider User Principal:

 [oos]
@@ -174,6 +175,7 @@ Considerations:
 - If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.

+### Instance Principal

 An OCI compute instance can be authorized to use rclone by using it's identity and certificates as an instance principal.
 With this approach no credentials have to be stored and managed.

@@ -203,6 +205,7 @@ Considerations:
 - It is applicable for oci compute instances only. It cannot be used on external instance or resources.

+### Resource Principal

 Resource principal auth is very similar to instance principal auth but used for resources that are not
 compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
 To use resource principal ensure Rclone process is started with these environment variables set in its process.
@@ -222,6 +225,7 @@ Sample rclone configuration file for Authentication Provider Resource Principal:
 provider = resource_principal_auth

+### No authentication

 Public buckets do not require any authentication mechanism to read objects.
 Sample rclone configuration file for No authentication:

@@ -232,10 +236,9 @@ Sample rclone configuration file for No authentication:
 region = us-ashburn-1
 provider = no_auth

-## Options
-### Modified time
+### Modification times and hashes

-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
 `opc-meta-mtime` as floating point since the epoch, accurate to 1 ns.

 If the modification time needs to be updated rclone will attempt to perform a server
@@ -245,6 +248,8 @@ In the case the object is larger than 5Gb, the object will be uploaded rather th
 Note that reading this from the object takes an additional `HEAD` request as the metadata
 isn't returned in object listings.

+The MD5 hash algorithm is supported.
+
 ### Multipart uploads

 rclone supports multipart uploads with OOS which means that it can
@@ -90,7 +90,7 @@ mistake or an unsupported feature.
 ⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.

 ¹⁰ FTP supports modtimes for the major FTP servers, and also others
-if they advertised required protocol extensions. See [this](/ftp/#modified-time)
+if they advertised required protocol extensions. See [this](/ftp/#modification-times)
 for more details.

 ¹¹ Internet Archive requires option `wait_archive` to be set to a non-zero value
@@ -86,7 +86,7 @@ To copy a local directory to a pCloud directory called backup

     rclone copy /home/source remote:backup

-### Modified time and hashes ###
+### Modification times and hashes

 pCloud allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
@@ -71,6 +71,13 @@ d) Delete this remote
 y/e/d> y
 ```

+### Modification times and hashes
+
+PikPak keeps modification times on objects, and updates them when uploading objects,
+but it does not support changing only the modification time
+
+The MD5 hash algorithm is supported.
+
 {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/pikpak/pikpak.go then run make backenddocs" >}}
 ### Standard options

@@ -294,12 +301,13 @@ Result:

 {{< rem autogenerated options stop >}}

-## Limitations ##
+## Limitations

-### Hashes ###
+### Hashes may be empty

 PikPak supports MD5 hash, but sometimes given empty especially for user-uploaded files.

-### Deleted files ###
+### Deleted files still visible with trashed-only

-Deleted files will still be visible with `--pikpak-trashed-only` even after the trash emptied. This goes away after few days.
+Deleted files will still be visible with `--pikpak-trashed-only` even after the
+trash emptied. This goes away after few days.
@@ -84,7 +84,7 @@ To copy a local directory to an premiumize.me directory called backup

     rclone copy /home/source remote:backup

-### Modified time and hashes
+### Modification times and hashes

 premiumize.me does not support modification times or hashes, therefore
 syncing will default to `--size-only` checking. Note that using
@@ -95,10 +95,12 @@ To copy a local directory to an Proton Drive directory called backup

     rclone copy /home/source remote:backup

-### Modified time
+### Modification times and hashes

 Proton Drive Bridge does not support updating modification times yet.

+The SHA1 hash algorithm is supported.
+
 ### Restricted filename characters

 Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8), also left and
@@ -121,7 +121,7 @@ d) Delete this remote
 y/e/d> y
 ```

-### Modified time and hashes
+### Modification times and hashes

 Quatrix allows modification times to be set on objects accurate to 1 microsecond.
 These will be used to detect whether objects need syncing or not.
@@ -271,7 +271,9 @@ d) Delete this remote
 y/e/d>
 ```

-### Modified time
+### Modification times and hashes
+
+#### Modification times

 The modified time is stored as metadata on the object as
 `X-Amz-Meta-Mtime` as floating point since the epoch, accurate to 1 ns.
@@ -284,6 +286,29 @@ storage the object will be uploaded rather than copied.
 Note that reading this from the object takes an additional `HEAD`
 request as the metadata isn't returned in object listings.

+#### Hashes
+
+For small objects which weren't uploaded as multipart uploads (objects
+sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
+the `ETag:` header as an MD5 checksum.
+
+However for objects which were uploaded as multipart uploads or with
+server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
+longer the MD5 sum of the data, so rclone adds an additional piece of
+metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
+the same format as is required for `Content-MD5`). You can use base64 -d and hexdump to check this value manually:
+
+    echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
+
+or you can use `rclone check` to verify the hashes are OK.
+
+For large objects, calculating this hash can take some time so the
+addition of this hash can be disabled with `--s3-disable-checksum`.
+This will mean that these objects do not have an MD5 checksum.
+
+Note that reading this from the object takes an additional `HEAD`
+request as the metadata isn't returned in object listings.
+
 ### Reducing costs

 #### Avoiding HEAD requests to read the modification time
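The example `X-Amz-Meta-Md5chksum` value shown above can equally be decoded in Python to recover the hex digest that `md5sum` and `rclone md5sum` print (illustrative alternative to the `base64 -d | hexdump` pipeline):

```python
import base64

# Decode the base64 MD5 metadata value from the docs example into
# the conventional 32-character hex digest form.
print(base64.b64decode("VWTGdNx3LyXQDfA0e2Edxw==").hex())
# 5564c674dc772f25d00df0347b611dc7
```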
@@ -375,29 +400,6 @@ there for more details.

 Setting this flag increases the chance for undetected upload failures.

-### Hashes
-
-For small objects which weren't uploaded as multipart uploads (objects
-sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
-the `ETag:` header as an MD5 checksum.
-
-However for objects which were uploaded as multipart uploads or with
-server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
-longer the MD5 sum of the data, so rclone adds an additional piece of
-metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
-the same format as is required for `Content-MD5`). You can use base64 -d and hexdump to check this value manually:
-
-    echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
-
-or you can use `rclone check` to verify the hashes are OK.
-
-For large objects, calculating this hash can take some time so the
-addition of this hash can be disabled with `--s3-disable-checksum`.
-This will mean that these objects do not have an MD5 checksum.
-
-Note that reading this from the object takes an additional `HEAD`
-request as the metadata isn't returned in object listings.
-
 ### Versions

 When bucket versioning is enabled (this can be done with rclone with
@@ -660,7 +662,8 @@ According to AWS's [documentation on S3 Object Lock](https://docs.aws.amazon.com

 > If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.

-As mentioned in the [Hashes](#hashes) section, small files that are not uploaded as multipart, use a different tag, causing the upload to fail.
+As mentioned in the [Modification times and hashes](#modification-times-and-hashes) section,
+small files that are not uploaded as multipart, use a different tag, causing the upload to fail.
 A simple solution is to set the `--s3-upload-cutoff 0` and force all the files to be uploaded as multipart.

 {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
@@ -359,7 +359,7 @@ commands is prohibited. Set the configuration option `disable_hashcheck`
 to `true` to disable checksumming entirely, or set `shell_type` to `none`
 to disable all functionality based on remote shell command execution.

-### Modified time
+### Modification times and hashes

 Modified times are stored on the server to 1 second precision.

@@ -105,7 +105,7 @@ To copy a local directory to an ShareFile directory called backup

 Paths may be as deep as required, e.g. `remote:directory/subdirectory`.

-### Modified time and hashes
+### Modification times and hashes

 ShareFile allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
@@ -98,7 +98,7 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
 create a folder, which rclone will create as a "Sync Folder" with
 SugarSync.

-### Modified time and hashes
+### Modification times and hashes

 SugarSync does not support modification times or hashes, therefore
 syncing will default to `--size-only` checking. Note that using
@@ -227,7 +227,7 @@ sufficient to determine if it is "dirty". By using `--update` along with
 `--use-server-modtime`, you can avoid the extra API call and simply upload
 files whose local modtime is newer than the time it was last uploaded.

-### Modified time
+### Modification times and hashes

 The modified time is stored as metadata on the object as
 `X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
@@ -236,6 +236,8 @@ ns.
 This is a de facto standard (used in the official python-swiftclient
 amongst others) for storing the modification time for an object.

+The MD5 hash algorithm is supported.
+
 ### Restricted filename characters

 | Character | Value | Replacement |
@@ -82,7 +82,7 @@ To copy a local directory to an Uptobox directory called backup

     rclone copy /home/source remote:backup

-### Modified time and hashes
+### Modification times and hashes

 Uptobox supports neither modified times nor checksums. All timestamps
 will read as that set by `--default-time`.
@@ -101,7 +101,7 @@ To copy a local directory to an WebDAV directory called backup

     rclone copy /home/source remote:backup

-### Modified time and hashes ###
+### Modification times and hashes

 Plain WebDAV does not support modified times. However when used with
 Fastmail Files, Owncloud or Nextcloud rclone will support modified times.
@@ -87,14 +87,12 @@ excess files in the path.

 Yandex paths may be as deep as required, e.g. `remote:directory/subdirectory`.

-### Modified time
+### Modification times and hashes

 Modified times are supported and are stored accurate to 1 ns in custom
 metadata called `rclone_modified` in RFC3339 with nanoseconds format.

-### MD5 checksums
-
-MD5 checksums are natively supported by Yandex Disk.
+The MD5 hash algorithm is natively supported by Yandex Disk.

 ### Emptying Trash

@@ -107,13 +107,11 @@ excess files in the path.

 Zoho paths may be as deep as required, eg `remote:directory/subdirectory`.

-### Modified time
+### Modification times and hashes

 Modified times are currently not supported for Zoho Workdrive

-### Checksums
-
-No checksums are supported.
+No hash algorithms are supported.

 ### Usage information
