---
title: "Amazon S3"
description: "Rclone docs for Amazon S3"
---

# {{< icon "fab fa-amazon" >}} Amazon S3 Storage Providers

The S3 backend can be used with a number of different providers:

{{< provider_list >}}
{{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#configuration" start="true" >}}
{{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
{{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
{{< provider name="Cloudflare R2" home="https://blog.cloudflare.com/r2-open-beta/" config="/s3/#cloudflare-r2" >}}
{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
{{< provider name="Huawei OBS" home="https://www.huaweicloud.com/intl/en-us/product/obs.html" config="/s3/#huawei-obs" >}}
{{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
{{< provider name="IDrive e2" home="https://www.idrive.com/e2/" config="/s3/#idrive-e2" >}}
{{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}}
{{< provider name="RackCorp Object Storage" home="https://www.rackcorp.com/" config="/s3/#RackCorp" >}}
{{< provider name="Scaleway" home="https://www.scaleway.com/en/object-storage/" config="/s3/#scaleway" >}}
{{< provider name="Seagate Lyve Cloud" home="https://www.seagate.com/gb/en/services/cloud/storage/" config="/s3/#lyve" >}}
{{< provider name="SeaweedFS" home="https://github.com/chrislusf/seaweedfs/" config="/s3/#seaweedfs" >}}
{{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}}
{{< provider name="Storj" home="https://storj.io/" config="/s3/#storj" >}}
{{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" end="true" >}}
{{< /provider_list >}}

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command). You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.

Once you have made a remote (see the provider specific section above)
you can use it like this:

See all buckets

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync -i /home/local/directory remote:bucket

## Configuration

Here is an example of making an s3 configuration for the AWS S3 provider.
Most of this also applies to the other providers; any differences are described [below](#providers).

First run

    rclone config

This will guide you through an interactive setup process.

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, ArvanCloud, Dreamhost, IBM COS, Minio, and Tencent COS
   \ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Ceph Object Storage
   \ "Ceph"
 3 / Digital Ocean Spaces
   \ "DigitalOcean"
 4 / Dreamhost DreamObjects
   \ "Dreamhost"
 5 / IBM COS S3
   \ "IBMCOS"
 6 / Minio Object Storage
   \ "Minio"
 7 / Wasabi Object Storage
   \ "Wasabi"
 8 / Any other S3 compatible provider
   \ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YYY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia, or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US East (Ohio) Region
 2 | Needs location constraint us-east-2.
   \ "us-east-2"
   / US West (Oregon) Region
 3 | Needs location constraint us-west-2.
   \ "us-west-2"
   / US West (Northern California) Region
 4 | Needs location constraint us-west-1.
   \ "us-west-1"
   / Canada (Central) Region
 5 | Needs location constraint ca-central-1.
   \ "ca-central-1"
   / EU (Ireland) Region
 6 | Needs location constraint EU or eu-west-1.
   \ "eu-west-1"
   / EU (London) Region
 7 | Needs location constraint eu-west-2.
   \ "eu-west-2"
   / EU (Frankfurt) Region
 8 | Needs location constraint eu-central-1.
   \ "eu-central-1"
   / Asia Pacific (Singapore) Region
 9 | Needs location constraint ap-southeast-1.
   \ "ap-southeast-1"
   / Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
   \ "ap-southeast-2"
   / Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / Asia Pacific (Hong Kong) Region
14 | Needs location constraint ap-east-1.
   \ "ap-east-1"
   / South America (Sao Paulo) Region
15 | Needs location constraint sa-east-1.
   \ "sa-east-1"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
   \ ""
 2 / US East (Ohio) Region.
   \ "us-east-2"
 3 / US West (Oregon) Region.
   \ "us-west-2"
 4 / US West (Northern California) Region.
   \ "us-west-1"
 5 / Canada (Central) Region.
   \ "ca-central-1"
 6 / EU (Ireland) Region.
   \ "eu-west-1"
 7 / EU (London) Region.
   \ "eu-west-2"
 8 / EU Region.
   \ "EU"
 9 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
10 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
11 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
12 / Asia Pacific (Seoul)
   \ "ap-northeast-2"
13 / Asia Pacific (Mumbai)
   \ "ap-south-1"
14 / Asia Pacific (Hong Kong)
   \ "ap-east-1"
15 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
 6 / Glacier storage class
   \ "GLACIER"
 7 / Glacier Deep Archive storage class
   \ "DEEP_ARCHIVE"
 8 / Intelligent-Tiering storage class
   \ "INTELLIGENT_TIERING"
 9 / Glacier Instant Retrieval storage class
   \ "GLACIER_IR"
storage_class> 1
Remote config
--------------------
[remote]
type = s3
provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
```
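
Alternatively, recent versions of rclone can create the same remote
non-interactively with `rclone config create`, passing the answers as
key=value pairs - a minimal sketch using the values from the session
above:

    rclone config create remote s3 provider=AWS access_key_id=XXX secret_access_key=YYY region=us-east-1 acl=private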

### Modified time

The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as floating point since the epoch, accurate to 1 ns.

If the modification time needs to be updated rclone will attempt to perform a server
side copy to update the modification time if the object can be copied in a single part.
In the case the object is larger than 5 GiB or is in Glacier or Glacier Deep Archive
storage the object will be uploaded rather than copied.

Note that reading this from the object takes an additional `HEAD`
request as the metadata isn't returned in object listings.
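
For example, `rclone lsl` (which shows modification times) has to issue
one `HEAD` request per object, whereas `rclone ls` does not:

    rclone ls remote:bucket    # sizes only, no HEAD requests
    rclone lsl remote:bucket   # sizes and modification times, one HEAD per object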

### Reducing costs

#### Avoiding HEAD requests to read the modification time

By default, rclone will use the modification time of objects stored in
S3 for syncing. This is stored in object metadata which unfortunately
takes an extra HEAD request to read which can be expensive (in time
and money).

The modification time is used by default for all operations that
require checking the time a file was last updated. It allows rclone to
treat the remote more like a true filesystem, but it is inefficient on
S3 because it requires an extra API call to retrieve the metadata.

The extra API calls can be avoided when syncing (using `rclone sync`
or `rclone copy`) in a few different ways, each with its own
tradeoffs.

- `--size-only`
    - Only checks the size of files.
    - Uses no extra transactions.
    - If the file doesn't change size then rclone won't detect it has
      changed.
    - `rclone sync --size-only /path/to/source s3:bucket`
- `--checksum`
    - Checks the size and MD5 checksum of files.
    - Uses no extra transactions.
    - The most accurate detection of changes possible.
    - Will cause the source to read an MD5 checksum which, if it is a
      local disk, will cause lots of disk activity.
    - If the source and destination are both S3 this is the
      **recommended** flag to use for maximum efficiency.
    - `rclone sync --checksum /path/to/source s3:bucket`
- `--update --use-server-modtime`
    - Uses no extra transactions.
    - Modification time becomes the time the object was uploaded.
    - For many operations this is sufficient to determine if it needs
      uploading.
    - Using `--update` along with `--use-server-modtime`, avoids the
      extra API call and uploads files whose local modification time
      is newer than the time it was last uploaded.
    - Files created with timestamps in the past will be missed by the sync.
    - `rclone sync --update --use-server-modtime /path/to/source s3:bucket`

These flags can and should be used in combination with `--fast-list` -
see below.

If using `rclone mount` or any command using the VFS (e.g. `rclone
serve`) then you might want to consider using the VFS flag
`--no-modtime` which will stop rclone reading the modification time
for every object. You could also use `--use-server-modtime` if you are
happy with the modification times of the objects being the time of
upload.

#### Avoiding GET requests to read directory listings

Rclone's default directory traversal is to process each directory
individually. This takes one API call per directory. Using the
`--fast-list` flag will read all info about the objects into
memory first using a smaller number of API calls (one per 1000
objects). See the [rclone docs](/docs/#fast-list) for more details.

    rclone sync --fast-list --checksum /path/to/source s3:bucket

`--fast-list` trades off API transactions for memory use. As a rough
guide rclone uses 1k of memory per object stored, so using
`--fast-list` on a sync of a million objects will use roughly 1 GiB of
RAM.

If you are only copying a small number of files into a big repository
then using `--no-traverse` is a good idea. This finds objects directly
instead of through directory listings. You can do a "top-up" sync very
cheaply by using `--max-age` and `--no-traverse` to copy only recent
files, e.g.

    rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket

You'd then do a full `rclone sync` less often.

Note that `--fast-list` isn't required in the top-up sync.

#### Avoiding HEAD requests after PUT

By default, rclone will HEAD every object it uploads. It does this to
check the object got uploaded correctly.

You can disable this with the [--s3-no-head](#s3-no-head) option - see
there for more details.

Setting this flag increases the chance of undetected upload failures.
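
For example, a sketch of an upload which skips the verification `HEAD`
(leaving change detection to the size or checksum check on the next
run):

    rclone copy --s3-no-head /path/to/source s3:bucket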

### Hashes

For small objects which weren't uploaded as multipart uploads (objects
sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
the `ETag:` header as an MD5 checksum.

However for objects which were uploaded as multipart uploads or with
server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
longer the MD5 sum of the data, so rclone adds an additional piece of
metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
the same format as is required for `Content-MD5`).

For large objects, calculating this hash can take some time so the
addition of this hash can be disabled with `--s3-disable-checksum`.
This will mean that these objects do not have an MD5 checksum.

Note that reading this from the object takes an additional `HEAD`
request as the metadata isn't returned in object listings.
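
You can list the MD5 checksums rclone knows about with `rclone
hashsum` (or its `md5sum` alias), e.g.:

    rclone hashsum MD5 remote:bucket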

### Versions

When bucket versioning is enabled (this can be done with rclone with
the [`rclone backend versioning`](#versioning) command) when rclone
uploads a new version of a file it creates a
[new version of it](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html).
Likewise when you delete a file, the old version will be marked hidden
and still be available.

Old versions of files, where available, are visible using the
`--s3-versions` flag.

If you wish to remove all the old versions then you can use the
[`rclone backend cleanup-hidden remote:bucket`](#cleanup-hidden)
command which will delete all the old hidden versions of files,
leaving the current ones intact. You can also supply a path and only
old versions under that path will be deleted, e.g.
`rclone backend cleanup-hidden remote:bucket/path/to/stuff`.

When you `purge` a bucket, the current and the old versions will be
deleted then the bucket will be deleted.

However `delete` will cause the current versions of the files to
become hidden old versions.

Here is a session showing the listing and retrieval of an old
version followed by a `cleanup` of the old versions.

Show current version and all the versions with `--s3-versions` flag.

```
$ rclone -q ls s3:cleanup-test
        9 one.txt

$ rclone -q --s3-versions ls s3:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt
```

Retrieve an old version

```
$ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp

$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
```

Clean up all the old versions and show that they've gone.

```
$ rclone -q backend cleanup-hidden s3:cleanup-test

$ rclone -q ls s3:cleanup-test
        9 one.txt

$ rclone -q --s3-versions ls s3:cleanup-test
        9 one.txt
```

### Cleanup

If you run `rclone cleanup s3:bucket` then it will remove all pending
multipart uploads older than 24 hours. You can use the `-i` flag to
see exactly what it will do. If you want more control over the expiry
date then run `rclone backend cleanup s3:bucket -o max-age=1h` to
expire all uploads older than one hour. You can use `rclone backend
list-multipart-uploads s3:bucket` to see the pending multipart
uploads.
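
For example, to inspect the pending multipart uploads and then expire
anything older than an hour:

    rclone backend list-multipart-uploads s3:bucket
    rclone backend cleanup s3:bucket -o max-age=1h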

### Restricted filename characters

S3 allows any valid UTF-8 string as a key.

Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8), as
they can't be used in XML.

The following characters are replaced since these are problematic when
dealing with the REST API:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |

The encoding will also encode these file names as they don't seem to
work with the SDK properly:

| File name | Replacement |
| --------- |:-----------:|
| .         | ．          |
| ..        | ．．        |

### Multipart uploads

rclone supports multipart uploads with S3 which means that it can
upload files bigger than 5 GiB.

Note that files uploaded *both* with multipart upload *and* through
crypt remotes do not have MD5 sums.

rclone switches from single part uploads to multipart uploads at the
point specified by `--s3-upload-cutoff`. This can be a maximum of 5 GiB
and a minimum of 0 (i.e. always upload files as multipart).

The chunk sizes used in the multipart upload are specified by
`--s3-chunk-size` and the number of chunks uploaded concurrently is
specified by `--s3-upload-concurrency`.

Multipart uploads will use `--transfers` * `--s3-upload-concurrency` *
`--s3-chunk-size` extra memory. Single part uploads do not use extra
memory.

Single part transfers can be faster than multipart transfers or slower
depending on your latency from S3 - the more latency, the more likely
single part transfers will be faster.

Increasing `--s3-upload-concurrency` will increase throughput (8 would
be a sensible value) and increasing `--s3-chunk-size` also increases
throughput (16M would be sensible). Increasing either of these will
use more memory. The default values are high enough to gain most of
the possible performance without using too much memory.
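
As a sketch, a copy tuned with those values might look like this. With
the default `--transfers 4` the extra memory used is roughly
`--transfers` * `--s3-upload-concurrency` * `--s3-chunk-size`,
i.e. 4 * 8 * 16M = 512 MiB:

    rclone copy --s3-upload-concurrency 8 --s3-chunk-size 16M /path/to/source s3:bucket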

### Buckets and Regions

With Amazon S3 you can list buckets (`rclone lsd`) using any region,
but you can only access the content of a bucket from the region it was
created in. If you attempt to access a bucket from the wrong region,
you will get an error, `incorrect region, the bucket is not in 'XXX'
region`.

### Authentication

There are a number of ways to supply `rclone` with a set of AWS
credentials, with and without using the environment.

The different authentication methods are tried in this order:

- Directly in the rclone configuration file (`env_auth = false` in the config file):
    - `access_key_id` and `secret_access_key` are required.
    - `session_token` can be optionally set when using AWS STS.
- Runtime configuration (`env_auth = true` in the config file):
    - Export the following environment variables before running `rclone`:
        - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
        - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
        - Session Token: `AWS_SESSION_TOKEN` (optional)
    - Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html):
        - Profile files are standard files used by AWS CLI tools
        - By default it will use the profile file in your home directory (e.g. `~/.aws/credentials` on unix based systems) and the "default" profile; to change this, set these environment variables:
            - `AWS_SHARED_CREDENTIALS_FILE` to control which file.
            - `AWS_PROFILE` to control which profile to use.
    - Or, run `rclone` in an ECS task with an IAM role (AWS only).
    - Or, run `rclone` on an EC2 instance with an IAM role (AWS only).
    - Or, run `rclone` in an EKS pod with an IAM role that is associated with a service account (AWS only).

If none of these options ends up providing `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see below).
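
For example, a minimal environment-based setup (assuming the remote was
created with `env_auth = true`) might look like this:

    export AWS_ACCESS_KEY_ID=XXX
    export AWS_SECRET_ACCESS_KEY=YYY
    rclone lsd remote: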

### S3 Permissions

When using the `sync` subcommand of `rclone` the following minimum
permissions are required to be available on the bucket being written to:

* `ListBucket`
* `DeleteObject`
* `GetObject`
* `PutObject`
* `PutObjectACL`

When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required.

Example policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
            },
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*",
                "arn:aws:s3:::BUCKET_NAME"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
```

Notes on above:

1. This is a policy that can be used when creating a bucket. It assumes
   that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
   the bucket and the other implies the bucket's objects.

For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.

### Key Management System (KMS)

If you are using server-side encryption with KMS then you must make
sure rclone is configured with `server_side_encryption = aws:kms`
otherwise you will find you can't transfer small objects - these will
create checksum errors.
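
A minimal config sketch for such a remote might look like this (you can
additionally set `sse_kms_key_id` if you want a specific KMS key rather
than the default one):

    [remote]
    type = s3
    provider = AWS
    env_auth = true
    server_side_encryption = aws:kms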

### Glacier and Glacier Deep Archive

You can upload objects using the glacier storage class or transition them to glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
The bucket can still be synced or copied into normally, but if rclone
tries to access data from the glacier storage class you will see an error like below.

    2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
the object(s) in question before using rclone.

Note that rclone only speaks the S3 API, it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.
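
Recent versions of rclone can issue the restore request themselves via
the s3 backend `restore` command - a sketch, assuming a standard
priority retrieval kept available for one day:

    rclone backend restore s3:bucket/path/to/file -o priority=Standard -o lifetime=1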

### Object-lock enabled S3 bucket

According to AWS's [documentation on S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-permission):

> If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.

As mentioned in the [Hashes](#hashes) section, small files that are not uploaded as multipart use a different tag, causing the upload to fail.
A simple solution is to set `--s3-upload-cutoff 0` and force all the files to be uploaded as multipart.

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard options

Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).

#### --s3-provider

Choose your S3 provider.

Properties:

- Config: provider
- Env Var: RCLONE_S3_PROVIDER
- Type: string
- Required: false
- Examples:
    - "AWS"
        - Amazon Web Services (AWS) S3
    - "Alibaba"
        - Alibaba Cloud Object Storage System (OSS) formerly Aliyun
    - "Ceph"
        - Ceph Object Storage
    - "ChinaMobile"
        - China Mobile Ecloud Elastic Object Storage (EOS)
    - "Cloudflare"
        - Cloudflare R2 Storage
    - "ArvanCloud"
        - Arvan Cloud Object Storage (AOS)
    - "DigitalOcean"
        - Digital Ocean Spaces
    - "Dreamhost"
        - Dreamhost DreamObjects
    - "HuaweiOBS"
        - Huawei Object Storage Service
    - "IBMCOS"
        - IBM COS S3
    - "IDrive"
        - IDrive e2
    - "LyveCloud"
        - Seagate Lyve Cloud
    - "Minio"
        - Minio Object Storage
    - "Netease"
        - Netease Object Storage (NOS)
    - "RackCorp"
        - RackCorp Object Storage
    - "Scaleway"
        - Scaleway Object Storage
    - "SeaweedFS"
        - SeaweedFS S3
    - "StackPath"
        - StackPath Object Storage
    - "Storj"
        - Storj (S3 Compatible Gateway)
    - "TencentCOS"
        - Tencent Cloud Object Storage (COS)
    - "Wasabi"
        - Wasabi Object Storage
    - "Other"
        - Any other S3 compatible provider

#### --s3-env-auth

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).

Only applies if access_key_id and secret_access_key are blank.

Properties:

- Config: env_auth
- Env Var: RCLONE_S3_ENV_AUTH
- Type: bool
- Default: false
- Examples:
    - "false"
        - Enter AWS credentials in the next step.
    - "true"
        - Get AWS credentials from the environment (env vars or IAM).

#### --s3-access-key-id

AWS Access Key ID.

Leave blank for anonymous access or runtime credentials.

Properties:

- Config: access_key_id
- Env Var: RCLONE_S3_ACCESS_KEY_ID
- Type: string
- Required: false

#### --s3-secret-access-key

AWS Secret Access Key (password).

Leave blank for anonymous access or runtime credentials.

Properties:

- Config: secret_access_key
- Env Var: RCLONE_S3_SECRET_ACCESS_KEY
- Type: string
- Required: false

#### --s3-region

Region to connect to.

Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: AWS
- Type: string
- Required: false
- Examples:
    - "us-east-1"
        - The default endpoint - a good choice if you are unsure.
        - US Region, Northern Virginia, or Pacific Northwest.
        - Leave location constraint empty.
    - "us-east-2"
        - US East (Ohio) Region.
        - Needs location constraint us-east-2.
    - "us-west-1"
        - US West (Northern California) Region.
        - Needs location constraint us-west-1.
    - "us-west-2"
        - US West (Oregon) Region.
        - Needs location constraint us-west-2.
    - "ca-central-1"
        - Canada (Central) Region.
        - Needs location constraint ca-central-1.
    - "eu-west-1"
        - EU (Ireland) Region.
        - Needs location constraint EU or eu-west-1.
    - "eu-west-2"
        - EU (London) Region.
        - Needs location constraint eu-west-2.
    - "eu-west-3"
        - EU (Paris) Region.
        - Needs location constraint eu-west-3.
    - "eu-north-1"
        - EU (Stockholm) Region.
        - Needs location constraint eu-north-1.
    - "eu-south-1"
        - EU (Milan) Region.
        - Needs location constraint eu-south-1.
    - "eu-central-1"
        - EU (Frankfurt) Region.
        - Needs location constraint eu-central-1.
    - "ap-southeast-1"
        - Asia Pacific (Singapore) Region.
        - Needs location constraint ap-southeast-1.
    - "ap-southeast-2"
        - Asia Pacific (Sydney) Region.
        - Needs location constraint ap-southeast-2.
    - "ap-northeast-1"
        - Asia Pacific (Tokyo) Region.
        - Needs location constraint ap-northeast-1.
    - "ap-northeast-2"
        - Asia Pacific (Seoul).
        - Needs location constraint ap-northeast-2.
    - "ap-northeast-3"
        - Asia Pacific (Osaka-Local).
        - Needs location constraint ap-northeast-3.
    - "ap-south-1"
        - Asia Pacific (Mumbai).
        - Needs location constraint ap-south-1.
    - "ap-east-1"
        - Asia Pacific (Hong Kong) Region.
        - Needs location constraint ap-east-1.
    - "sa-east-1"
        - South America (Sao Paulo) Region.
        - Needs location constraint sa-east-1.
    - "me-south-1"
        - Middle East (Bahrain) Region.
        - Needs location constraint me-south-1.
    - "af-south-1"
        - Africa (Cape Town) Region.
        - Needs location constraint af-south-1.
    - "cn-north-1"
        - China (Beijing) Region.
        - Needs location constraint cn-north-1.
    - "cn-northwest-1"
        - China (Ningxia) Region.
        - Needs location constraint cn-northwest-1.
    - "us-gov-east-1"
        - AWS GovCloud (US-East) Region.
        - Needs location constraint us-gov-east-1.
    - "us-gov-west-1"
        - AWS GovCloud (US) Region.
        - Needs location constraint us-gov-west-1.

#### --s3-region

Region - the location where your bucket will be created and your data stored.

Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: RackCorp
- Type: string
- Required: false
- Examples:
    - "global"
        - Global CDN (All locations) Region
    - "au"
        - Australia (All states)
    - "au-nsw"
        - NSW (Australia) Region
    - "au-qld"
        - QLD (Australia) Region
    - "au-vic"
        - VIC (Australia) Region
    - "au-wa"
        - Perth (Australia) Region
    - "ph"
        - Manila (Philippines) Region
    - "th"
        - Bangkok (Thailand) Region
    - "hk"
        - HK (Hong Kong) Region
    - "mn"
        - Ulaanbaatar (Mongolia) Region
    - "kg"
        - Bishkek (Kyrgyzstan) Region
    - "id"
        - Jakarta (Indonesia) Region
    - "jp"
        - Tokyo (Japan) Region
    - "sg"
        - SG (Singapore) Region
    - "de"
        - Frankfurt (Germany) Region
    - "us"
        - USA (AnyCast) Region
    - "us-east-1"
        - New York (USA) Region
    - "us-west-1"
        - Fremont (USA) Region
    - "nz"
        - Auckland (New Zealand) Region

#### --s3-region

Region to connect to.

Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: Scaleway
- Type: string
- Required: false
- Examples:
    - "nl-ams"
        - Amsterdam, The Netherlands
    - "fr-par"
        - Paris, France
    - "pl-waw"
        - Warsaw, Poland

#### --s3-region

Region to connect to - the location where your bucket will be created and your data stored. Needs to be the same as your endpoint.

Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: HuaweiOBS
- Type: string
- Required: false
- Examples:
    - "af-south-1"
        - AF-Johannesburg
    - "ap-southeast-2"
        - AP-Bangkok
    - "ap-southeast-3"
        - AP-Singapore
    - "cn-east-3"
        - CN East-Shanghai1
    - "cn-east-2"
        - CN East-Shanghai2
    - "cn-north-1"
        - CN North-Beijing1
    - "cn-north-4"
        - CN North-Beijing4
    - "cn-south-1"
        - CN South-Guangzhou
    - "ap-southeast-1"
        - CN-Hong Kong
    - "sa-argentina-1"
        - LA-Buenos Aires1
    - "sa-peru-1"
        - LA-Lima1
    - "na-mexico-1"
        - LA-Mexico City1
    - "sa-chile-1"
        - LA-Santiago2
    - "sa-brazil-1"
        - LA-Sao Paulo1
    - "ru-northwest-2"
        - RU-Moscow2

#### --s3-region

Region to connect to.

Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: Cloudflare
- Type: string
- Required: false
- Examples:
    - "auto"
        - R2 buckets are automatically distributed across Cloudflare's data centers for low latency.

#### --s3-region

Region to connect to.

Leave blank if you are using an S3 clone and you don't have a region.

Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
- Type: string
- Required: false
- Examples:
    - ""
        - Use this if unsure.
        - Will use v4 signatures and an empty region.
    - "other-v2-signature"
        - Use this only if v4 signatures don't work.
        - E.g. pre Jewel/v10 CEPH.

#### --s3-endpoint

Endpoint for S3 API.

Leave blank if using AWS to use the default endpoint for the region.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: AWS
- Type: string
- Required: false

#### --s3-endpoint

Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: ChinaMobile
- Type: string
- Required: false
- Examples:
    - "eos-wuxi-1.cmecloud.cn"
        - The default endpoint - a good choice if you are unsure.
        - East China (Suzhou)
    - "eos-jinan-1.cmecloud.cn"
        - East China (Jinan)
    - "eos-ningbo-1.cmecloud.cn"
        - East China (Hangzhou)
    - "eos-shanghai-1.cmecloud.cn"
        - East China (Shanghai-1)
    - "eos-zhengzhou-1.cmecloud.cn"
        - Central China (Zhengzhou)
    - "eos-hunan-1.cmecloud.cn"
        - Central China (Changsha-1)
    - "eos-zhuzhou-1.cmecloud.cn"
        - Central China (Changsha-2)
    - "eos-guangzhou-1.cmecloud.cn"
        - South China (Guangzhou-2)
    - "eos-dongguan-1.cmecloud.cn"
        - South China (Guangzhou-3)
    - "eos-beijing-1.cmecloud.cn"
        - North China (Beijing-1)
    - "eos-beijing-2.cmecloud.cn"
        - North China (Beijing-2)
    - "eos-beijing-4.cmecloud.cn"
        - North China (Beijing-3)
    - "eos-huhehaote-1.cmecloud.cn"
        - North China (Huhehaote)
    - "eos-chengdu-1.cmecloud.cn"
        - Southwest China (Chengdu)
    - "eos-chongqing-1.cmecloud.cn"
        - Southwest China (Chongqing)
    - "eos-guiyang-1.cmecloud.cn"
        - Southwest China (Guiyang)
    - "eos-xian-1.cmecloud.cn"
        - Northwest China (Xian)
    - "eos-yunnan.cmecloud.cn"
        - Yunnan China (Kunming)
    - "eos-yunnan-2.cmecloud.cn"
        - Yunnan China (Kunming-2)
    - "eos-tianjin-1.cmecloud.cn"
        - Tianjin China (Tianjin)
    - "eos-jilin-1.cmecloud.cn"
        - Jilin China (Changchun)
    - "eos-hubei-1.cmecloud.cn"
        - Hubei China (Xiangyan)
    - "eos-jiangxi-1.cmecloud.cn"
        - Jiangxi China (Nanchang)
    - "eos-gansu-1.cmecloud.cn"
        - Gansu China (Lanzhou)
    - "eos-shanxi-1.cmecloud.cn"
        - Shanxi China (Taiyuan)
    - "eos-liaoning-1.cmecloud.cn"
        - Liaoning China (Shenyang)
    - "eos-hebei-1.cmecloud.cn"
        - Hebei China (Shijiazhuang)
    - "eos-fujian-1.cmecloud.cn"
        - Fujian China (Xiamen)
    - "eos-guangxi-1.cmecloud.cn"
        - Guangxi China (Nanning)
    - "eos-anhui-1.cmecloud.cn"
        - Anhui China (Huainan)

#### --s3-endpoint

Endpoint for Arvan Cloud Object Storage (AOS) API.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: ArvanCloud
- Type: string
- Required: false
- Examples:
    - "s3.ir-thr-at1.arvanstorage.com"
        - The default endpoint - a good choice if you are unsure.
        - Tehran Iran (Asiatech)
    - "s3.ir-tbz-sh1.arvanstorage.com"
        - Tabriz Iran (Shahriar)

#### --s3-endpoint

Endpoint for IBM COS S3 API.

Specify if using an IBM COS On Premise.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: IBMCOS
- Type: string
- Required: false
- Examples:
    - "s3.us.cloud-object-storage.appdomain.cloud"
        - US Cross Region Endpoint
    - "s3.dal.us.cloud-object-storage.appdomain.cloud"
        - US Cross Region Dallas Endpoint
    - "s3.wdc.us.cloud-object-storage.appdomain.cloud"
        - US Cross Region Washington DC Endpoint
    - "s3.sjc.us.cloud-object-storage.appdomain.cloud"
        - US Cross Region San Jose Endpoint
    - "s3.private.us.cloud-object-storage.appdomain.cloud"
        - US Cross Region Private Endpoint
    - "s3.private.dal.us.cloud-object-storage.appdomain.cloud"
        - US Cross Region Dallas Private Endpoint
    - "s3.private.wdc.us.cloud-object-storage.appdomain.cloud"
        - US Cross Region Washington DC Private Endpoint
    - "s3.private.sjc.us.cloud-object-storage.appdomain.cloud"
        - US Cross Region San Jose Private Endpoint
    - "s3.us-east.cloud-object-storage.appdomain.cloud"
        - US Region East Endpoint
    - "s3.private.us-east.cloud-object-storage.appdomain.cloud"
        - US Region East Private Endpoint
    - "s3.us-south.cloud-object-storage.appdomain.cloud"
        - US Region South Endpoint
    - "s3.private.us-south.cloud-object-storage.appdomain.cloud"
        - US Region South Private Endpoint
    - "s3.eu.cloud-object-storage.appdomain.cloud"
        - EU Cross Region Endpoint
    - "s3.fra.eu.cloud-object-storage.appdomain.cloud"
        - EU Cross Region Frankfurt Endpoint
    - "s3.mil.eu.cloud-object-storage.appdomain.cloud"
        - EU Cross Region Milan Endpoint
    - "s3.ams.eu.cloud-object-storage.appdomain.cloud"
        - EU Cross Region Amsterdam Endpoint
    - "s3.private.eu.cloud-object-storage.appdomain.cloud"
        - EU Cross Region Private Endpoint
    - "s3.private.fra.eu.cloud-object-storage.appdomain.cloud"
        - EU Cross Region Frankfurt Private Endpoint
    - "s3.private.mil.eu.cloud-object-storage.appdomain.cloud"
        - EU Cross Region Milan Private Endpoint
    - "s3.private.ams.eu.cloud-object-storage.appdomain.cloud"
        - EU Cross Region Amsterdam Private Endpoint
    - "s3.eu-gb.cloud-object-storage.appdomain.cloud"
        - Great Britain Endpoint
    - "s3.private.eu-gb.cloud-object-storage.appdomain.cloud"
        - Great Britain Private Endpoint
    - "s3.eu-de.cloud-object-storage.appdomain.cloud"
        - EU Region DE Endpoint
    - "s3.private.eu-de.cloud-object-storage.appdomain.cloud"
        - EU Region DE Private Endpoint
    - "s3.ap.cloud-object-storage.appdomain.cloud"
        - APAC Cross Regional Endpoint
    - "s3.tok.ap.cloud-object-storage.appdomain.cloud"
        - APAC Cross Regional Tokyo Endpoint
    - "s3.hkg.ap.cloud-object-storage.appdomain.cloud"
        - APAC Cross Regional HongKong Endpoint
    - "s3.seo.ap.cloud-object-storage.appdomain.cloud"
        - APAC Cross Regional Seoul Endpoint
    - "s3.private.ap.cloud-object-storage.appdomain.cloud"
        - APAC Cross Regional Private Endpoint
    - "s3.private.tok.ap.cloud-object-storage.appdomain.cloud"
        - APAC Cross Regional Tokyo Private Endpoint
    - "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud"
        - APAC Cross Regional HongKong Private Endpoint
    - "s3.private.seo.ap.cloud-object-storage.appdomain.cloud"
        - APAC Cross Regional Seoul Private Endpoint
    - "s3.jp-tok.cloud-object-storage.appdomain.cloud"
        - APAC Region Japan Endpoint
    - "s3.private.jp-tok.cloud-object-storage.appdomain.cloud"
        - APAC Region Japan Private Endpoint
    - "s3.au-syd.cloud-object-storage.appdomain.cloud"
        - APAC Region Australia Endpoint
    - "s3.private.au-syd.cloud-object-storage.appdomain.cloud"
        - APAC Region Australia Private Endpoint
    - "s3.ams03.cloud-object-storage.appdomain.cloud"
        - Amsterdam Single Site Endpoint
    - "s3.private.ams03.cloud-object-storage.appdomain.cloud"
        - Amsterdam Single Site Private Endpoint
    - "s3.che01.cloud-object-storage.appdomain.cloud"
        - Chennai Single Site Endpoint
    - "s3.private.che01.cloud-object-storage.appdomain.cloud"
        - Chennai Single Site Private Endpoint
    - "s3.mel01.cloud-object-storage.appdomain.cloud"
        - Melbourne Single Site Endpoint
    - "s3.private.mel01.cloud-object-storage.appdomain.cloud"
        - Melbourne Single Site Private Endpoint
    - "s3.osl01.cloud-object-storage.appdomain.cloud"
        - Oslo Single Site Endpoint
    - "s3.private.osl01.cloud-object-storage.appdomain.cloud"
        - Oslo Single Site Private Endpoint
    - "s3.tor01.cloud-object-storage.appdomain.cloud"
        - Toronto Single Site Endpoint
    - "s3.private.tor01.cloud-object-storage.appdomain.cloud"
        - Toronto Single Site Private Endpoint
    - "s3.seo01.cloud-object-storage.appdomain.cloud"
        - Seoul Single Site Endpoint
    - "s3.private.seo01.cloud-object-storage.appdomain.cloud"
        - Seoul Single Site Private Endpoint
    - "s3.mon01.cloud-object-storage.appdomain.cloud"
        - Montreal Single Site Endpoint
    - "s3.private.mon01.cloud-object-storage.appdomain.cloud"
        - Montreal Single Site Private Endpoint
    - "s3.mex01.cloud-object-storage.appdomain.cloud"
        - Mexico Single Site Endpoint
    - "s3.private.mex01.cloud-object-storage.appdomain.cloud"
        - Mexico Single Site Private Endpoint
    - "s3.sjc04.cloud-object-storage.appdomain.cloud"
        - San Jose Single Site Endpoint
    - "s3.private.sjc04.cloud-object-storage.appdomain.cloud"
        - San Jose Single Site Private Endpoint
    - "s3.mil01.cloud-object-storage.appdomain.cloud"
        - Milan Single Site Endpoint
    - "s3.private.mil01.cloud-object-storage.appdomain.cloud"
        - Milan Single Site Private Endpoint
    - "s3.hkg02.cloud-object-storage.appdomain.cloud"
        - Hong Kong Single Site Endpoint
    - "s3.private.hkg02.cloud-object-storage.appdomain.cloud"
        - Hong Kong Single Site Private Endpoint
    - "s3.par01.cloud-object-storage.appdomain.cloud"
        - Paris Single Site Endpoint
    - "s3.private.par01.cloud-object-storage.appdomain.cloud"
        - Paris Single Site Private Endpoint
    - "s3.sng01.cloud-object-storage.appdomain.cloud"
        - Singapore Single Site Endpoint
    - "s3.private.sng01.cloud-object-storage.appdomain.cloud"
        - Singapore Single Site Private Endpoint

#### --s3-endpoint

Endpoint for OSS API.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: Alibaba
- Type: string
- Required: false
- Examples:
    - "oss-accelerate.aliyuncs.com"
        - Global Accelerate
    - "oss-accelerate-overseas.aliyuncs.com"
        - Global Accelerate (outside mainland China)
    - "oss-cn-hangzhou.aliyuncs.com"
        - East China 1 (Hangzhou)
    - "oss-cn-shanghai.aliyuncs.com"
        - East China 2 (Shanghai)
    - "oss-cn-qingdao.aliyuncs.com"
        - North China 1 (Qingdao)
    - "oss-cn-beijing.aliyuncs.com"
        - North China 2 (Beijing)
    - "oss-cn-zhangjiakou.aliyuncs.com"
        - North China 3 (Zhangjiakou)
    - "oss-cn-huhehaote.aliyuncs.com"
        - North China 5 (Hohhot)
    - "oss-cn-wulanchabu.aliyuncs.com"
        - North China 6 (Ulanqab)
    - "oss-cn-shenzhen.aliyuncs.com"
        - South China 1 (Shenzhen)
    - "oss-cn-heyuan.aliyuncs.com"
        - South China 2 (Heyuan)
    - "oss-cn-guangzhou.aliyuncs.com"
        - South China 3 (Guangzhou)
    - "oss-cn-chengdu.aliyuncs.com"
        - West China 1 (Chengdu)
    - "oss-cn-hongkong.aliyuncs.com"
        - Hong Kong (Hong Kong)
    - "oss-us-west-1.aliyuncs.com"
        - US West 1 (Silicon Valley)
    - "oss-us-east-1.aliyuncs.com"
        - US East 1 (Virginia)
    - "oss-ap-southeast-1.aliyuncs.com"
        - Southeast Asia Southeast 1 (Singapore)
    - "oss-ap-southeast-2.aliyuncs.com"
        - Asia Pacific Southeast 2 (Sydney)
    - "oss-ap-southeast-3.aliyuncs.com"
        - Southeast Asia Southeast 3 (Kuala Lumpur)
    - "oss-ap-southeast-5.aliyuncs.com"
        - Asia Pacific Southeast 5 (Jakarta)
    - "oss-ap-northeast-1.aliyuncs.com"
        - Asia Pacific Northeast 1 (Japan)
    - "oss-ap-south-1.aliyuncs.com"
        - Asia Pacific South 1 (Mumbai)
    - "oss-eu-central-1.aliyuncs.com"
        - Central Europe 1 (Frankfurt)
    - "oss-eu-west-1.aliyuncs.com"
        - West Europe (London)
    - "oss-me-east-1.aliyuncs.com"
        - Middle East 1 (Dubai)

#### --s3-endpoint

Endpoint for OBS API.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: HuaweiOBS
- Type: string
- Required: false
- Examples:
    - "obs.af-south-1.myhuaweicloud.com"
        - AF-Johannesburg
    - "obs.ap-southeast-2.myhuaweicloud.com"
        - AP-Bangkok
    - "obs.ap-southeast-3.myhuaweicloud.com"
        - AP-Singapore
    - "obs.cn-east-3.myhuaweicloud.com"
        - CN East-Shanghai1
    - "obs.cn-east-2.myhuaweicloud.com"
        - CN East-Shanghai2
    - "obs.cn-north-1.myhuaweicloud.com"
        - CN North-Beijing1
    - "obs.cn-north-4.myhuaweicloud.com"
        - CN North-Beijing4
    - "obs.cn-south-1.myhuaweicloud.com"
        - CN South-Guangzhou
    - "obs.ap-southeast-1.myhuaweicloud.com"
        - CN-Hong Kong
    - "obs.sa-argentina-1.myhuaweicloud.com"
        - LA-Buenos Aires1
    - "obs.sa-peru-1.myhuaweicloud.com"
        - LA-Lima1
    - "obs.na-mexico-1.myhuaweicloud.com"
        - LA-Mexico City1
    - "obs.sa-chile-1.myhuaweicloud.com"
        - LA-Santiago2
    - "obs.sa-brazil-1.myhuaweicloud.com"
        - LA-Sao Paulo1
    - "obs.ru-northwest-2.myhuaweicloud.com"
        - RU-Moscow2

#### --s3-endpoint

Endpoint for Scaleway Object Storage.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: Scaleway
- Type: string
- Required: false
- Examples:
    - "s3.nl-ams.scw.cloud"
        - Amsterdam Endpoint
    - "s3.fr-par.scw.cloud"
        - Paris Endpoint
    - "s3.pl-waw.scw.cloud"
        - Warsaw Endpoint

#### --s3-endpoint

Endpoint for StackPath Object Storage.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: StackPath
- Type: string
- Required: false
- Examples:
    - "s3.us-east-2.stackpathstorage.com"
        - US East Endpoint
    - "s3.us-west-1.stackpathstorage.com"
        - US West Endpoint
    - "s3.eu-central-1.stackpathstorage.com"
        - EU Endpoint

#### --s3-endpoint

Endpoint of the Shared Gateway.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: Storj
- Type: string
- Required: false
- Examples:
    - "gateway.eu1.storjshare.io"
        - EU1 Shared Gateway
    - "gateway.us1.storjshare.io"
        - US1 Shared Gateway
    - "gateway.ap1.storjshare.io"
        - Asia-Pacific Shared Gateway

#### --s3-endpoint

Endpoint for Tencent COS API.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: TencentCOS
- Type: string
- Required: false
- Examples:
    - "cos.ap-beijing.myqcloud.com"
        - Beijing Region
    - "cos.ap-nanjing.myqcloud.com"
        - Nanjing Region
    - "cos.ap-shanghai.myqcloud.com"
        - Shanghai Region
    - "cos.ap-guangzhou.myqcloud.com"
        - Guangzhou Region
    - "cos.ap-chengdu.myqcloud.com"
        - Chengdu Region
    - "cos.ap-chongqing.myqcloud.com"
        - Chongqing Region
    - "cos.ap-hongkong.myqcloud.com"
        - Hong Kong (China) Region
    - "cos.ap-singapore.myqcloud.com"
        - Singapore Region
    - "cos.ap-mumbai.myqcloud.com"
        - Mumbai Region
    - "cos.ap-seoul.myqcloud.com"
        - Seoul Region
    - "cos.ap-bangkok.myqcloud.com"
        - Bangkok Region
    - "cos.ap-tokyo.myqcloud.com"
        - Tokyo Region
    - "cos.na-siliconvalley.myqcloud.com"
        - Silicon Valley Region
    - "cos.na-ashburn.myqcloud.com"
        - Virginia Region
    - "cos.na-toronto.myqcloud.com"
        - Toronto Region
    - "cos.eu-frankfurt.myqcloud.com"
        - Frankfurt Region
    - "cos.eu-moscow.myqcloud.com"
        - Moscow Region
    - "cos.accelerate.myqcloud.com"
        - Use Tencent COS Accelerate Endpoint

#### --s3-endpoint

Endpoint for RackCorp Object Storage.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: RackCorp
- Type: string
- Required: false
- Examples:
    - "s3.rackcorp.com"
        - Global (AnyCast) Endpoint
    - "au.s3.rackcorp.com"
        - Australia (Anycast) Endpoint
    - "au-nsw.s3.rackcorp.com"
        - Sydney (Australia) Endpoint
    - "au-qld.s3.rackcorp.com"
        - Brisbane (Australia) Endpoint
    - "au-vic.s3.rackcorp.com"
        - Melbourne (Australia) Endpoint
    - "au-wa.s3.rackcorp.com"
        - Perth (Australia) Endpoint
    - "ph.s3.rackcorp.com"
        - Manila (Philippines) Endpoint
    - "th.s3.rackcorp.com"
        - Bangkok (Thailand) Endpoint
    - "hk.s3.rackcorp.com"
        - HK (Hong Kong) Endpoint
    - "mn.s3.rackcorp.com"
        - Ulaanbaatar (Mongolia) Endpoint
    - "kg.s3.rackcorp.com"
        - Bishkek (Kyrgyzstan) Endpoint
    - "id.s3.rackcorp.com"
        - Jakarta (Indonesia) Endpoint
    - "jp.s3.rackcorp.com"
        - Tokyo (Japan) Endpoint
    - "sg.s3.rackcorp.com"
        - SG (Singapore) Endpoint
    - "de.s3.rackcorp.com"
        - Frankfurt (Germany) Endpoint
    - "us.s3.rackcorp.com"
        - USA (AnyCast) Endpoint
    - "us-east-1.s3.rackcorp.com"
        - New York (USA) Endpoint
    - "us-west-1.s3.rackcorp.com"
        - Fremont (USA) Endpoint
    - "nz.s3.rackcorp.com"
        - Auckland (New Zealand) Endpoint

#### --s3-endpoint

Endpoint for S3 API.

Required when using an S3 clone.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: !AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp
- Type: string
- Required: false
- Examples:
    - "objects-us-east-1.dream.io"
        - Dream Objects endpoint
    - "nyc3.digitaloceanspaces.com"
        - Digital Ocean Spaces New York 3
    - "ams3.digitaloceanspaces.com"
        - Digital Ocean Spaces Amsterdam 3
    - "sgp1.digitaloceanspaces.com"
        - Digital Ocean Spaces Singapore 1
    - "localhost:8333"
        - SeaweedFS S3 localhost
    - "s3.us-east-1.lyvecloud.seagate.com"
        - Seagate Lyve Cloud US East 1 (Virginia)
    - "s3.us-west-1.lyvecloud.seagate.com"
        - Seagate Lyve Cloud US West 1 (California)
    - "s3.ap-southeast-1.lyvecloud.seagate.com"
        - Seagate Lyve Cloud AP Southeast 1 (Singapore)
    - "s3.wasabisys.com"
        - Wasabi US East endpoint
    - "s3.us-west-1.wasabisys.com"
        - Wasabi US West endpoint
    - "s3.eu-central-1.wasabisys.com"
        - Wasabi EU Central endpoint
    - "s3.ap-northeast-1.wasabisys.com"
        - Wasabi AP Northeast 1 (Tokyo) endpoint
    - "s3.ap-northeast-2.wasabisys.com"
        - Wasabi AP Northeast 2 (Osaka) endpoint
    - "s3.ir-thr-at1.arvanstorage.com"
        - ArvanCloud Tehran Iran (Asiatech) endpoint

#### --s3-location-constraint

Location constraint - must be set to match the Region.

Used when creating buckets only.

Properties:

- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: AWS
- Type: string
- Required: false
- Examples:
    - ""
        - Empty for US Region, Northern Virginia, or Pacific Northwest
    - "us-east-2"
        - US East (Ohio) Region
    - "us-west-1"
        - US West (Northern California) Region
    - "us-west-2"
        - US West (Oregon) Region
    - "ca-central-1"
        - Canada (Central) Region
    - "eu-west-1"
        - EU (Ireland) Region
    - "eu-west-2"
        - EU (London) Region
    - "eu-west-3"
        - EU (Paris) Region
    - "eu-north-1"
        - EU (Stockholm) Region
    - "eu-south-1"
        - EU (Milan) Region
    - "EU"
        - EU Region
    - "ap-southeast-1"
        - Asia Pacific (Singapore) Region
    - "ap-southeast-2"
        - Asia Pacific (Sydney) Region
    - "ap-northeast-1"
        - Asia Pacific (Tokyo) Region
    - "ap-northeast-2"
        - Asia Pacific (Seoul) Region
    - "ap-northeast-3"
        - Asia Pacific (Osaka-Local) Region
    - "ap-south-1"
        - Asia Pacific (Mumbai) Region
    - "ap-east-1"
        - Asia Pacific (Hong Kong) Region
    - "sa-east-1"
        - South America (Sao Paulo) Region
    - "me-south-1"
        - Middle East (Bahrain) Region
    - "af-south-1"
        - Africa (Cape Town) Region
    - "cn-north-1"
        - China (Beijing) Region
    - "cn-northwest-1"
        - China (Ningxia) Region
    - "us-gov-east-1"
        - AWS GovCloud (US-East) Region
    - "us-gov-west-1"
        - AWS GovCloud (US) Region

#### --s3-location-constraint

Location constraint - must match endpoint.

Used when creating buckets only.

Properties:

- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: ChinaMobile
- Type: string
- Required: false
- Examples:
    - "wuxi1"
        - East China (Suzhou)
    - "jinan1"
        - East China (Jinan)
    - "ningbo1"
        - East China (Hangzhou)
    - "shanghai1"
        - East China (Shanghai-1)
    - "zhengzhou1"
        - Central China (Zhengzhou)
    - "hunan1"
        - Central China (Changsha-1)
    - "zhuzhou1"
        - Central China (Changsha-2)
    - "guangzhou1"
        - South China (Guangzhou-2)
    - "dongguan1"
        - South China (Guangzhou-3)
    - "beijing1"
        - North China (Beijing-1)
    - "beijing2"
        - North China (Beijing-2)
    - "beijing4"
        - North China (Beijing-3)
    - "huhehaote1"
        - North China (Huhehaote)
    - "chengdu1"
        - Southwest China (Chengdu)
    - "chongqing1"
        - Southwest China (Chongqing)
    - "guiyang1"
        - Southwest China (Guiyang)
    - "xian1"
        - Northwest China (Xian)
    - "yunnan"
        - Yunnan China (Kunming)
    - "yunnan2"
        - Yunnan China (Kunming-2)
    - "tianjin1"
        - Tianjin China (Tianjin)
    - "jilin1"
        - Jilin China (Changchun)
    - "hubei1"
        - Hubei China (Xiangyan)
    - "jiangxi1"
        - Jiangxi China (Nanchang)
    - "gansu1"
        - Gansu China (Lanzhou)
    - "shanxi1"
        - Shanxi China (Taiyuan)
    - "liaoning1"
        - Liaoning China (Shenyang)
    - "hebei1"
        - Hebei China (Shijiazhuang)
    - "fujian1"
        - Fujian China (Xiamen)
    - "guangxi1"
        - Guangxi China (Nanning)
    - "anhui1"
        - Anhui China (Huainan)
||
|
||
#### --s3-location-constraint

Location constraint - must match endpoint.

Used when creating buckets only.

Properties:

- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: ArvanCloud
- Type: string
- Required: false
- Examples:
    - "ir-thr-at1"
        - Tehran Iran (Asiatech)
    - "ir-tbz-sh1"
        - Tabriz Iran (Shahriar)

#### --s3-location-constraint

Location constraint - must match endpoint when using IBM Cloud Public.

For on-prem COS, do not make a selection from this list, hit enter.

Properties:

- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: IBMCOS
- Type: string
- Required: false
- Examples:
    - "us-standard"
        - US Cross Region Standard
    - "us-vault"
        - US Cross Region Vault
    - "us-cold"
        - US Cross Region Cold
    - "us-flex"
        - US Cross Region Flex
    - "us-east-standard"
        - US East Region Standard
    - "us-east-vault"
        - US East Region Vault
    - "us-east-cold"
        - US East Region Cold
    - "us-east-flex"
        - US East Region Flex
    - "us-south-standard"
        - US South Region Standard
    - "us-south-vault"
        - US South Region Vault
    - "us-south-cold"
        - US South Region Cold
    - "us-south-flex"
        - US South Region Flex
    - "eu-standard"
        - EU Cross Region Standard
    - "eu-vault"
        - EU Cross Region Vault
    - "eu-cold"
        - EU Cross Region Cold
    - "eu-flex"
        - EU Cross Region Flex
    - "eu-gb-standard"
        - Great Britain Standard
    - "eu-gb-vault"
        - Great Britain Vault
    - "eu-gb-cold"
        - Great Britain Cold
    - "eu-gb-flex"
        - Great Britain Flex
    - "ap-standard"
        - APAC Standard
    - "ap-vault"
        - APAC Vault
    - "ap-cold"
        - APAC Cold
    - "ap-flex"
        - APAC Flex
    - "mel01-standard"
        - Melbourne Standard
    - "mel01-vault"
        - Melbourne Vault
    - "mel01-cold"
        - Melbourne Cold
    - "mel01-flex"
        - Melbourne Flex
    - "tor01-standard"
        - Toronto Standard
    - "tor01-vault"
        - Toronto Vault
    - "tor01-cold"
        - Toronto Cold
    - "tor01-flex"
        - Toronto Flex

#### --s3-location-constraint

Location constraint - the location where your bucket will be located and your data stored.

Properties:

- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: RackCorp
- Type: string
- Required: false
- Examples:
    - "global"
        - Global CDN Region
    - "au"
        - Australia (All locations)
    - "au-nsw"
        - NSW (Australia) Region
    - "au-qld"
        - QLD (Australia) Region
    - "au-vic"
        - VIC (Australia) Region
    - "au-wa"
        - Perth (Australia) Region
    - "ph"
        - Manila (Philippines) Region
    - "th"
        - Bangkok (Thailand) Region
    - "hk"
        - HK (Hong Kong) Region
    - "mn"
        - Ulaanbaatar (Mongolia) Region
    - "kg"
        - Bishkek (Kyrgyzstan) Region
    - "id"
        - Jakarta (Indonesia) Region
    - "jp"
        - Tokyo (Japan) Region
    - "sg"
        - SG (Singapore) Region
    - "de"
        - Frankfurt (Germany) Region
    - "us"
        - USA (AnyCast) Region
    - "us-east-1"
        - New York (USA) Region
    - "us-west-1"
        - Fremont (USA) Region
    - "nz"
        - Auckland (New Zealand) Region

#### --s3-location-constraint

Location constraint - must be set to match the Region.

Leave blank if not sure. Used when creating buckets only.

Properties:

- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: !AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS
- Type: string
- Required: false

#### --s3-acl

Canned ACL used when creating buckets and storing or copying objects.

This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.

Properties:

- Config: acl
- Env Var: RCLONE_S3_ACL
- Provider: !Storj,Cloudflare
- Type: string
- Required: false
- Examples:
    - "default"
        - Owner gets FULL_CONTROL.
        - No one else has access rights (default).
    - "private"
        - Owner gets FULL_CONTROL.
        - No one else has access rights (default).
    - "public-read"
        - Owner gets FULL_CONTROL.
        - The AllUsers group gets READ access.
    - "public-read-write"
        - Owner gets FULL_CONTROL.
        - The AllUsers group gets READ and WRITE access.
        - Granting this on a bucket is generally not recommended.
    - "authenticated-read"
        - Owner gets FULL_CONTROL.
        - The AuthenticatedUsers group gets READ access.
    - "bucket-owner-read"
        - Object owner gets FULL_CONTROL.
        - Bucket owner gets READ access.
        - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
    - "bucket-owner-full-control"
        - Both the object owner and the bucket owner get FULL_CONTROL over the object.
        - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
    - "private"
        - Owner gets FULL_CONTROL.
        - No one else has access rights (default).
        - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS.
    - "public-read"
        - Owner gets FULL_CONTROL.
        - The AllUsers group gets READ access.
        - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS.
    - "public-read-write"
        - Owner gets FULL_CONTROL.
        - The AllUsers group gets READ and WRITE access.
        - This acl is available on IBM Cloud (Infra), On-Premise IBM COS.
    - "authenticated-read"
        - Owner gets FULL_CONTROL.
        - The AuthenticatedUsers group gets READ access.
        - Not supported on Buckets.
        - This acl is available on IBM Cloud (Infra) and On-Premise IBM COS.

#### --s3-server-side-encryption

The server-side encryption algorithm used when storing this object in S3.

Properties:

- Config: server_side_encryption
- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
    - ""
        - None
    - "AES256"
        - AES256
    - "aws:kms"
        - aws:kms

#### --s3-sse-kms-key-id

If using KMS ID you must provide the ARN of the key.

Properties:

- Config: sse_kms_key_id
- Env Var: RCLONE_S3_SSE_KMS_KEY_ID
- Provider: AWS,Ceph,Minio
- Type: string
- Required: false
- Examples:
    - ""
        - None
    - "arn:aws:kms:us-east-1:*"
        - arn:aws:kms:*

#### --s3-storage-class

The storage class to use when storing new objects in S3.

Properties:

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Provider: AWS
- Type: string
- Required: false
- Examples:
    - ""
        - Default
    - "STANDARD"
        - Standard storage class
    - "REDUCED_REDUNDANCY"
        - Reduced redundancy storage class
    - "STANDARD_IA"
        - Standard Infrequent Access storage class
    - "ONEZONE_IA"
        - One Zone Infrequent Access storage class
    - "GLACIER"
        - Glacier storage class
    - "DEEP_ARCHIVE"
        - Glacier Deep Archive storage class
    - "INTELLIGENT_TIERING"
        - Intelligent-Tiering storage class
    - "GLACIER_IR"
        - Glacier Instant Retrieval storage class

#### --s3-storage-class

The storage class to use when storing new objects in OSS.

Properties:

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Provider: Alibaba
- Type: string
- Required: false
- Examples:
    - ""
        - Default
    - "STANDARD"
        - Standard storage class
    - "GLACIER"
        - Archive storage mode
    - "STANDARD_IA"
        - Infrequent access storage mode

#### --s3-storage-class

The storage class to use when storing new objects in ChinaMobile.

Properties:

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Provider: ChinaMobile
- Type: string
- Required: false
- Examples:
    - ""
        - Default
    - "STANDARD"
        - Standard storage class
    - "GLACIER"
        - Archive storage mode
    - "STANDARD_IA"
        - Infrequent access storage mode

#### --s3-storage-class

The storage class to use when storing new objects in ArvanCloud.

Properties:

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Provider: ArvanCloud
- Type: string
- Required: false
- Examples:
    - "STANDARD"
        - Standard storage class

#### --s3-storage-class

The storage class to use when storing new objects in Tencent COS.

Properties:

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Provider: TencentCOS
- Type: string
- Required: false
- Examples:
    - ""
        - Default
    - "STANDARD"
        - Standard storage class
    - "ARCHIVE"
        - Archive storage mode
    - "STANDARD_IA"
        - Infrequent access storage mode

#### --s3-storage-class

The storage class to use when storing new objects in S3.

Properties:

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Provider: Scaleway
- Type: string
- Required: false
- Examples:
    - ""
        - Default.
    - "STANDARD"
        - The Standard class for any upload.
        - Suitable for on-demand content like streaming or CDN.
    - "GLACIER"
        - Archived storage.
        - Prices are lower, but it needs to be restored first to be accessed.

### Advanced options

Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).

#### --s3-bucket-acl

Canned ACL used when creating buckets.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied only when creating buckets. If it
isn't set then "acl" is used instead.

Properties:

- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
- Type: string
- Required: false
- Examples:
    - "private"
        - Owner gets FULL_CONTROL.
        - No one else has access rights (default).
    - "public-read"
        - Owner gets FULL_CONTROL.
        - The AllUsers group gets READ access.
    - "public-read-write"
        - Owner gets FULL_CONTROL.
        - The AllUsers group gets READ and WRITE access.
        - Granting this on a bucket is generally not recommended.
    - "authenticated-read"
        - Owner gets FULL_CONTROL.
        - The AuthenticatedUsers group gets READ access.

#### --s3-requester-pays

Enables requester pays option when interacting with S3 bucket.

Properties:

- Config: requester_pays
- Env Var: RCLONE_S3_REQUESTER_PAYS
- Provider: AWS
- Type: bool
- Default: false

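For example, to list the contents of a requester pays bucket someone else owns (a sketch; the bucket name is illustrative, and the request and transfer costs are billed to you):

    rclone ls --s3-requester-pays remote:requester-pays-bucket
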
#### --s3-sse-customer-algorithm

If using SSE-C, the server-side encryption algorithm used when storing this object in S3.

Properties:

- Config: sse_customer_algorithm
- Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
    - ""
        - None
    - "AES256"
        - AES256

#### --s3-sse-customer-key

If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.

Properties:

- Config: sse_customer_key
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
    - ""
        - None

#### --s3-sse-customer-key-md5

If using SSE-C you may provide the secret encryption key MD5 checksum (optional).

If you leave it blank, this is calculated automatically from the sse_customer_key provided.

Properties:

- Config: sse_customer_key_md5
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
    - ""
        - None

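Should you want to supply the checksum yourself, SSE-C defines it as the base64-encoded MD5 digest of the raw key bytes. A minimal sketch, assuming your key is the literal 32-byte string shown and that `openssl` is installed:

    # Compute base64(MD5(key)) for a hypothetical 32-byte SSE-C key
    printf '%s' "abcdefghijklmnopqrstuvwxyz012345" | openssl dgst -md5 -binary | openssl base64
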
#### --s3-upload-cutoff

Cutoff for switching to chunked upload.

Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5 GiB.

Properties:

- Config: upload_cutoff
- Env Var: RCLONE_S3_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200Mi

#### --s3-chunk-size

Chunk size to use for uploading.

When uploading files larger than upload_cutoff or files with unknown
size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
photos or google docs) they will be uploaded as multipart uploads
using this chunk size.

Note that "--s3-upload-concurrency" chunks of this size are buffered
in memory per transfer.

If you are transferring large files over high-speed links and you have
enough memory, then increasing this will speed up the transfers.

Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000 chunks limit.

Files of unknown size are uploaded with the configured
chunk_size. Since the default chunk size is 5 MiB and there can be at
most 10,000 chunks, this means that by default the maximum size of
a file you can stream upload is 48 GiB. If you wish to stream upload
larger files then you will need to increase chunk_size.

Increasing the chunk size decreases the accuracy of the progress
statistics displayed with the "-P" flag. Rclone treats a chunk as sent
when it has been buffered by the AWS SDK, when in fact it may still be
uploading. A bigger chunk size means a bigger AWS SDK buffer and
progress reporting that deviates further from the truth.

Properties:

- Config: chunk_size
- Env Var: RCLONE_S3_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5Mi

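To make the 10,000 chunk arithmetic concrete, here is a sketch of streaming an upload of unknown size that may exceed the default 48 GiB ceiling (the `tar` pipeline is a stand-in for whatever produces your data):

    # 64 MiB chunks x 10,000 parts allows streams up to about 625 GiB,
    # versus roughly 48 GiB at the default 5 MiB chunk size.
    tar -C /data -cf - . | rclone rcat --s3-chunk-size 64M remote:bucket/backup.tar
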
#### --s3-max-upload-parts

Maximum number of parts in a multipart upload.

This option defines the maximum number of multipart chunks to use
when doing a multipart upload.

This can be useful if a service does not support the AWS S3
specification of 10,000 chunks.

Rclone will automatically increase the chunk size when uploading a
large file of a known size to stay below this number of chunks limit.

Properties:

- Config: max_upload_parts
- Env Var: RCLONE_S3_MAX_UPLOAD_PARTS
- Type: int
- Default: 10000

#### --s3-copy-cutoff

Cutoff for switching to multipart copy.

Any files larger than this that need to be server-side copied will be
copied in chunks of this size.

The minimum is 0 and the maximum is 5 GiB.

Properties:

- Config: copy_cutoff
- Env Var: RCLONE_S3_COPY_CUTOFF
- Type: SizeSuffix
- Default: 4.656Gi

#### --s3-disable-checksum

Don't store MD5 checksum with object metadata.

Normally rclone will calculate the MD5 checksum of the input before
uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.

Properties:

- Config: disable_checksum
- Env Var: RCLONE_S3_DISABLE_CHECKSUM
- Type: bool
- Default: false

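For instance, to start uploading very large files immediately, at the cost of not storing their MD5 in the object metadata (a sketch; the paths are illustrative):

    rclone copy --s3-disable-checksum /path/to/huge-files remote:bucket
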
#### --s3-shared-credentials-file

Path to the shared credentials file.

If env_auth = true then rclone can use a shared credentials file.

If this variable is empty rclone will look for the
"AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty
it will default to the current user's home directory.

    Linux/OSX: "$HOME/.aws/credentials"
    Windows:   "%USERPROFILE%\.aws\credentials"

Properties:

- Config: shared_credentials_file
- Env Var: RCLONE_S3_SHARED_CREDENTIALS_FILE
- Type: string
- Required: false

#### --s3-profile

Profile to use in the shared credentials file.

If env_auth = true then rclone can use a shared credentials file. This
variable controls which profile is used in that file.

If empty it will default to the environment variable "AWS_PROFILE" or
"default" if that environment variable is also not set.

Properties:

- Config: profile
- Env Var: RCLONE_S3_PROFILE
- Type: string
- Required: false

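As an illustration, assuming the shared credentials file contains a profile named "work" (both the profile and the remote name here are hypothetical):

    rclone lsd remote: --s3-env-auth --s3-profile work
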
#### --s3-session-token

An AWS session token.

Properties:

- Config: session_token
- Env Var: RCLONE_S3_SESSION_TOKEN
- Type: string
- Required: false

#### --s3-upload-concurrency

Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded
concurrently.

If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.

Properties:

- Config: upload_concurrency
- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
- Type: int
- Default: 4

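For example, to push a few large files harder over a fast link (a sketch; remember that roughly upload_concurrency x chunk_size of memory is buffered per transfer, so this buffers about 8 x 16 MiB = 128 MiB for each file):

    rclone copy --s3-upload-concurrency 8 --s3-chunk-size 16M /path/to/bigfiles remote:bucket
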
#### --s3-force-path-style

If true use path style access, if false use virtual hosted style.

If this is true (the default) then rclone will use path style access,
if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.

Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this set to
false - rclone will do this automatically based on the provider
setting.

Properties:

- Config: force_path_style
- Env Var: RCLONE_S3_FORCE_PATH_STYLE
- Type: bool
- Default: true

#### --s3-v2-auth

If true use v2 authentication.

If this is false (the default) then rclone will use v4 authentication.
If it is set then rclone will use v2 authentication.

Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.

Properties:

- Config: v2_auth
- Env Var: RCLONE_S3_V2_AUTH
- Type: bool
- Default: false

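For example, when talking to a pre-Jewel Ceph cluster that only understands v2 signatures (a sketch; the remote name is illustrative):

    rclone lsd --s3-v2-auth oldceph:
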
#### --s3-use-accelerate-endpoint

If true use the AWS S3 accelerated endpoint.

See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)

Properties:

- Config: use_accelerate_endpoint
- Env Var: RCLONE_S3_USE_ACCELERATE_ENDPOINT
- Provider: AWS
- Type: bool
- Default: false

#### --s3-leave-parts-on-error

If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.

It should be set to true for resuming uploads across different sessions.

WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.

Properties:

- Config: leave_parts_on_error
- Env Var: RCLONE_S3_LEAVE_PARTS_ON_ERROR
- Provider: AWS
- Type: bool
- Default: false

#### --s3-list-chunk

Size of listing chunk (response list for each ListObject S3 request).

This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification.
Most services truncate the response list to 1000 objects even if more than that is requested.
In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html).
In Ceph, this can be increased with the "rgw list buckets max chunk" option.

Properties:

- Config: list_chunk
- Env Var: RCLONE_S3_LIST_CHUNK
- Type: int
- Default: 1000

#### --s3-list-version

Version of ListObjects to use: 1,2 or 0 for auto.

When S3 originally launched it only provided the ListObjects call to
enumerate objects in a bucket.

However in May 2016 the ListObjectsV2 call was introduced. This is
much higher performance and should be used if at all possible.

If set to the default, 0, rclone will guess according to the provider
set which list objects method to call. If it guesses wrong, then it
may be set manually here.

Properties:

- Config: list_version
- Env Var: RCLONE_S3_LIST_VERSION
- Type: int
- Default: 0

#### --s3-list-url-encode

Whether to url encode listings: true/false/unset

Some providers support URL encoding listings and where this is
available this is more reliable when using control characters in file
names. If this is set to unset (the default) then rclone will choose
according to the provider setting what to apply, but you can override
rclone's choice here.

Properties:

- Config: list_url_encode
- Env Var: RCLONE_S3_LIST_URL_ENCODE
- Type: Tristate
- Default: unset

#### --s3-no-check-bucket

If set, don't attempt to check the bucket exists or create it.

This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.

It can also be needed if the user you are using does not have bucket
creation permissions. Before v1.52.0 this would have passed silently
due to a bug.

Properties:

- Config: no_check_bucket
- Env Var: RCLONE_S3_NO_CHECK_BUCKET
- Type: bool
- Default: false

#### --s3-no-head

If set, don't HEAD uploaded objects to check integrity.

This can be useful when trying to minimise the number of transactions
rclone does.

Setting it means that if rclone receives a 200 OK message after
uploading an object with PUT then it will assume that it got uploaded
properly.

In particular it will assume:

- the metadata, including modtime, storage class and content type was as uploaded
- the size was as uploaded

It reads the following items from the response for a single part PUT:

- the MD5SUM
- The uploaded date

For multipart uploads these items aren't read.

If a source object of unknown length is uploaded then rclone **will** do a
HEAD request.

Setting this flag increases the chance for undetected upload failures,
in particular an incorrect size, so it isn't recommended for normal
operation. In practice the chance of an undetected upload failure is
very small even with this flag.

Properties:

- Config: no_head
- Env Var: RCLONE_S3_NO_HEAD
- Type: bool
- Default: false

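A sketch of combining this with `--s3-no-check-bucket` to minimise transactions when copying into a bucket you know exists, bearing in mind the integrity trade-off described above:

    rclone copy --s3-no-check-bucket --s3-no-head /path/to/files remote:existing-bucket
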
#### --s3-no-head-object

If set, do not do HEAD before GET when getting objects.

Properties:

- Config: no_head_object
- Env Var: RCLONE_S3_NO_HEAD_OBJECT
- Type: bool
- Default: false

#### --s3-encoding

The encoding for the backend.

See the [encoding section in the overview](/overview/#encoding) for more info.

Properties:

- Config: encoding
- Env Var: RCLONE_S3_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8,Dot

#### --s3-memory-pool-flush-time

How often internal memory buffer pools will be flushed.

Uploads which require additional buffers (e.g. multipart) will use the
memory pool for allocations. This option controls how often unused
buffers will be removed from the pool.

Properties:

- Config: memory_pool_flush_time
- Env Var: RCLONE_S3_MEMORY_POOL_FLUSH_TIME
- Type: Duration
- Default: 1m0s

#### --s3-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool.

Properties:

- Config: memory_pool_use_mmap
- Env Var: RCLONE_S3_MEMORY_POOL_USE_MMAP
- Type: bool
- Default: false

#### --s3-disable-http2

Disable usage of http2 for S3 backends.

There is currently an unsolved issue with the s3 (specifically minio) backend
and HTTP/2. HTTP/2 is enabled by default for the s3 backend but can be
disabled here. When the issue is solved this flag will be removed.

See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rclone/issues/3631

Properties:

- Config: disable_http2
- Env Var: RCLONE_S3_DISABLE_HTTP2
- Type: bool
- Default: false

#### --s3-download-url

Custom endpoint for downloads.
This is usually set to a CloudFront CDN URL as AWS S3 offers
cheaper egress for data downloaded through the CloudFront network.

Properties:

- Config: download_url
- Env Var: RCLONE_S3_DOWNLOAD_URL
- Type: string
- Required: false

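For example, a remote that uploads directly to S3 but downloads through CloudFront might be configured like this (a sketch; the distribution domain is hypothetical and must be one you have pointed at the bucket):

```
[s3cdn]
type = s3
provider = AWS
env_auth = true
region = us-east-1
download_url = https://d111111abcdef8.cloudfront.net
```
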
#### --s3-use-multipart-etag

Whether to use ETag in multipart uploads for verification.

This should be true, false or left unset to use the default for the provider.

Properties:

- Config: use_multipart_etag
- Env Var: RCLONE_S3_USE_MULTIPART_ETAG
- Type: Tristate
- Default: unset

#### --s3-use-presigned-request

Whether to use a presigned request or PutObject for single part uploads.

If this is false rclone will use PutObject from the AWS SDK to upload
an object.

Versions of rclone < 1.59 use presigned requests to upload a single
part object and setting this flag to true will re-enable that
functionality. This shouldn't be necessary except in exceptional
circumstances or for testing.

Properties:

- Config: use_presigned_request
- Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST
- Type: bool
- Default: false

#### --s3-versions

Include old versions in directory listings.

Properties:

- Config: versions
- Env Var: RCLONE_S3_VERSIONS
- Type: bool
- Default: false

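For example, to include old versions when listing a bucket that has versioning enabled (a read-only sketch; the bucket name is illustrative):

    rclone ls --s3-versions remote:bucket
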
### Metadata

User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.

Here are the possible system metadata items for the s3 backend.

| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
| btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
| cache-control | Cache-Control header | string | no-cache | N |
| content-disposition | Content-Disposition header | string | inline | N |
| content-encoding | Content-Encoding header | string | gzip | N |
| content-language | Content-Language header | string | en-US | N |
| content-type | Content-Type header | string | text/plain | N |
| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
| tier | Tier of the object | string | GLACIER | **Y** |

See the [metadata](/docs/#metadata) docs for more info.

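To see the metadata rclone reads for a particular object, the global `-M`/`--metadata` flag can be combined with `lsjson` (a sketch; the path is illustrative):

    rclone lsjson -M remote:bucket/path/to/file.txt
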
## Backend commands

Here are the commands specific to the s3 backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the [backend](/commands/rclone_backend/) command for more
info on how to pass options and arguments.

These can be run on a running backend using the rc command
[backend/command](/rc/#backend-command).

### restore

Restore objects from GLACIER to normal storage

    rclone backend restore remote: [options] [<arguments>+]

This command can be used to restore one or more objects from GLACIER
to normal storage.

Usage Examples:

    rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS]
    rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
    rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]

This flag also obeys the filters. Test first with -i/--interactive or --dry-run flags

    rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard

All the objects shown will be marked for restore, then

    rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard

It returns a list of status dictionaries with Remote and Status
keys. The Status will be OK if it was successful or an error message
if not.

    [
        {
            "Status": "OK",
            "Path": "test.txt"
        },
        {
            "Status": "OK",
            "Path": "test/file4.txt"
        }
    ]

Options:

- "description": The optional description for the job.
- "lifetime": Lifetime of the active copy in days
- "priority": Priority of restore: Standard|Expedited|Bulk

### list-multipart-uploads

List the unfinished multipart uploads

    rclone backend list-multipart-uploads remote: [options] [<arguments>+]

This command lists the unfinished multipart uploads in JSON format.

    rclone backend list-multipart s3:bucket/path/to/object

It returns a dictionary of buckets with values as lists of unfinished
multipart uploads.

You can call it with no bucket in which case it lists all buckets, with
a bucket or with a bucket and path.

    {
      "rclone": [
        {
          "Initiated": "2020-06-26T14:20:36Z",
          "Initiator": {
            "DisplayName": "XXX",
            "ID": "arn:aws:iam::XXX:user/XXX"
          },
          "Key": "KEY",
          "Owner": {
            "DisplayName": null,
            "ID": "XXX"
          },
          "StorageClass": "STANDARD",
          "UploadId": "XXX"
        }
      ],
      "rclone-1000files": [],
      "rclone-dst": []
    }

### cleanup

Remove unfinished multipart uploads.

    rclone backend cleanup remote: [options] [<arguments>+]

This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.

Note that you can use -i/--dry-run with this command to see what it
would do.

    rclone backend cleanup s3:bucket/path/to/object
    rclone backend cleanup -o max-age=7w s3:bucket/path/to/object

Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.

Options:

- "max-age": Max age of upload to delete

### versioning

Set/get versioning support for a bucket.

    rclone backend versioning remote: [options] [<arguments>+]

This command sets versioning support if a parameter is
passed and then returns the current versioning status for the bucket
supplied.

    rclone backend versioning s3:bucket # read status only
    rclone backend versioning s3:bucket Enabled
    rclone backend versioning s3:bucket Suspended

It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning
has been enabled the status can't be set back to "Unversioned".

{{< rem autogenerated options stop >}}

### Anonymous access to public buckets

If you want to use rclone to access a public bucket, configure with a
blank `access_key_id` and `secret_access_key`. Your config should end
up looking like this:

```
[anons3]
type = s3
provider = AWS
env_auth = false
access_key_id =
secret_access_key =
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
```

Then use it as normal with the name of the public bucket, e.g.

    rclone lsd anons3:1000genomes

You will be able to list and copy data but not upload it.

## Providers

### AWS S3

This is the provider used as the main example and described in the [configuration](#configuration) section above.

### AWS Snowball Edge

[AWS Snowball](https://aws.amazon.com/snowball/) is a hardware
appliance used for transferring bulk data back to AWS. Its main
software interface is S3 object storage.

To use rclone with AWS Snowball Edge devices, configure as standard
for an 'S3 Compatible Service'.

If using rclone pre v1.59 be sure to set `upload_cutoff = 0` otherwise
you will run into authentication header issues as the snowball device
does not support query parameter based authentication.

With rclone v1.59 or later setting `upload_cutoff` should not be necessary.

e.g.
```
[snowball]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = http://[IP of Snowball]:8080
upload_cutoff = 0
```

### Ceph

[Ceph](https://ceph.com/) is an open-source, unified, distributed
storage system designed for excellent performance, reliability and
scalability. It has an S3 compatible object storage interface.

To use rclone with Ceph, configure as above but leave the region blank
and set the endpoint. You should end up with something like this in
your config:

```
[ceph]
type = s3
provider = Ceph
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region =
endpoint = https://ceph.endpoint.example.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```

If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a
version of rclone before v1.59 then you may need to supply the
parameter `--s3-upload-cutoff 0` or put this in the config file as
`upload_cutoff 0` to work around a bug which causes uploading of small
files to fail.

Note also that Ceph sometimes puts `/` in the passwords it gives
users. If you read the secret access key using the command line tools
you will get a JSON blob with the `/` escaped as `\/`. Make sure you
only write `/` in the secret access key.

E.g. the dump from Ceph looks something like this (irrelevant keys
removed).

```
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
```

Because this is a json dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.

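For instance, one way to extract the unescaped secret key straight out of that dump (a sketch assuming `jq` is installed; `jq -r` prints the raw string, so the `\/` comes out as `/`):

    radosgw-admin user info --uid=xxx | jq -r '.keys[0].secret_key'
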
### Cloudflare R2 {#cloudflare-r2}

[Cloudflare R2](https://blog.cloudflare.com/r2-open-beta/) Storage
allows developers to store large amounts of unstructured data without
the costly egress bandwidth fees associated with typical cloud storage
services.

Here is an example of making a Cloudflare R2 configuration. First run:

    rclone config

This will guide you through an interactive setup process.

Note that all buckets are private, and all are stored in the same
"auto" region. It is necessary to use Cloudflare workers to share the
content of a bucket publicly.

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> r2
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
   \ (s3)
...
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
...
XX / Cloudflare R2 Storage
   \ (Cloudflare)
...
provider> Cloudflare
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> SECRET_ACCESS_KEY
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
   \ (auto)
region> 1
Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Enter a value. Press Enter to leave empty.
endpoint> https://ACCOUNT_ID.r2.cloudflarestorage.com
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This will leave your config looking something like:

```
[r2]
type = s3
provider = Cloudflare
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = auto
endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
acl = private
```

Now run `rclone lsf r2:` to see your buckets and `rclone lsf
r2:bucket` to look within a bucket.

### Dreamhost

Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
an object storage system based on CEPH.

To use rclone with Dreamhost, configure as above but leave the region blank
and set the endpoint. You should end up with something like this in
your config:

```
[dreamobjects]
type = s3
provider = DreamHost
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
region =
endpoint = objects-us-west-1.dream.io
location_constraint =
acl = private
server_side_encryption =
storage_class =
```

### DigitalOcean Spaces

[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.

To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`.

When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings.

Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below:

```
Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region>
endpoint> nyc3.digitaloceanspaces.com
location_constraint>
acl>
storage_class>
```

The resulting configuration file should look like:

```
[spaces]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```

Once configured, you can create a new Space and begin copying files. For example:

```
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```

### Huawei OBS {#huawei-obs}

Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere.

OBS provides an S3 interface; you can copy and modify the following configuration and add it to your rclone configuration file.
```
[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private
```

Or you can also configure via the interactive command line:
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> obs
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
   \ (s3)
[snip]
Storage> 5
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
 9 / Huawei Object Storage Service
   \ (HuaweiOBS)
[snip]
provider> 9
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> your-access-key-id
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> your-secret-access-key
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / AF-Johannesburg
   \ (af-south-1)
 2 / AP-Bangkok
   \ (ap-southeast-2)
[snip]
region> 1
Option endpoint.
Endpoint for OBS API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / AF-Johannesburg
   \ (obs.af-south-1.myhuaweicloud.com)
 2 / AP-Bangkok
   \ (obs.ap-southeast-2.myhuaweicloud.com)
[snip]
endpoint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
[snip]
acl> 1
Edit advanced config?
y) Yes
n) No (default)
y/n>
--------------------
[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
obs                  s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
```

### IBM COS (S3)

Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit http://www.ibm.com/cloud/object-storage

To configure access to IBM COS S3, follow the steps below:

1. Run rclone config and select n for a new remote.
```
2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```

2. Enter the name for the configuration
```
name> <YOUR NAME>
```

3. Select "s3" storage.
```
Choose a number from below, or type in your own value
 1 / Alias for an existing remote
   \ "alias"
 2 / Amazon Drive
   \ "amazon cloud drive"
 3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, IBM COS)
   \ "s3"
 4 / Backblaze B2
   \ "b2"
[snip]
23 / HTTP
   \ "http"
Storage> 3
```

4. Select IBM COS as the S3 Storage Provider.
```
Choose the S3 provider.
Choose a number from below, or type in your own value
 1 / Choose this option to configure Storage to AWS S3
   \ "AWS"
 2 / Choose this option to configure Storage to Ceph Systems
   \ "Ceph"
 3 / Choose this option to configure Storage to Dreamhost
   \ "Dreamhost"
 4 / Choose this option to configure Storage to IBM COS S3
   \ "IBMCOS"
 5 / Choose this option to configure Storage to Minio
   \ "Minio"
Provider>4
```

5. Enter the Access Key and Secret.
```
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> <>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> <>
```

6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.
```
Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
Choose a number from below, or type in your own value
 1 / US Cross Region Endpoint
   \ "s3-api.us-geo.objectstorage.softlayer.net"
 2 / US Cross Region Dallas Endpoint
   \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
 3 / US Cross Region Washington DC Endpoint
   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
 4 / US Cross Region San Jose Endpoint
   \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
 5 / US Cross Region Private Endpoint
   \ "s3-api.us-geo.objectstorage.service.networklayer.com"
 6 / US Cross Region Dallas Private Endpoint
   \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
 7 / US Cross Region Washington DC Private Endpoint
   \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
 8 / US Cross Region San Jose Private Endpoint
   \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
 9 / US Region East Endpoint
   \ "s3.us-east.objectstorage.softlayer.net"
10 / US Region East Private Endpoint
   \ "s3.us-east.objectstorage.service.networklayer.com"
11 / US Region South Endpoint
[snip]
34 / Toronto Single Site Private Endpoint
   \ "s3.tor01.objectstorage.service.networklayer.com"
endpoint>1
```

7. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter.
```
 1 / US Cross Region Standard
   \ "us-standard"
 2 / US Cross Region Vault
   \ "us-vault"
 3 / US Cross Region Cold
   \ "us-cold"
 4 / US Cross Region Flex
   \ "us-flex"
 5 / US East Region Standard
   \ "us-east-standard"
 6 / US East Region Vault
   \ "us-east-vault"
 7 / US East Region Cold
   \ "us-east-cold"
 8 / US East Region Flex
   \ "us-east-flex"
 9 / US South Region Standard
   \ "us-south-standard"
10 / US South Region Vault
   \ "us-south-vault"
[snip]
32 / Toronto Flex
   \ "tor01-flex"
location_constraint>1
```

8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
```
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
   \ "public-read"
 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
   \ "authenticated-read"
acl> 1
```

9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this
```
[xxx]
type = s3
Provider = IBMCOS
access_key_id = xxx
secret_access_key = yyy
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
```

10. Execute rclone commands
```
1)  Create a bucket.
    rclone mkdir IBM-COS-XREGION:newbucket
2)  List available buckets.
    rclone lsd IBM-COS-XREGION:
    -1 2017-11-08 21:16:22        -1 test
    -1 2018-02-14 20:16:39        -1 newbucket
3)  List contents of a bucket.
    rclone ls IBM-COS-XREGION:newbucket
    18685952 test.exe
4)  Copy a file from local to remote.
    rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
5)  Copy a file from remote to local.
    rclone copy IBM-COS-XREGION:newbucket/file.txt .
6)  Delete a file on remote.
    rclone delete IBM-COS-XREGION:newbucket/file.txt
```

### IDrive e2 {#idrive-e2}
|
||
|
||
Here is an example of making an [IDrive e2](https://www.idrive.com/e2/)
|
||
configuration. First run:
|
||
|
||
rclone config
|
||
|
||
This will guide you through an interactive setup process.
|
||
|
||
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Enter name for new remote.
name> e2

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
   \ (s3)
[snip]
Storage> s3

Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / IDrive e2
   \ (IDrive)
[snip]
provider> IDrive

Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth>

Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> YOUR_ACCESS_KEY

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YOUR_SECRET_KEY

Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
 5 | Bucket owner gets READ access.
   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-read)
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-full-control)
acl>

Edit advanced config?
y) Yes
n) No (default)
y/n>

Configuration complete.
Options:
- type: s3
- provider: IDrive
- access_key_id: YOUR_ACCESS_KEY
- secret_access_key: YOUR_SECRET_KEY
- endpoint: q9d9.la12.idrivee2-5.com
Keep this "e2" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
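
Once the remote is configured you can use it like any other S3 remote. For example, to make a bucket and copy files into it (`mybucket` here is just a placeholder name):

```
rclone mkdir e2:mybucket
rclone copy /path/to/files e2:mybucket
```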

### Minio

[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.

It is very easy to install and provides an S3 compatible server which can be used by rclone.

To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide).

When it configures itself, Minio will print something like this:

```
Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region:    us-east-1
SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis

Browser Access:
   http://192.168.1.106:9000  http://172.23.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide

Drive Capacity: 26 GiB Free, 165 GiB Total
```

These details need to go into `rclone config` like this. Note that it
is important to put the region in as stated above.

```
env_auth> 1
access_key_id> USWUXHGYZQYFYFFIT3RE
secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>
```

Which makes the config file look like this:

```
[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
```

So once set up, for example, to copy files into a bucket:

```
rclone copy /path/to/files minio:bucket
```

### RackCorp {#RackCorp}

[RackCorp Object Storage](https://www.rackcorp.com/storage/s3storage) is an S3 compatible object storage platform from RackCorp.
The service is fast, reliable, well priced, and available in many strategic locations not served by other providers, so you can maintain data sovereignty.

Before you can use RackCorp Object Storage, you'll need to [sign up](https://www.rackcorp.com/signup) for an account on the RackCorp [portal](https://portal.rackcorp.com).
Next you can create an `access key`, a `secret key` and `buckets` in your location of choice.
These details are required for the next steps of configuration, when `rclone config` asks for your `access_key_id` and `secret_access_key`.

Your config should end up looking a bit like this:

```
[RCS3-demo-config]
type = s3
provider = RackCorp
env_auth = true
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = au-nsw
endpoint = s3.rackcorp.com
location_constraint = au-nsw
```
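
You can then use the remote as usual. For example, to list your buckets and copy a local directory into one of them (`mybucket` is a placeholder for a bucket you have created in the portal):

```
rclone lsd RCS3-demo-config:
rclone copy /path/to/files RCS3-demo-config:mybucket
```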

### Scaleway

[Scaleway](https://www.scaleway.com/object-storage/) Object Storage allows you to store anything from backups, logs and web assets to documents and photos.
Files can be dropped from the Scaleway console or transferred through the Scaleway API and CLI, or by using any S3-compatible tool.

Scaleway provides an S3 interface which can be configured for use with rclone like this:

```
[scaleway]
type = s3
provider = Scaleway
env_auth = false
endpoint = s3.nl-ams.scw.cloud
access_key_id = SCWXXXXXXXXXXXXXX
secret_access_key = 1111111-2222-3333-44444-55555555555555
region = nl-ams
location_constraint =
acl = private
server_side_encryption =
storage_class =
```

[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway, and it works the same way as on S3 by accepting the "GLACIER" `storage_class`. So you can configure your remote with `storage_class = GLACIER` to upload directly to C14. Keep in mind that in this state you can't read files back; you will need to restore them to the "STANDARD" storage class first before being able to read them (see the "restore" section above).
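
For example, keeping the `[scaleway]` remote above on the standard class, you could upload a one-off backup straight to C14 by overriding the storage class on the command line with the `--s3-storage-class` flag (`mybucket` is a placeholder name):

```
rclone copy --s3-storage-class GLACIER /path/to/backup scaleway:mybucket
```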

### Seagate Lyve Cloud {#lyve}

[Seagate Lyve Cloud](https://www.seagate.com/gb/en/services/cloud/storage/) is an S3
compatible object storage platform from [Seagate](https://seagate.com/) intended for enterprise use.

Here is a config run-through for a remote called `remote` - you may of
course choose a different name. Note that to create an access key
and secret key you will need to create a service account first.

```
$ rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
```

Choose the `s3` backend:

```
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
   \ (s3)
[snip]
Storage> s3
```

Choose `LyveCloud` as the S3 provider:

```
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Seagate Lyve Cloud
   \ (LyveCloud)
[snip]
provider> LyveCloud
```

Take the default (just press enter) to enter the access key and secret in the config file:

```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth>
```

```
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> XXX
```

```
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YYY
```

Leave the region blank:

```
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Use this if unsure.
 1 | Will use v4 signatures and an empty region.
   \ ()
   / Use this only if v4 signatures don't work.
 2 | E.g. pre Jewel/v10 CEPH.
   \ (other-v2-signature)
region>
```

Choose an endpoint from the list:

```
Endpoint for S3 API.
Required when using an S3 clone.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Seagate Lyve Cloud US East 1 (Virginia)
   \ (s3.us-east-1.lyvecloud.seagate.com)
 2 / Seagate Lyve Cloud US West 1 (California)
   \ (s3.us-west-1.lyvecloud.seagate.com)
 3 / Seagate Lyve Cloud AP Southeast 1 (Singapore)
   \ (s3.ap-southeast-1.lyvecloud.seagate.com)
endpoint> 1
```

Leave the location constraint blank:

```
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>
```

Choose the default ACL (`private`):

```
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
[snip]
acl>
```

And the config file should end up looking like this:

```
[remote]
type = s3
provider = LyveCloud
access_key_id = XXX
secret_access_key = YYY
endpoint = s3.us-east-1.lyvecloud.seagate.com
```
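
With that in place the remote works like any other S3 remote, for example to make a bucket and sync a directory to it (`mybucket` is a placeholder name):

```
rclone mkdir remote:mybucket
rclone sync -i /path/to/files remote:mybucket
```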

### SeaweedFS

[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage system for
blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store.
It has an S3 compatible object storage interface. SeaweedFS can also act as a
[gateway to remote S3 compatible object store](https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage)
to cache data and metadata with asynchronous write back, for fast local speed and minimized access cost.

Assuming SeaweedFS is configured with `weed shell` as follows:
```
> s3.bucket.create -name foo
> s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
{
  "identities": [
    {
      "name": "me",
      "credentials": [
        {
          "accessKey": "any",
          "secretKey": "any"
        }
      ],
      "actions": [
        "Read:foo",
        "Write:foo",
        "List:foo",
        "Tagging:foo",
        "Admin:foo"
      ]
    }
  ]
}
```

To use rclone with SeaweedFS, the above configuration should end up with something like this in
your config:

```
[seaweedfs_s3]
type = s3
provider = SeaweedFS
access_key_id = any
secret_access_key = any
endpoint = localhost:8333
```

So once set up, for example, to copy files into a bucket:

```
rclone copy /path/to/files seaweedfs_s3:foo
```

### Wasabi

[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
broad range of applications and use cases. Wasabi is designed for
individuals and organizations that require a high-performance,
reliable, and secure data storage infrastructure at minimal cost.

Wasabi provides an S3 interface which can be configured for use with
rclone like this.

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio)
   \ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia, or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
[snip]
region> us-east-1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.wasabisys.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
   \ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
storage_class>
Remote config
--------------------
[wasabi]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This will leave the config file looking like this.

```
[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```
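
Once set up, for example, to copy files into a bucket (`mybucket` is a placeholder for one of your buckets):

```
rclone copy /path/to/files wasabi:mybucket
```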

### Alibaba OSS {#alibaba-oss}

Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/)
configuration. First run:

    rclone config

This will guide you through an interactive setup process.

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> oss
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS
   \ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ "Alibaba"
 3 / Ceph Object Storage
   \ "Ceph"
[snip]
provider> Alibaba
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> accesskeyid
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> secretaccesskey
Endpoint for OSS API.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / East China 1 (Hangzhou)
   \ "oss-cn-hangzhou.aliyuncs.com"
 2 / East China 2 (Shanghai)
   \ "oss-cn-shanghai.aliyuncs.com"
 3 / North China 1 (Qingdao)
   \ "oss-cn-qingdao.aliyuncs.com"
[snip]
endpoint> 1
Canned ACL used when creating buckets and storing or copying objects.

Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
[snip]
acl> 1
The storage class to use when storing new objects in OSS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Archive storage mode.
   \ "GLACIER"
 4 / Infrequent access storage mode.
   \ "STANDARD_IA"
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[oss]
type = s3
provider = Alibaba
env_auth = false
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = oss-cn-hangzhou.aliyuncs.com
acl = private
storage_class = Standard
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
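
Once configured, the remote can be used like any other. For example, to make a bucket and copy files into it (`mybucket` is a placeholder name):

```
rclone mkdir oss:mybucket
rclone copy /path/to/files oss:mybucket
```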

### China Mobile Ecloud Elastic Object Storage (EOS) {#china-mobile-ecloud-eos}

Here is an example of making a [China Mobile Ecloud Elastic Object Storage (EOS)](https://ecloud.10086.cn/home/product-introduction/eos/)
configuration. First run:

    rclone config

This will guide you through an interactive setup process.

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> ChinaMobile
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
   \ (s3)
...
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
...
 4 / China Mobile Ecloud Elastic Object Storage (EOS)
   \ (ChinaMobile)
...
provider> ChinaMobile
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth>
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> accesskeyid
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> secretaccesskey
Option endpoint.
Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / The default endpoint - a good choice if you are unsure.
 1 | East China (Suzhou)
   \ (eos-wuxi-1.cmecloud.cn)
 2 / East China (Jinan)
   \ (eos-jinan-1.cmecloud.cn)
 3 / East China (Hangzhou)
   \ (eos-ningbo-1.cmecloud.cn)
 4 / East China (Shanghai-1)
   \ (eos-shanghai-1.cmecloud.cn)
 5 / Central China (Zhengzhou)
   \ (eos-zhengzhou-1.cmecloud.cn)
 6 / Central China (Changsha-1)
   \ (eos-hunan-1.cmecloud.cn)
 7 / Central China (Changsha-2)
   \ (eos-zhuzhou-1.cmecloud.cn)
 8 / South China (Guangzhou-2)
   \ (eos-guangzhou-1.cmecloud.cn)
 9 / South China (Guangzhou-3)
   \ (eos-dongguan-1.cmecloud.cn)
10 / North China (Beijing-1)
   \ (eos-beijing-1.cmecloud.cn)
11 / North China (Beijing-2)
   \ (eos-beijing-2.cmecloud.cn)
12 / North China (Beijing-3)
   \ (eos-beijing-4.cmecloud.cn)
13 / North China (Huhehaote)
   \ (eos-huhehaote-1.cmecloud.cn)
14 / Southwest China (Chengdu)
   \ (eos-chengdu-1.cmecloud.cn)
15 / Southwest China (Chongqing)
   \ (eos-chongqing-1.cmecloud.cn)
16 / Southwest China (Guiyang)
   \ (eos-guiyang-1.cmecloud.cn)
17 / Nouthwest China (Xian)
   \ (eos-xian-1.cmecloud.cn)
18 / Yunnan China (Kunming)
   \ (eos-yunnan.cmecloud.cn)
19 / Yunnan China (Kunming-2)
   \ (eos-yunnan-2.cmecloud.cn)
20 / Tianjin China (Tianjin)
   \ (eos-tianjin-1.cmecloud.cn)
21 / Jilin China (Changchun)
   \ (eos-jilin-1.cmecloud.cn)
22 / Hubei China (Xiangyan)
   \ (eos-hubei-1.cmecloud.cn)
23 / Jiangxi China (Nanchang)
   \ (eos-jiangxi-1.cmecloud.cn)
24 / Gansu China (Lanzhou)
   \ (eos-gansu-1.cmecloud.cn)
25 / Shanxi China (Taiyuan)
   \ (eos-shanxi-1.cmecloud.cn)
26 / Liaoning China (Shenyang)
   \ (eos-liaoning-1.cmecloud.cn)
27 / Hebei China (Shijiazhuang)
   \ (eos-hebei-1.cmecloud.cn)
28 / Fujian China (Xiamen)
   \ (eos-fujian-1.cmecloud.cn)
29 / Guangxi China (Nanning)
   \ (eos-guangxi-1.cmecloud.cn)
30 / Anhui China (Huainan)
   \ (eos-anhui-1.cmecloud.cn)
endpoint> 1
Option location_constraint.
Location constraint - must match endpoint.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / East China (Suzhou)
   \ (wuxi1)
 2 / East China (Jinan)
   \ (jinan1)
 3 / East China (Hangzhou)
   \ (ningbo1)
 4 / East China (Shanghai-1)
   \ (shanghai1)
 5 / Central China (Zhengzhou)
   \ (zhengzhou1)
 6 / Central China (Changsha-1)
   \ (hunan1)
 7 / Central China (Changsha-2)
   \ (zhuzhou1)
 8 / South China (Guangzhou-2)
   \ (guangzhou1)
 9 / South China (Guangzhou-3)
   \ (dongguan1)
10 / North China (Beijing-1)
   \ (beijing1)
11 / North China (Beijing-2)
   \ (beijing2)
12 / North China (Beijing-3)
   \ (beijing4)
13 / North China (Huhehaote)
   \ (huhehaote1)
14 / Southwest China (Chengdu)
   \ (chengdu1)
15 / Southwest China (Chongqing)
   \ (chongqing1)
16 / Southwest China (Guiyang)
   \ (guiyang1)
17 / Nouthwest China (Xian)
   \ (xian1)
18 / Yunnan China (Kunming)
   \ (yunnan)
19 / Yunnan China (Kunming-2)
   \ (yunnan2)
20 / Tianjin China (Tianjin)
   \ (tianjin1)
21 / Jilin China (Changchun)
   \ (jilin1)
22 / Hubei China (Xiangyan)
   \ (hubei1)
23 / Jiangxi China (Nanchang)
   \ (jiangxi1)
24 / Gansu China (Lanzhou)
   \ (gansu1)
25 / Shanxi China (Taiyuan)
   \ (shanxi1)
26 / Liaoning China (Shenyang)
   \ (liaoning1)
27 / Hebei China (Shijiazhuang)
   \ (hebei1)
28 / Fujian China (Xiamen)
   \ (fujian1)
29 / Guangxi China (Nanning)
   \ (guangxi1)
30 / Anhui China (Huainan)
   \ (anhui1)
location_constraint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
acl> private
Option server_side_encryption.
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / None
   \ ()
 2 / AES256
   \ (AES256)
server_side_encryption>
Option storage_class.
The storage class to use when storing new objects in ChinaMobile.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Default
   \ ()
 2 / Standard storage class
   \ (STANDARD)
 3 / Archive storage mode
   \ (GLACIER)
 4 / Infrequent access storage mode
   \ (STANDARD_IA)
storage_class>
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
[ChinaMobile]
type = s3
provider = ChinaMobile
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = eos-wuxi-1.cmecloud.cn
location_constraint = wuxi1
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
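
You can then use the remote in the usual way, for example to list your buckets and copy into one (`mybucket` is a placeholder for a bucket you have created):

```
rclone lsd ChinaMobile:
rclone copy /path/to/files ChinaMobile:mybucket
```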

### ArvanCloud {#arvan-cloud}

[ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) Object Storage goes beyond the limits of traditional file storage.
It gives you access to backup and archived files and allows sharing.
Files such as profile images, images sent by users, or scanned documents can be stored securely and easily in the ArvanCloud Object Storage service.

ArvanCloud provides an S3 interface which can be configured for use with
rclone like this.

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
name> ArvanCloud
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio)
   \ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia, or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
[snip]
region>
Endpoint for S3 API.
Leave blank if using ArvanCloud to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.arvanstorage.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for Iran-Tehran Region.
   \ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
storage_class>
Remote config
--------------------
[ArvanCloud]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = ir-thr-at1
endpoint = s3.arvanstorage.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This will leave the config file looking like this.

```
[ArvanCloud]
type = s3
provider = ArvanCloud
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.arvanstorage.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```
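
Once set up, for example, to copy files into a bucket (`mybucket` is a placeholder name):

```
rclone copy /path/to/files ArvanCloud:mybucket
```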

### Tencent COS {#tencent-cos}

[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable and low-cost, with low latency and massive capacity.

To configure access to Tencent COS, follow the steps below:

1. Run `rclone config` and select `n` for a new remote.

```
rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```

2. Give a name to the configuration. For example, name it 'cos'.

```
name> cos
```

3. Select `s3` storage.

```
Choose a number from below, or type in your own value
 1 / 1Fichier
   \ "fichier"
 2 / Alias for an existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS
   \ "s3"
[snip]
Storage> s3
```

4. Select `TencentCOS` provider.

```
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
[snip]
11 / Tencent Cloud Object Storage (COS)
   \ "TencentCOS"
[snip]
provider> TencentCOS
```

5. Enter your SecretId and SecretKey of Tencent Cloud.

```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx
```

6. Select the endpoint for Tencent COS. These are the standard endpoints for the different regions.

```
 1 / Beijing Region.
   \ "cos.ap-beijing.myqcloud.com"
 2 / Nanjing Region.
   \ "cos.ap-nanjing.myqcloud.com"
 3 / Shanghai Region.
   \ "cos.ap-shanghai.myqcloud.com"
 4 / Guangzhou Region.
   \ "cos.ap-guangzhou.myqcloud.com"
[snip]
endpoint> 4
```

7. Choose the acl and storage class.

```
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets Full_CONTROL. No one else has access rights (default).
   \ "default"
[snip]
acl> 1
The storage class to use when storing new objects in Tencent COS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Default
   \ ""
[snip]
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[cos]
type = s3
provider = TencentCOS
env_auth = false
access_key_id = xxx
secret_access_key = xxx
endpoint = cos.ap-guangzhou.myqcloud.com
acl = default
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
cos                  s3
```
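
With the remote in place you can, for example, list your buckets and copy a directory into one of them (`mybucket` is a placeholder for a bucket you have created):

```
rclone lsd cos:
rclone copy /path/to/files cos:mybucket
```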

### Netease NOS

For Netease NOS, configure as usual with `rclone config`,
setting the provider to `Netease`. This will automatically set
`force_path_style = false`, which is necessary for it to run properly.
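
A minimal config might then look something like the sketch below. The credentials are placeholders and the endpoint is illustrative - use the one for your NOS region. `force_path_style` is set for you and does not need to appear in the file.

```
[netease]
type = s3
provider = Netease
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
endpoint = nos-eastchina1.126.net
```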

### Storj

Storj is a decentralized cloud storage service which can be used through its
native protocol or an S3 compatible gateway.

The S3 compatible gateway is configured using `rclone config` with a
type of `s3` and with a provider name of `Storj`. Here is an example
run of the configurator.

```
Type of storage to configure.
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> XXXX (as shown when creating the access grant)
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> XXXX (as shown when creating the access grant)
Option endpoint.
Endpoint of the Shared Gateway.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / EU1 Shared Gateway
   \ (gateway.eu1.storjshare.io)
 2 / US1 Shared Gateway
   \ (gateway.us1.storjshare.io)
 3 / Asia-Pacific Shared Gateway
   \ (gateway.ap1.storjshare.io)
endpoint> 1 (as shown when creating the access grant)
Edit advanced config?
y) Yes
n) No (default)
y/n> n
```

Note that S3 credentials are generated when you [create an access
grant](https://docs.storj.io/dcs/api-reference/s3-compatible-gateway#usage).
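
The resulting config file entry might look something like this - the remote name and keys are placeholders, and the endpoint is the one chosen in the run above:

```
[storjgw]
type = s3
provider = Storj
access_key_id = XXXX
secret_access_key = XXXX
endpoint = gateway.eu1.storjshare.io
```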

#### Backend quirks

- `--chunk-size` is forced to be 64 MiB or greater. This will use more
  memory than the default of 5 MiB.
- Server side copy is disabled as it isn't currently supported in the
  gateway.
- GetTier and SetTier are not supported.

#### Backend bugs

Due to [issue #39](https://github.com/storj/gateway-mt/issues/39)
uploading multipart files via the S3 gateway causes them to lose their
metadata. For rclone's purpose this means that the modification time
is not stored, nor is any MD5SUM (if one is available from the
source).

This has the following consequences:

- Using `rclone rcat` will fail as the metadata doesn't match after upload
- Uploading files with `rclone mount` will fail for the same reason
    - This can be worked around by using `--vfs-cache-mode writes` or `--vfs-cache-mode full` or setting `--s3-upload-cutoff` large
- Files uploaded via a multipart upload won't have their modtimes stored
    - This will mean that `rclone sync` will likely keep trying to upload files bigger than `--s3-upload-cutoff`
    - This can be worked around with `--checksum` or `--size-only` or setting `--s3-upload-cutoff` large
    - The maximum value for `--s3-upload-cutoff` is 5 GiB though

One general purpose workaround is to set `--s3-upload-cutoff 5G`. This
means that rclone will upload files smaller than 5 GiB as single parts.
Note that this can be set in the config file with `upload_cutoff = 5G`
or configured in the advanced settings, as sketched below. If you regularly transfer
files larger than 5 GiB then using `--checksum` or `--size-only` in
`rclone sync` is the recommended workaround.
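
As a sketch, either of the following applies that workaround (the remote and bucket names are placeholders; `--s3-upload-cutoff 5G` could equally be set as `upload_cutoff = 5G` in the remote's config section):

```
rclone sync -i --s3-upload-cutoff 5G /path/to/files remote:bucket
rclone sync -i --checksum /path/to/files remote:bucket
```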

#### Comparison with the native protocol

Use [the native protocol](/storj) to take advantage of
client-side encryption as well as to achieve the best possible
download performance. Uploads will be erasure-coded locally, thus a
1 GB upload will result in 2.68 GB of data being uploaded to storage
nodes across the network.

Use this backend and the S3 compatible Hosted Gateway to increase
upload performance and reduce the load on your systems and network.
Uploads will be encrypted and erasure-coded server-side, thus a 1 GB
upload will result in only 1 GB of data being uploaded to storage
nodes across the network.

For a more detailed comparison please check the documentation of the
[storj](/storj) backend.

## Limitations

`rclone about` is not supported by the S3 backend. Backends without
this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/).