---
title: "Amazon S3"
description: "Rclone docs for Amazon S3"
versionIntroduced: "v0.91"
---

# {{< icon "fab fa-amazon" >}} Amazon S3 Storage Providers
The S3 backend can be used with a number of different providers:

{{< provider_list >}}
{{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#configuration" start="true" >}}
{{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
{{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
{{< provider name="Cloudflare R2" home="https://blog.cloudflare.com/r2-open-beta/" config="/s3/#cloudflare-r2" >}}
{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
{{< provider name="GCS" home="https://cloud.google.com/storage/docs" config="/s3/#google-cloud-storage" >}}
{{< provider name="Huawei OBS" home="https://www.huaweicloud.com/intl/en-us/product/obs.html" config="/s3/#huawei-obs" >}}
{{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
{{< provider name="IDrive e2" home="https://www.idrive.com/e2/?refer=rclone" config="/s3/#idrive-e2" >}}
{{< provider name="IONOS Cloud" home="https://cloud.ionos.com/storage/object-storage" config="/s3/#ionos" >}}
{{< provider name="Leviia Object Storage" home="https://www.leviia.com/object-storage/" config="/s3/#leviia" >}}
{{< provider name="Liara Object Storage" home="https://liara.ir/landing/object-storage" config="/s3/#liara-cloud" >}}
{{< provider name="Linode Object Storage" home="https://www.linode.com/products/object-storage/" config="/s3/#linode" >}}
{{< provider name="Magalu Object Storage" home="https://magalu.cloud/object-storage/" config="/s3/#magalu" >}}
{{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}}
{{< provider name="Outscale" home="https://en.outscale.com/storage/outscale-object-storage/" config="/s3/#outscale" >}}
{{< provider name="Petabox" home="https://petabox.io/" config="/s3/#petabox" >}}
{{< provider name="Qiniu Cloud Object Storage (Kodo)" home="https://www.qiniu.com/en/products/kodo" config="/s3/#qiniu" >}}
{{< provider name="RackCorp Object Storage" home="https://www.rackcorp.com/" config="/s3/#RackCorp" >}}
{{< provider name="Rclone Serve S3" home="/commands/rclone_serve_http/" config="/s3/#rclone" >}}
{{< provider name="Scaleway" home="https://www.scaleway.com/en/object-storage/" config="/s3/#scaleway" >}}
{{< provider name="Seagate Lyve Cloud" home="https://www.seagate.com/gb/en/services/cloud/storage/" config="/s3/#lyve" >}}
{{< provider name="SeaweedFS" home="https://github.com/chrislusf/seaweedfs/" config="/s3/#seaweedfs" >}}
{{< provider name="Selectel" home="https://selectel.ru/services/cloud/storage/" config="/s3/#selectel" >}}
{{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}}
{{< provider name="Storj" home="https://storj.io/" config="/s3/#storj" >}}
{{< provider name="Synology C2 Object Storage" home="https://c2.synology.com/en-global/object-storage/overview" config="/s3/#synology-c2" >}}
{{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" end="true" >}}
{{< /provider_list >}}

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command). You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.

Once you have made a remote (see the provider specific section above)
you can use it like this:

See all buckets

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync --interactive /home/local/directory remote:bucket

## Configuration

Here is an example of making an s3 configuration for the AWS S3 provider.
Most of it applies to the other providers as well; any differences are
described [below](#providers).

First run

    rclone config

This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
2 / Ceph Object Storage
\ "Ceph"
3 / DigitalOcean Spaces
\ "DigitalOcean"
4 / Dreamhost DreamObjects
\ "Dreamhost"
5 / IBM COS S3
\ "IBMCOS"
6 / Minio Object Storage
\ "Minio"
7 / Wasabi Object Storage
\ "Wasabi"
8 / Any other S3 compatible provider
\ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YYY
Region to connect to.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia, or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
/ US East (Ohio) Region
2 | Needs location constraint us-east-2.
\ "us-east-2"
/ US West (Oregon) Region
3 | Needs location constraint us-west-2.
\ "us-west-2"
/ US West (Northern California) Region
4 | Needs location constraint us-west-1.
\ "us-west-1"
/ Canada (Central) Region
5 | Needs location constraint ca-central-1.
\ "ca-central-1"
/ EU (Ireland) Region
6 | Needs location constraint EU or eu-west-1.
\ "eu-west-1"
/ EU (London) Region
7 | Needs location constraint eu-west-2.
\ "eu-west-2"
/ EU (Frankfurt) Region
8 | Needs location constraint eu-central-1.
\ "eu-central-1"
/ Asia Pacific (Singapore) Region
9 | Needs location constraint ap-southeast-1.
\ "ap-southeast-1"
/ Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
\ "ap-southeast-2"
/ Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
\ "ap-northeast-1"
/ Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
\ "ap-northeast-2"
/ Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
\ "ap-south-1"
/ Asia Pacific (Hong Kong) Region
14 | Needs location constraint ap-east-1.
\ "ap-east-1"
/ South America (Sao Paulo) Region
15 | Needs location constraint sa-east-1.
\ "sa-east-1"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
\ ""
2 / US East (Ohio) Region.
\ "us-east-2"
3 / US West (Oregon) Region.
\ "us-west-2"
4 / US West (Northern California) Region.
\ "us-west-1"
5 / Canada (Central) Region.
\ "ca-central-1"
6 / EU (Ireland) Region.
\ "eu-west-1"
7 / EU (London) Region.
\ "eu-west-2"
8 / EU Region.
\ "EU"
9 / Asia Pacific (Singapore) Region.
\ "ap-southeast-1"
10 / Asia Pacific (Sydney) Region.
\ "ap-southeast-2"
11 / Asia Pacific (Tokyo) Region.
\ "ap-northeast-1"
12 / Asia Pacific (Seoul)
\ "ap-northeast-2"
13 / Asia Pacific (Mumbai)
\ "ap-south-1"
14 / Asia Pacific (Hong Kong)
\ "ap-east-1"
15 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ "authenticated-read"
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-read"
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Reduced redundancy storage class
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
5 / One Zone Infrequent Access storage class
\ "ONEZONE_IA"
6 / Glacier storage class
\ "GLACIER"
7 / Glacier Deep Archive storage class
\ "DEEP_ARCHIVE"
8 / Intelligent-Tiering storage class
\ "INTELLIGENT_TIERING"
9 / Glacier Instant Retrieval storage class
\ "GLACIER_IR"
storage_class> 1
Remote config
Configuration complete.
Options:
- type: s3
- provider: AWS
- env_auth: false
- access_key_id: XXX
- secret_access_key: YYY
- region: us-east-1
- endpoint:
- location_constraint:
- acl: private
- server_side_encryption:
- storage_class:
Keep this "remote" remote?
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
```

### Modification times and hashes

#### Modification times

The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as floating point since the epoch, accurate to 1 ns.

If the modification time needs to be updated rclone will attempt to perform a
server-side copy to update the modification time if the object can be copied
in a single part. In the case the object is larger than 5 GiB or is in Glacier
or Glacier Deep Archive storage the object will be uploaded rather than copied.

Note that reading this from the object takes an additional `HEAD`
request as the metadata isn't returned in object listings.
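
As an illustration (the bucket and path are placeholders), any listing that
shows times, such as `rclone lsl`, will read this metadata and therefore issue
the extra `HEAD` requests described above:

    rclone lsl remote:bucket/path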

#### Hashes

For small objects which weren't uploaded as multipart uploads (objects
sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
the `ETag:` header as an MD5 checksum.

However for objects which were uploaded as multipart uploads or with
server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
longer the MD5 sum of the data, so rclone adds an additional piece of
metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
the same format as is required for `Content-MD5`). You can use base64 -d
and hexdump to check this value manually:

    echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump

or you can use `rclone check` to verify the hashes are OK.

For large objects, calculating this hash can take some time so the
addition of this hash can be disabled with `--s3-disable-checksum`.
This will mean that these objects do not have an MD5 checksum.

Note that reading this from the object takes an additional `HEAD`
request as the metadata isn't returned in object listings.
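
For example (the paths are placeholders), `rclone check` compares a local
directory against a bucket using sizes and, where available, these hashes:

    rclone check /path/to/local remote:bucket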

### Reducing costs

#### Avoiding HEAD requests to read the modification time

By default, rclone will use the modification time of objects stored in
S3 for syncing. This is stored in object metadata which unfortunately
takes an extra HEAD request to read which can be expensive (in time
and money).

The modification time is used by default for all operations that
require checking the time a file was last updated. It allows rclone to
treat the remote more like a true filesystem, but it is inefficient on
S3 because it requires an extra API call to retrieve the metadata.

The extra API calls can be avoided when syncing (using `rclone sync`
or `rclone copy`) in a few different ways, each with its own
tradeoffs.

- `--size-only`
    - Only checks the size of files.
    - Uses no extra transactions.
    - If the file doesn't change size then rclone won't detect it has
      changed.
    - `rclone sync --size-only /path/to/source s3:bucket`
- `--checksum`
    - Checks the size and MD5 checksum of files.
    - Uses no extra transactions.
    - The most accurate detection of changes possible.
    - Will cause the source to read an MD5 checksum which, if it is a
      local disk, will cause lots of disk activity.
    - If the source and destination are both S3 this is the
      **recommended** flag to use for maximum efficiency.
    - `rclone sync --checksum /path/to/source s3:bucket`
- `--update --use-server-modtime`
    - Uses no extra transactions.
    - Modification time becomes the time the object was uploaded.
    - For many operations this is sufficient to determine if it needs
      uploading.
    - Using `--update` along with `--use-server-modtime` avoids the
      extra API call and uploads files whose local modification time
      is newer than the time it was last uploaded.
    - Files created with timestamps in the past will be missed by the sync.
    - `rclone sync --update --use-server-modtime /path/to/source s3:bucket`

These flags can and should be used in combination with `--fast-list` -
see below.

If using `rclone mount` or any command using the VFS (eg `rclone
serve`) commands then you might want to consider using the VFS flag
`--no-modtime` which will stop rclone reading the modification time
for every object. You could also use `--use-server-modtime` if you are
happy with the modification times of the objects being the time of
upload.
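
For example, a mount that avoids reading per-object modification times could
be started like this (the mount point path is just an illustration):

    rclone mount --no-modtime remote:bucket /path/to/mountpoint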

#### Avoiding GET requests to read directory listings

Rclone's default directory traversal is to process each directory
individually. This takes one API call per directory. Using the
`--fast-list` flag will read all info about the objects into
memory first using a smaller number of API calls (one per 1000
objects). See the [rclone docs](/docs/#fast-list) for more details.

    rclone sync --fast-list --checksum /path/to/source s3:bucket

`--fast-list` trades off API transactions for memory use. As a rough
guide rclone uses 1k of memory per object stored, so using
`--fast-list` on a sync of a million objects will use roughly 1 GiB of
RAM.

If you are only copying a small number of files into a big repository
then using `--no-traverse` is a good idea. This finds objects directly
instead of through directory listings. You can do a "top-up" sync very
cheaply by using `--max-age` and `--no-traverse` to copy only recent
files, eg

    rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket

You'd then do a full `rclone sync` less often.

Note that `--fast-list` isn't required in the top-up sync.

#### Avoiding HEAD requests after PUT

By default, rclone will HEAD every object it uploads. It does this to
check the object got uploaded correctly.

You can disable this with the [--s3-no-head](#s3-no-head) option - see
there for more details.

Setting this flag increases the chance for undetected upload failures.
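
For example, to skip the post-upload `HEAD` check on a copy (the source path
is a placeholder), accepting the reduced verification described above:

    rclone copy --s3-no-head /path/to/source s3:bucket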

### Increasing performance

#### Using server-side copy

If you are copying objects between S3 buckets in the same region, you should
use server-side copy.
This is much faster than downloading and re-uploading the objects, as no data is transferred.

For rclone to use server-side copy, you must use the same remote for the source and destination.

    rclone copy s3:source-bucket s3:destination-bucket

When using server-side copy, the performance is limited by the rate at which
rclone issues API requests to S3.
See below for how to increase the number of API requests rclone makes.

#### Increasing the rate of API requests

You can increase the rate of API requests to S3 by increasing the parallelism
using the `--transfers` and `--checkers` options.

Rclone uses very conservative defaults for these settings, as not all providers
support high rates of requests. Depending on your provider, you can
significantly increase the number of transfers and checkers.

For example, with AWS S3, you can increase the number of checkers to values
like 200. If you are doing a server-side copy, you can also increase the
number of transfers to 200.

    rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket

You will need to experiment with these values to find the optimal settings for your setup.

### Versions

When bucket versioning is enabled (this can be done with rclone with
the [`rclone backend versioning`](#versioning) command) when rclone
uploads a new version of a file it creates a
[new version of it](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html).
Likewise when you delete a file, the old version will be marked hidden
and still be available.

Old versions of files, where available, are visible using the
[`--s3-versions`](#s3-versions) flag.

It is also possible to view a bucket as it was at a certain point in
time, using the [`--s3-version-at`](#s3-version-at) flag. This will
show the file versions as they were at that time, showing files that
have been deleted afterwards, and hiding files that were created
since.

If you wish to remove all the old versions then you can use the
[`rclone backend cleanup-hidden remote:bucket`](#cleanup-hidden)
command which will delete all the old hidden versions of files,
leaving the current ones intact. You can also supply a path and only
old versions under that path will be deleted, e.g.
`rclone backend cleanup-hidden remote:bucket/path/to/stuff`.

When you `purge` a bucket, the current and the old versions will be
deleted then the bucket will be deleted.

However `delete` will cause the current versions of the files to
become hidden old versions.

Here is a session showing the listing and retrieval of an old
version followed by a `cleanup` of the old versions.

Show current version and all the versions with `--s3-versions` flag.
```
$ rclone -q ls s3:cleanup-test
9 one.txt
$ rclone -q --s3-versions ls s3:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
```
Retrieve an old version
```
$ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
```
Clean up all the old versions and show that they've gone.
```
$ rclone -q backend cleanup-hidden s3:cleanup-test
$ rclone -q ls s3:cleanup-test
9 one.txt
$ rclone -q --s3-versions ls s3:cleanup-test
9 one.txt
```

#### Versions naming caveat

When using the `--s3-versions` flag, rclone relies on the file name to
work out whether the objects are versions or not. Version names are
created by inserting a timestamp between the file name and its extension.
```
9 file.txt
8 file-v2023-07-17-161032-000.txt
16 file-v2023-06-15-141003-000.txt
```
If there are real files present with the same names as versions, then the
behaviour of `--s3-versions` can be unpredictable.

### Cleanup

If you run `rclone cleanup s3:bucket` then it will remove all pending
multipart uploads older than 24 hours. You can use the `--interactive`/`-i`
or `--dry-run` flag to see exactly what it will do. If you want more control
over the expiry date then run `rclone backend cleanup s3:bucket -o max-age=1h`
to expire all uploads older than one hour. You can use `rclone backend
list-multipart-uploads s3:bucket` to see the pending multipart
uploads.
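
For example, to preview what would be removed without deleting anything (the
bucket name is a placeholder):

    rclone --dry-run backend cleanup s3:bucket -o max-age=1h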

### Restricted filename characters

S3 allows any valid UTF-8 string as a key.

Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8), as
they can't be used in XML.

The following characters are replaced since these are problematic when
dealing with the REST API:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |

The encoding will also encode these file names as they don't seem to
work with the SDK properly:

| File name | Replacement |
| --------- |:-----------:|
| .         | ．          |
| ..        | ．．        |

### Multipart uploads

rclone supports multipart uploads with S3 which means that it can
upload files bigger than 5 GiB.

Note that files uploaded *both* with multipart upload *and* through
crypt remotes do not have MD5 sums.

rclone switches from single part uploads to multipart uploads at the
point specified by `--s3-upload-cutoff`. This can be a maximum of 5 GiB
and a minimum of 0 (ie always upload multipart files).

The chunk sizes used in the multipart upload are specified by
`--s3-chunk-size` and the number of chunks uploaded concurrently is
specified by `--s3-upload-concurrency`.

Multipart uploads will use `--transfers` * `--s3-upload-concurrency` *
`--s3-chunk-size` extra memory. Single part uploads do not use extra
memory.

Single part transfers can be faster than multipart transfers or slower
depending on your latency from S3 - the more latency, the more likely
single part transfers will be faster.

Increasing `--s3-upload-concurrency` will increase throughput (8 would
be a sensible value) and increasing `--s3-chunk-size` also increases
throughput (16M would be sensible). Increasing either of these will
use more memory. The default values are high enough to gain most of
the possible performance without using too much memory.
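
As a sketch using the values suggested above (the paths are placeholders), a
large upload tuned for throughput might look like:

    rclone copy --s3-chunk-size 16M --s3-upload-concurrency 8 /path/to/source remote:bucket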

### Buckets and Regions

With Amazon S3 you can list buckets (`rclone lsd`) using any region,
but you can only access the content of a bucket from the region it was
created in. If you attempt to access a bucket from the wrong region,
you will get an error, `incorrect region, the bucket is not in 'XXX'
region`.

### Authentication

There are a number of ways to supply `rclone` with a set of AWS
credentials, with and without using the environment.

The different authentication methods are tried in this order:

- Directly in the rclone configuration file (`env_auth = false` in the config file):
    - `access_key_id` and `secret_access_key` are required.
    - `session_token` can be optionally set when using AWS STS.
- Runtime configuration (`env_auth = true` in the config file):
    - Export the following environment variables before running `rclone`:
        - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
        - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
        - Session Token: `AWS_SESSION_TOKEN` (optional)
    - Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html):
        - Profile files are standard files used by AWS CLI tools.
        - By default it will use the profile in your home directory (e.g. `~/.aws/credentials` on unix based systems) and the "default" profile. To change this, set these environment variables or config keys:
            - `AWS_SHARED_CREDENTIALS_FILE` to control which file, or the `shared_credentials_file` config key.
            - `AWS_PROFILE` to control which profile to use, or the `profile` config key.
    - Or, run `rclone` in an ECS task with an IAM role (AWS only).
    - Or, run `rclone` on an EC2 instance with an IAM role (AWS only).
    - Or, run `rclone` in an EKS pod with an IAM role that is associated with a service account (AWS only).
    - Or, use [process credentials](https://docs.aws.amazon.com/sdkref/latest/guide/feature-process-credentials.html) to read config from an external program.

With `env_auth = true` rclone (which uses the SDK for Go v2) should support
[all authentication methods](https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html)
that the `aws` CLI tool does and the other AWS SDKs.

If none of these options actually ends up providing `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see the
[anonymous access](#anonymous-access) section for more info).
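
As a minimal sketch (the remote name is illustrative), a remote that leaves
credential lookup to the environment could be configured like this:

```
[remote]
type = s3
provider = AWS
env_auth = true
```

You could then pick a profile at run time with, for example,
`AWS_PROFILE=myprofile rclone lsd remote:` (the profile name is a placeholder).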

### S3 Permissions

When using the `sync` subcommand of `rclone` the following minimum
permissions are required to be available on the bucket being written to:

* `ListBucket`
* `DeleteObject`
* `GetObject`
* `PutObject`
* `PutObjectACL`
* `CreateBucket` (unless using [s3-no-check-bucket](#s3-no-check-bucket))

When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required.

Example policy:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
},
"Action": [
"s3:ListBucket",
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::BUCKET_NAME/*",
"arn:aws:s3:::BUCKET_NAME"
]
},
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
}
]
}
```

Notes on above:

1. This is a policy that can be used when creating a bucket. It assumes
   that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
   the bucket and the other implies the bucket's objects.
3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.

For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.

### Key Management System (KMS)

If you are using server-side encryption with KMS then you must make
sure rclone is configured with `server_side_encryption = aws:kms`
otherwise you will find you can't transfer small objects - these will
create checksum errors.
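
A minimal sketch of the relevant configuration (the remote name and key ARN
are illustrative; `sse_kms_key_id` is only needed if you want a specific key):

```
[remote]
type = s3
provider = AWS
server_side_encryption = aws:kms
sse_kms_key_id = arn:aws:kms:us-east-1:*
```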

### Glacier and Glacier Deep Archive

You can upload objects using the glacier storage class or transition them to glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
The bucket can still be synced or copied into normally, but if rclone
tries to access data from the glacier storage class you will see an error like below.

    2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
the object(s) in question before using rclone.

Note that rclone only speaks the S3 API it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.
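
If your rclone version includes the s3 backend `restore` command, one way to
request a restore from rclone itself is shown below (the path, priority and
lifetime values are only an example):

    rclone backend restore remote:bucket/path/to/file -o priority=Standard -o lifetime=1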

### Object-lock enabled S3 bucket

According to AWS's [documentation on S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-permission):

> If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.

As mentioned in the [Modification times and hashes](#modification-times-and-hashes) section,
small files that are not uploaded as multipart use a different tag, causing the upload to fail.
A simple solution is to set `--s3-upload-cutoff 0` and force all the files to be uploaded as multipart.
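
For example (the source path is a placeholder):

    rclone copy --s3-upload-cutoff 0 /path/to/source remote:bucket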

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}

### Standard options

Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-provider
Choose your S3 provider.
Properties:
- Config: provider
- Env Var: RCLONE_S3_PROVIDER
- Type: string
- Required: false
- Examples:
- "AWS"
- Amazon Web Services (AWS) S3
- "Alibaba"
- Alibaba Cloud Object Storage System (OSS) formerly Aliyun
- "ArvanCloud"
- Arvan Cloud Object Storage (AOS)
- "Ceph"
- Ceph Object Storage
- "ChinaMobile"
- China Mobile Ecloud Elastic Object Storage (EOS)
- "Cloudflare"
- Cloudflare R2 Storage
- "DigitalOcean"
- DigitalOcean Spaces
- "Dreamhost"
- Dreamhost DreamObjects
- "GCS"
- Google Cloud Storage
- "HuaweiOBS"
- Huawei Object Storage Service
- "IBMCOS"
- IBM COS S3
- "IDrive"
- IDrive e2
- "IONOS"
- IONOS Cloud
- "LyveCloud"
- Seagate Lyve Cloud
- "Leviia"
- Leviia Object Storage
- "Liara"
- Liara Object Storage
- "Linode"
- Linode Object Storage
- "Magalu"
- Magalu Object Storage
- "Minio"
- Minio Object Storage
- "Netease"
- Netease Object Storage (NOS)
- "Petabox"
- Petabox Object Storage
- "RackCorp"
- RackCorp Object Storage
- "Rclone"
- Rclone S3 Server
- "Scaleway"
- Scaleway Object Storage
- "SeaweedFS"
- SeaweedFS S3
- "StackPath"
- StackPath Object Storage
- "Storj"
- Storj (S3 Compatible Gateway)
- "Synology"
- Synology C2 Object Storage
- "TencentCOS"
- Tencent Cloud Object Storage (COS)
- "Wasabi"
- Wasabi Object Storage
- "Qiniu"
- Qiniu Object Storage (Kodo)
- "Other"
- Any other S3 compatible provider
#### --s3-env-auth
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Properties:
- Config: env_auth
- Env Var: RCLONE_S3_ENV_AUTH
- Type: bool
- Default: false
- Examples:
- "false"
- Enter AWS credentials in the next step.
- "true"
- Get AWS credentials from the environment (env vars or IAM).
#### --s3-access-key-id
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Properties:
- Config: access_key_id
- Env Var: RCLONE_S3_ACCESS_KEY_ID
- Type: string
- Required: false
#### --s3-secret-access-key
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Properties:
- Config: secret_access_key
- Env Var: RCLONE_S3_SECRET_ACCESS_KEY
- Type: string
- Required: false
#### --s3-region
Region to connect to.
Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: AWS
- Type: string
- Required: false
- Examples:
- "us-east-1"
- The default endpoint - a good choice if you are unsure.
- US Region, Northern Virginia, or Pacific Northwest.
- Leave location constraint empty.
- "us-east-2"
- US East (Ohio) Region.
- Needs location constraint us-east-2.
- "us-west-1"
- US West (Northern California) Region.
- Needs location constraint us-west-1.
- "us-west-2"
- US West (Oregon) Region.
- Needs location constraint us-west-2.
- "ca-central-1"
- Canada (Central) Region.
- Needs location constraint ca-central-1.
- "eu-west-1"
- EU (Ireland) Region.
- Needs location constraint EU or eu-west-1.
- "eu-west-2"
- EU (London) Region.
- Needs location constraint eu-west-2.
- "eu-west-3"
- EU (Paris) Region.
- Needs location constraint eu-west-3.
- "eu-north-1"
- EU (Stockholm) Region.
- Needs location constraint eu-north-1.
- "eu-south-1"
- EU (Milan) Region.
- Needs location constraint eu-south-1.
- "eu-central-1"
- EU (Frankfurt) Region.
- Needs location constraint eu-central-1.
- "ap-southeast-1"
- Asia Pacific (Singapore) Region.
- Needs location constraint ap-southeast-1.
- "ap-southeast-2"
- Asia Pacific (Sydney) Region.
- Needs location constraint ap-southeast-2.
- "ap-northeast-1"
- Asia Pacific (Tokyo) Region.
- Needs location constraint ap-northeast-1.
- "ap-northeast-2"
- Asia Pacific (Seoul).
- Needs location constraint ap-northeast-2.
- "ap-northeast-3"
- Asia Pacific (Osaka-Local).
- Needs location constraint ap-northeast-3.
- "ap-south-1"
- Asia Pacific (Mumbai).
- Needs location constraint ap-south-1.
- "ap-east-1"
- Asia Pacific (Hong Kong) Region.
- Needs location constraint ap-east-1.
- "sa-east-1"
- South America (Sao Paulo) Region.
- Needs location constraint sa-east-1.
- "il-central-1"
- Israel (Tel Aviv) Region.
- Needs location constraint il-central-1.
- "me-south-1"
- Middle East (Bahrain) Region.
- Needs location constraint me-south-1.
- "af-south-1"
- Africa (Cape Town) Region.
- Needs location constraint af-south-1.
- "cn-north-1"
- China (Beijing) Region.
- Needs location constraint cn-north-1.
- "cn-northwest-1"
- China (Ningxia) Region.
- Needs location constraint cn-northwest-1.
- "us-gov-east-1"
- AWS GovCloud (US-East) Region.
- Needs location constraint us-gov-east-1.
- "us-gov-west-1"
- AWS GovCloud (US) Region.
- Needs location constraint us-gov-west-1.
#### --s3-endpoint
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: AWS
- Type: string
- Required: false
#### --s3-location-constraint
Location constraint - must be set to match the Region.
Used when creating buckets only.
Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: AWS
- Type: string
- Required: false
- Examples:
- ""
- Empty for US Region, Northern Virginia, or Pacific Northwest
- "us-east-2"
- US East (Ohio) Region
- "us-west-1"
- US West (Northern California) Region
- "us-west-2"
- US West (Oregon) Region
- "ca-central-1"
- Canada (Central) Region
- "eu-west-1"
- EU (Ireland) Region
- "eu-west-2"
- EU (London) Region
- "eu-west-3"
- EU (Paris) Region
- "eu-north-1"
- EU (Stockholm) Region
- "eu-south-1"
- EU (Milan) Region
- "EU"
- EU Region
- "ap-southeast-1"
- Asia Pacific (Singapore) Region
- "ap-southeast-2"
- Asia Pacific (Sydney) Region
- "ap-northeast-1"
- Asia Pacific (Tokyo) Region
- "ap-northeast-2"
- Asia Pacific (Seoul) Region
- "ap-northeast-3"
- Asia Pacific (Osaka-Local) Region
- "ap-south-1"
- Asia Pacific (Mumbai) Region
- "ap-east-1"
- Asia Pacific (Hong Kong) Region
- "sa-east-1"
- South America (Sao Paulo) Region
- "il-central-1"
- Israel (Tel Aviv) Region
- "me-south-1"
- Middle East (Bahrain) Region
- "af-south-1"
- Africa (Cape Town) Region
- "cn-north-1"
- China (Beijing) Region
- "cn-northwest-1"
- China (Ningxia) Region
- "us-gov-east-1"
- AWS GovCloud (US-East) Region
- "us-gov-west-1"
- AWS GovCloud (US) Region
#### --s3-acl
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
- Provider: !Storj,Synology,Cloudflare
- Type: string
- Required: false
- Examples:
- "default"
- Owner gets Full_CONTROL.
- No one else has access rights (default).
- "private"
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
- "public-read"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ access.
- "public-read-write"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ and WRITE access.
- Granting this on a bucket is generally not recommended.
- "authenticated-read"
- Owner gets FULL_CONTROL.
- The AuthenticatedUsers group gets READ access.
- "bucket-owner-read"
- Object owner gets FULL_CONTROL.
- Bucket owner gets READ access.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- "bucket-owner-full-control"
- Both the object owner and the bucket owner get FULL_CONTROL over the object.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- "private"
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
- This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS.
- "public-read"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ access.
- This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS.
- "public-read-write"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ and WRITE access.
- This acl is available on IBM Cloud (Infra), On-Premise IBM COS.
- "authenticated-read"
- Owner gets FULL_CONTROL.
- The AuthenticatedUsers group gets READ access.
- Not supported on Buckets.
- This acl is available on IBM Cloud (Infra) and On-Premise IBM COS.
#### --s3-server-side-encryption
The server-side encryption algorithm used when storing this object in S3.
Properties:
- Config: server_side_encryption
- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
- ""
- None
- "AES256"
- AES256
- "aws:kms"
- aws:kms
#### --s3-sse-kms-key-id
If using KMS ID you must provide the ARN of Key.
Properties:
- Config: sse_kms_key_id
- Env Var: RCLONE_S3_SSE_KMS_KEY_ID
- Provider: AWS,Ceph,Minio
- Type: string
- Required: false
- Examples:
- ""
- None
- "arn:aws:kms:us-east-1:*"
- arn:aws:kms:*
#### --s3-storage-class
The storage class to use when storing new objects in S3.
Properties:
- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Provider: AWS
- Type: string
- Required: false
- Examples:
- ""
- Default
- "STANDARD"
- Standard storage class
- "REDUCED_REDUNDANCY"
- Reduced redundancy storage class
- "STANDARD_IA"
- Standard Infrequent Access storage class
- "ONEZONE_IA"
- One Zone Infrequent Access storage class
- "GLACIER"
- Glacier storage class
- "DEEP_ARCHIVE"
- Glacier Deep Archive storage class
- "INTELLIGENT_TIERING"
- Intelligent-Tiering storage class
- "GLACIER_IR"
- Glacier Instant Retrieval storage class
### Advanced options
Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-bucket-acl
Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied only when creating buckets. If it
isn't set then "acl" is used instead.
If the "acl" and "bucket_acl" are empty strings then no X-Amz-Acl:
header is added and the default (private) will be used.
Properties:
- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
- Type: string
- Required: false
- Examples:
- "private"
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
- "public-read"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ access.
- "public-read-write"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ and WRITE access.
- Granting this on a bucket is generally not recommended.
- "authenticated-read"
- Owner gets FULL_CONTROL.
- The AuthenticatedUsers group gets READ access.
#### --s3-requester-pays
Enables requester pays option when interacting with S3 bucket.
Properties:
- Config: requester_pays
- Env Var: RCLONE_S3_REQUESTER_PAYS
- Provider: AWS
- Type: bool
- Default: false
#### --s3-sse-customer-algorithm
If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
Properties:
- Config: sse_customer_algorithm
- Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
- ""
- None
- "AES256"
- AES256
#### --s3-sse-customer-key
To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data.
Alternatively you can provide --sse-customer-key-base64.
2020-05-22 13:22:52 +02:00
2022-03-18 13:29:54 +01:00
Properties:
2020-05-22 13:22:52 +02:00
- Config: sse_customer_key
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
- ""
- None
#### --s3-sse-customer-key-base64
If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data.
Alternatively you can provide --sse-customer-key.
Properties:
- Config: sse_customer_key_base64
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_BASE64
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
- ""
- None
#### --s3-sse-customer-key-md5
If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
Properties:
- Config: sse_customer_key_md5
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
- ""
- None
#### --s3-upload-cutoff
Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5 GiB.
Properties:
- Config: upload_cutoff
- Env Var: RCLONE_S3_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200Mi
#### --s3-chunk-size
Chunk size to use for uploading.
When uploading files larger than upload_cutoff or files with unknown
size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
photos or google docs) they will be uploaded as multipart uploads
using this chunk size.
2018-10-01 21:48:54 +02:00
Note that "--s3-upload-concurrency" chunks of this size are buffered
in memory per transfer.
2020-10-13 23:50:53 +02:00
If you are transferring large files over high-speed links and you have
2018-10-01 21:48:54 +02:00
enough memory, then increasing this will speed up the transfers.
2020-02-01 11:31:42 +01:00
Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured
2021-03-02 20:11:57 +01:00
chunk_size. Since the default chunk size is 5 MiB and there can be at
2020-02-01 11:31:42 +01:00
most 10,000 chunks, this means that by default the maximum size of
2021-03-02 20:11:57 +01:00
a file you can stream upload is 48 GiB. If you wish to stream upload
2020-02-01 11:31:42 +01:00
larger files then you will need to increase chunk_size.
Increasing the chunk size decreases the accuracy of the progress
statistics displayed with the "-P" flag. Rclone treats a chunk as sent
once it has been buffered by the AWS SDK, when in fact it may still be
uploading, so a bigger chunk size means a bigger AWS SDK buffer and
progress reporting that deviates further from the truth.

Properties:
2018-10-01 21:48:54 +02:00
- Config: chunk_size
- Env Var: RCLONE_S3_CHUNK_SIZE
- Type: SizeSuffix
2021-07-20 20:45:41 +02:00
- Default: 5Mi
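
For example, if you need to stream upload files larger than the default
48 GiB ceiling described above, you could raise the chunk size; the
numbers below are just one illustrative choice and the remote name is
hypothetical:

```
# 64 MiB chunks * 10,000 parts allows streamed uploads of up to ~625 GiB,
# at the cost of more memory (upload_concurrency * chunk_size per transfer).
rclone rcat s3remote:mybucket/backup.tar --s3-chunk-size 64Mi < backup.tar
```
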
#### --s3-max-upload-parts
Maximum number of parts in a multipart upload.
This option defines the maximum number of multipart chunks to use
when doing a multipart upload.
This can be useful if a service does not support the AWS S3
specification of 10,000 chunks.
Rclone will automatically increase the chunk size when uploading a
large file of a known size to stay below this limit on the number of chunks.

Properties:
2020-09-02 17:59:04 +02:00
- Config: max_upload_parts
- Env Var: RCLONE_S3_MAX_UPLOAD_PARTS
- Type: int
- Default: 10000
2020-02-01 11:31:42 +01:00
#### --s3-copy-cutoff
2021-11-01 16:42:05 +01:00
Cutoff for switching to multipart copy.
Any files larger than this that need to be server-side copied will be
2020-02-01 11:31:42 +01:00
copied in chunks of this size.
2021-03-02 20:11:57 +01:00
The minimum is 0 and the maximum is 5 GiB.
Properties:
2020-02-01 11:31:42 +01:00
- Config: copy_cutoff
- Env Var: RCLONE_S3_COPY_CUTOFF
- Type: SizeSuffix
2021-07-20 20:45:41 +02:00
- Default: 4.656Gi
#### --s3-disable-checksum
2021-11-01 16:42:05 +01:00
Don't store MD5 checksum with object metadata.
Normally rclone will calculate the MD5 checksum of the input before
uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.
2022-03-18 13:29:54 +01:00
Properties:
2018-10-01 21:48:54 +02:00
- Config: disable_checksum
- Env Var: RCLONE_S3_DISABLE_CHECKSUM
- Type: bool
- Default: false
2020-09-02 17:59:04 +02:00
#### --s3-shared-credentials-file
2021-11-01 16:42:05 +01:00
Path to the shared credentials file.
2020-09-02 17:59:04 +02:00
If env_auth = true then rclone can use a shared credentials file.
If this variable is empty rclone will look for the
"AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty
it will default to the current user's home directory.
Linux/OSX: "$HOME/.aws/credentials"
Windows: "%USERPROFILE%\.aws\credentials"
2022-03-18 13:29:54 +01:00
Properties:
2020-09-02 17:59:04 +02:00
- Config: shared_credentials_file
- Env Var: RCLONE_S3_SHARED_CREDENTIALS_FILE
- Type: string
2022-03-18 13:29:54 +01:00
- Required: false
2020-09-02 17:59:04 +02:00
#### --s3-profile
2021-11-01 16:42:05 +01:00
Profile to use in the shared credentials file.
2020-09-02 17:59:04 +02:00
If env_auth = true then rclone can use a shared credentials file. This
variable controls which profile is used in that file.
If empty it will default to the environment variable "AWS_PROFILE" or
"default" if that environment variable is also not set.
2022-03-18 13:29:54 +01:00
Properties:
2020-09-02 17:59:04 +02:00
- Config: profile
- Env Var: RCLONE_S3_PROFILE
- Type: string
2022-03-18 13:29:54 +01:00
- Required: false
#### --s3-session-token
2021-11-01 16:42:05 +01:00
An AWS session token.
Properties:
2018-10-01 21:48:54 +02:00
- Config: session_token
- Env Var: RCLONE_S3_SESSION_TOKEN
- Type: string
2022-03-18 13:29:54 +01:00
- Required: false
2018-10-01 21:48:54 +02:00
#### --s3-upload-concurrency
2024-03-10 12:22:43 +01:00
Concurrency for multipart uploads and copies.
2018-10-01 21:48:54 +02:00
This is the number of chunks of the same file that are uploaded
2024-03-10 12:22:43 +01:00
concurrently for multipart uploads and copies.
If you are uploading small numbers of large files over high-speed links
2018-10-01 21:48:54 +02:00
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
2022-03-18 13:29:54 +01:00
Properties:
2018-10-01 21:48:54 +02:00
- Config: upload_concurrency
- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
- Type: int
2018-12-02 18:51:14 +01:00
- Default: 4
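
As a rough sketch (remote name and numbers are hypothetical), a higher
concurrency combined with a larger chunk size can make better use of a
fast link, at the cost of roughly concurrency * chunk_size of memory
per transfer:

```
# 8 concurrent 16 MiB chunks buffer roughly 128 MiB per transfer.
rclone copy /data/bigfile s3remote:mybucket \
  --s3-upload-concurrency 8 --s3-chunk-size 16Mi
```
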
#### --s3-force-path-style
If true use path style access, if false use virtual hosted style.
If this is true (the default) then rclone will use path style access,
if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.
2021-02-02 14:42:35 +01:00
Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this set to
2020-02-01 11:31:42 +01:00
false - rclone will do this automatically based on the provider
setting.
Note that if your bucket isn't a valid DNS name, i.e. has '.' or '_' in it,
you'll need to set this to true.

Properties:
2018-10-01 21:48:54 +02:00
- Config: force_path_style
- Env Var: RCLONE_S3_FORCE_PATH_STYLE
- Type: bool
- Default: true
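
If you use a provider that rclone does not recognise and that only
accepts virtual hosted style requests, you could override the default
in the config; a hypothetical example:

```
[example]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = YYY
# Hypothetical service that rejects path style requests.
endpoint = s3.example.com
force_path_style = false
```
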
#### --s3-v2-auth
If true use v2 authentication.
If this is false (the default) then rclone will use v4 authentication.
If it is set then rclone will use v2 authentication.
2020-10-13 23:49:58 +02:00
Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
Properties:
2018-10-09 14:03:37 +02:00
- Config: v2_auth
- Env Var: RCLONE_S3_V2_AUTH
- Type: bool
- Default: false
2024-03-10 12:22:43 +01:00
#### --s3-use-dual-stack
If true use AWS S3 dual-stack endpoint (IPv6 support).
See [AWS Docs on Dualstack Endpoints ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html )
Properties:
- Config: use_dual_stack
- Env Var: RCLONE_S3_USE_DUAL_STACK
- Type: bool
- Default: false
2019-06-15 13:01:29 +02:00
#### --s3-use-accelerate-endpoint
If true use the AWS S3 accelerated endpoint.
See: [AWS S3 Transfer acceleration ](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html )
2022-03-18 13:29:54 +01:00
Properties:
2019-06-15 13:01:29 +02:00
- Config: use_accelerate_endpoint
- Env Var: RCLONE_S3_USE_ACCELERATE_ENDPOINT
2022-03-18 13:29:54 +01:00
- Provider: AWS
2019-06-15 13:01:29 +02:00
- Type: bool
- Default: false
2019-07-24 11:03:38 +02:00
#### --s3-leave-parts-on-error
If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
It should be set to true for resuming uploads across different sessions.
WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.
2022-03-18 13:29:54 +01:00
Properties:
2019-07-24 11:03:38 +02:00
- Config: leave_parts_on_error
- Env Var: RCLONE_S3_LEAVE_PARTS_ON_ERROR
2022-03-18 13:29:54 +01:00
- Provider: AWS
2019-07-24 11:03:38 +02:00
- Type: bool
- Default: false
2019-12-26 12:05:00 +01:00
#### --s3-list-chunk
Size of listing chunk (response list for each ListObject S3 request).
This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification.
Most services truncate the response list to 1000 objects even if more than that is requested.
In AWS S3 this is a global maximum and cannot be changed, see [AWS S3 ](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html ).
In Ceph, this can be increased with the "rgw list buckets max chunk" option.
2022-03-18 13:29:54 +01:00
Properties:
2019-12-26 12:05:00 +01:00
- Config: list_chunk
- Env Var: RCLONE_S3_LIST_CHUNK
- Type: int
- Default: 1000
2022-03-18 13:29:54 +01:00
#### --s3-list-version
Version of ListObjects to use: 1, 2 or 0 for auto.
When S3 originally launched it only provided the ListObjects call to
enumerate objects in a bucket.
However in May 2016 the ListObjectsV2 call was introduced. This is
much higher performance and should be used if at all possible.
If set to the default, 0, rclone will guess according to the provider
set which list objects method to call. If it guesses wrong, then it
may be set manually here.
Properties:
- Config: list_version
- Env Var: RCLONE_S3_LIST_VERSION
- Type: int
- Default: 0
#### --s3-list-url-encode
Whether to URL encode listings: true/false/unset
Some providers support URL encoding listings and where this is
available this is more reliable when using control characters in file
names. If this is set to unset (the default) then rclone will choose
according to the provider setting what to apply, but you can override
rclone's choice here.
Properties:
- Config: list_url_encode
- Env Var: RCLONE_S3_LIST_URL_ENCODE
- Type: Tristate
- Default: unset
2020-09-02 17:59:04 +02:00
#### --s3-no-check-bucket
2021-11-01 16:42:05 +01:00
If set, don't attempt to check the bucket exists or create it.
2020-09-02 17:59:04 +02:00
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
2021-02-02 14:42:35 +01:00
It can also be needed if the user you are using does not have bucket
creation permissions. Before v1.52.0 this would have passed silently
due to a bug.
Properties:
2020-09-02 17:59:04 +02:00
- Config: no_check_bucket
- Env Var: RCLONE_S3_NO_CHECK_BUCKET
- Type: bool
- Default: false
2021-02-02 14:42:35 +01:00
#### --s3-no-head
2021-11-01 16:42:05 +01:00
If set, don't HEAD uploaded objects to check integrity.
2021-02-02 14:42:35 +01:00
This can be useful when trying to minimise the number of transactions
rclone does.
Setting it means that if rclone receives a 200 OK message after
uploading an object with PUT then it will assume that it got uploaded
properly.
In particular it will assume:
- the metadata, including modtime, storage class and content type was as uploaded
- the size was as uploaded
It reads the following items from the response for a single part PUT:
- the MD5SUM
- The uploaded date
For multipart uploads these items aren't read.
If a source object of unknown length is uploaded then rclone **will** do a
HEAD request.
Setting this flag increases the chance for undetected upload failures,
in particular an incorrect size, so it isn't recommended for normal
operation. In practice the chance of an undetected upload failure is
very small even with this flag.
2022-03-18 13:29:54 +01:00
Properties:
2021-02-02 14:42:35 +01:00
- Config: no_head
- Env Var: RCLONE_S3_NO_HEAD
- Type: bool
- Default: false
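
To illustrate how these transaction-minimising flags are typically
combined (remote and bucket names are hypothetical):

```
# Skip the bucket existence check and the post-upload HEAD on a bucket
# that is known to exist already.
rclone copy /path/to/files s3remote:existing-bucket \
  --s3-no-check-bucket --s3-no-head
```
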
#### --s3-no-head-object
2021-11-01 16:42:05 +01:00
If set, do not do HEAD before GET when getting objects.
Properties:
2021-07-20 20:45:41 +02:00
- Config: no_head_object
- Env Var: RCLONE_S3_NO_HEAD_OBJECT
- Type: bool
- Default: false
2020-02-01 11:31:42 +01:00
#### --s3-encoding
2022-03-18 13:29:54 +01:00
The encoding for the backend.
See the [encoding section in the overview ](/overview/#encoding ) for more info.
Properties:
2020-02-01 11:31:42 +01:00
- Config: encoding
- Env Var: RCLONE_S3_ENCODING
2023-11-26 16:59:12 +01:00
- Type: Encoding
2020-02-01 11:31:42 +01:00
- Default: Slash,InvalidUtf8,Dot
2020-05-22 13:22:52 +02:00
#### --s3-memory-pool-flush-time
2023-09-11 16:59:44 +02:00
How often internal memory buffer pools will be flushed. (no longer used)
Properties:
2020-05-22 13:22:52 +02:00
- Config: memory_pool_flush_time
- Env Var: RCLONE_S3_MEMORY_POOL_FLUSH_TIME
- Type: Duration
- Default: 1m0s
#### --s3-memory-pool-use-mmap
2023-09-11 16:59:44 +02:00
Whether to use mmap buffers in internal memory pool. (no longer used)
Properties:
2020-05-22 13:22:52 +02:00
- Config: memory_pool_use_mmap
- Env Var: RCLONE_S3_MEMORY_POOL_USE_MMAP
- Type: bool
- Default: false
2021-02-02 14:42:35 +01:00
#### --s3-disable-http2
2021-11-01 16:42:05 +01:00
Disable usage of http2 for S3 backends.
2021-02-02 14:42:35 +01:00
There is currently an unsolved issue with the s3 (specifically minio) backend
and HTTP/2. HTTP/2 is enabled by default for the s3 backend but can be
disabled here. When the issue is solved this flag will be removed.
See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rclone/issues/3631
2022-03-18 13:29:54 +01:00
Properties:
2021-02-02 14:42:35 +01:00
- Config: disable_http2
- Env Var: RCLONE_S3_DISABLE_HTTP2
- Type: bool
- Default: false
2021-11-01 16:42:05 +01:00
#### --s3-download-url
Custom endpoint for downloads.
This is usually set to a CloudFront CDN URL as AWS S3 offers
cheaper egress for data downloaded through the CloudFront network.
2022-03-18 13:29:54 +01:00
Properties:
2021-11-01 16:42:05 +01:00
- Config: download_url
- Env Var: RCLONE_S3_DOWNLOAD_URL
- Type: string
2022-03-18 13:29:54 +01:00
- Required: false
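
A sketch of a remote that downloads through a CloudFront distribution
(the distribution domain is a placeholder):

```
[s3cdn]
type = s3
provider = AWS
env_auth = true
region = us-east-1
# Downloads are fetched from the CDN; uploads still go directly to S3.
download_url = https://d111111abcdef8.cloudfront.net
```
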
#### --s3-directory-markers
Upload an empty object with a trailing slash when a new directory is created.

Empty folders are unsupported for bucket-based remotes, so this option creates
an empty object ending with "/" to persist the folder.

Properties:
- Config: directory_markers
- Env Var: RCLONE_S3_DIRECTORY_MARKERS
- Type: bool
- Default: false
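
For example, with this option enabled an otherwise empty directory can
be made visible in listings by creating it explicitly (bucket and path
are hypothetical):

```
rclone mkdir --s3-directory-markers s3remote:mybucket/empty-dir
```
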
#### --s3-use-multipart-etag
Whether to use ETag in multipart uploads for verification
This should be true, false or left unset to use the default for the provider.
Properties:
- Config: use_multipart_etag
- Env Var: RCLONE_S3_USE_MULTIPART_ETAG
- Type: Tristate
- Default: unset
#### --s3-use-unsigned-payload
Whether to use an unsigned payload in PutObject
Rclone has to avoid the AWS SDK seeking the body when calling
PutObject. The AWS provider can add checksums in the trailer to avoid
seeking but other providers can't.
This should be true, false or left unset to use the default for the provider.
Properties:
- Config: use_unsigned_payload
- Env Var: RCLONE_S3_USE_UNSIGNED_PAYLOAD
- Type: Tristate
- Default: unset
2022-07-09 19:08:20 +02:00
#### --s3-use-presigned-request
Whether to use a presigned request or PutObject for single part uploads
If this is false rclone will use PutObject from the AWS SDK to upload
an object.
Versions of rclone < 1.59 use presigned requests to upload a single
part object and setting this flag to true will re-enable that
functionality. This shouldn't be necessary except in exceptional
circumstances or for testing.
Properties:
- Config: use_presigned_request
- Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST
- Type: bool
- Default: false
2022-07-25 17:06:15 +02:00
#### --s3-versions
Include old versions in directory listings.
Properties:
- Config: versions
- Env Var: RCLONE_S3_VERSIONS
- Type: bool
- Default: false
2022-07-26 18:58:57 +02:00
#### --s3-version-at
Show file versions as they were at the specified time.
The parameter should be a date, "2006-01-02", datetime "2006-01-02
15:04:05" or a duration for that long ago, eg "100d" or "1h".
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
See [the time option docs ](/docs/#time-option ) for valid formats.
Properties:
- Config: version_at
- Env Var: RCLONE_S3_VERSION_AT
- Type: Time
- Default: off
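
For example, to list a bucket as it was at a specific point in time, or
24 hours ago (the remote name is hypothetical):

```
rclone ls --s3-version-at "2024-01-02 15:04:05" s3remote:mybucket
rclone ls --s3-version-at 24h s3remote:mybucket
```
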
#### --s3-version-deleted
Show deleted file markers when using versions.
This shows deleted file markers in the listing when using versions. These will appear
as 0 size files. The only operation which can be performed on them is deletion.
Deleting a delete marker will reveal the previous version.
Deleted files will always show with a timestamp.
Properties:
- Config: version_deleted
- Env Var: RCLONE_S3_VERSION_DELETED
- Type: bool
- Default: false
2022-10-21 16:06:08 +02:00
#### --s3-decompress
If set this will decompress gzip encoded objects.
It is possible to upload objects to S3 with "Content-Encoding: gzip"
set. Normally rclone will download these files as compressed objects.
If this flag is set then rclone will decompress these files with
"Content-Encoding: gzip" as they are received. This means that rclone
can't check the size and hash but the file contents will be decompressed.
Properties:
- Config: decompress
- Env Var: RCLONE_S3_DECOMPRESS
- Type: bool
- Default: false
2022-12-20 18:16:14 +01:00
#### --s3-might-gzip
Set this if the backend might gzip objects.
Normally providers will not alter objects when they are downloaded. If
an object was not uploaded with `Content-Encoding: gzip` then it won't
be set on download.
However some providers may gzip objects even if they weren't uploaded
with `Content-Encoding: gzip` (eg Cloudflare).
A symptom of this would be receiving errors like
ERROR corrupted on transfer: sizes differ NNN vs MMM
If you set this flag and rclone downloads an object with
Content-Encoding: gzip set and chunked transfer encoding, then rclone
will decompress the object on the fly.
If this is set to unset (the default) then rclone will choose
according to the provider setting what to apply, but you can override
rclone's choice here.
Properties:
- Config: might_gzip
- Env Var: RCLONE_S3_MIGHT_GZIP
- Type: Tristate
- Default: unset
2023-06-30 15:11:17 +02:00
#### --s3-use-accept-encoding-gzip
Whether to send `Accept-Encoding: gzip` header.
By default, rclone will append `Accept-Encoding: gzip` to the request to download
compressed objects whenever possible.
However some providers such as Google Cloud Storage may alter the HTTP headers, breaking
the signature of the request.
A symptom of this would be receiving errors like
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided.
In this case, you might want to try disabling this option.
Properties:
- Config: use_accept_encoding_gzip
- Env Var: RCLONE_S3_USE_ACCEPT_ENCODING_GZIP
- Type: Tristate
- Default: unset
2022-10-21 16:06:08 +02:00
#### --s3-no-system-metadata
Suppress setting and reading of system metadata
Properties:
- Config: no_system_metadata
- Env Var: RCLONE_S3_NO_SYSTEM_METADATA
- Type: bool
- Default: false
2023-03-14 13:16:30 +01:00
#### --s3-sts-endpoint
2024-09-08 17:21:56 +02:00
Endpoint for STS (deprecated).
2023-03-14 13:16:30 +01:00
Leave blank if using AWS to use the default endpoint for the region.
Properties:
- Config: sts_endpoint
- Env Var: RCLONE_S3_STS_ENDPOINT
- Provider: AWS
- Type: string
- Required: false
2023-11-26 16:59:12 +01:00
#### --s3-use-already-exists
Set if rclone should report BucketAlreadyExists errors on bucket creation.
At some point during the evolution of the s3 protocol, AWS started
returning an `AlreadyOwnedByYou` error when attempting to create a
bucket that the user already owned, rather than a
`BucketAlreadyExists` error.
Unfortunately exactly what has been implemented by s3 clones is a
little inconsistent, some return `AlreadyOwnedByYou` , some return
`BucketAlreadyExists` and some return no error at all.
This is important to rclone because it ensures the bucket exists by
creating it on quite a lot of operations (unless
`--s3-no-check-bucket` is used).
If rclone knows the provider can return `AlreadyOwnedByYou` or returns
no error then it can report `BucketAlreadyExists` errors when the user
attempts to create a bucket not owned by them. Otherwise rclone
ignores the `BucketAlreadyExists` error which can lead to confusion.
This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.
Properties:
- Config: use_already_exists
- Env Var: RCLONE_S3_USE_ALREADY_EXISTS
- Type: Tristate
- Default: unset
#### --s3-use-multipart-uploads
Set if rclone should use multipart uploads.
You can change this if you want to disable the use of multipart uploads.
This shouldn't be necessary in normal operation.
This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.
Properties:
- Config: use_multipart_uploads
- Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS
- Type: Tristate
- Default: unset
2024-09-08 17:21:56 +02:00
#### --s3-sdk-log-mode
Set to debug the SDK
This can be set to a comma separated list of the following functions:
- `Signing`
- `Retries`
- `Request`
- `RequestWithBody`
- `Response`
- `ResponseWithBody`
- `DeprecatedUsage`
- `RequestEventMessage`
- `ResponseEventMessage`
Use `Off` to disable and `All` to set all log levels. You will need to
use `-vv` to see the debug level logs.
Properties:
- Config: sdk_log_mode
- Env Var: RCLONE_S3_SDK_LOG_MODE
- Type: Bits
- Default: Off
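
For example, to see how requests are being signed and retried (any
rclone command will do, the remote name is hypothetical):

```
rclone -vv --s3-sdk-log-mode Signing,Retries lsd s3remote:
```
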
#### --s3-description
2024-06-14 17:04:51 +02:00
Description of the remote.
2024-03-10 12:22:43 +01:00
Properties:
- Config: description
- Env Var: RCLONE_S3_DESCRIPTION
- Type: string
- Required: false
### Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
Here are the possible system metadata items for the s3 backend.
| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
| btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
| cache-control | Cache-Control header | string | no-cache | N |
| content-disposition | Content-Disposition header | string | inline | N |
| content-encoding | Content-Encoding header | string | gzip | N |
| content-language | Content-Language header | string | en-US | N |
| content-type | Content-Type header | string | text/plain | N |
| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
| tier | Tier of the object | string | GLACIER | **Y** |
See the [metadata ](/docs/#metadata ) docs for more info.
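
As a sketch of how this is used in practice (remote, bucket and values
are hypothetical), metadata can be read with the global `-M`/`--metadata`
flag and set on upload with `--metadata-set`:

```
# Show an object's metadata as JSON.
rclone lsjson -M --stat s3remote:mybucket/file.txt
# Upload while setting metadata.
rclone copyto /tmp/file.txt s3remote:mybucket/file.txt \
  -M --metadata-set content-type=text/plain --metadata-set cache-control=no-cache
```
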
## Backend commands
Here are the commands specific to the s3 backend.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
2022-07-09 19:08:20 +02:00
See the [backend ](/commands/rclone_backend/ ) command for more
2020-09-02 17:59:04 +02:00
info on how to pass options and arguments.
These can be run on a running backend using the rc command
2022-03-18 13:29:54 +01:00
[backend/command ](/rc/#backend-command ).
### restore
Restore objects from GLACIER or INTELLIGENT-TIERING archive tier
2020-09-02 17:59:04 +02:00
rclone backend restore remote: [options] [< arguments > +]
2024-09-08 17:21:56 +02:00
This command can be used to restore one or more objects from GLACIER to normal storage
or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.
2020-09-02 17:59:04 +02:00
Usage Examples:
2023-09-11 16:59:44 +02:00
rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
2024-09-08 17:21:56 +02:00
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags
rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
2020-09-02 17:59:04 +02:00
All the objects shown will be marked for restore, then
2023-09-11 16:59:44 +02:00
rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
2020-09-02 17:59:04 +02:00
It returns a list of status dictionaries with Remote and Status
keys. The Status will be OK if it was successful or an error message
2020-09-02 17:59:04 +02:00
if not.
[
{
"Status": "OK",
2023-09-11 16:59:44 +02:00
"Remote": "test.txt"
2020-09-02 17:59:04 +02:00
},
{
"Status": "OK",
2023-09-11 16:59:44 +02:00
"Remote": "test/file4.txt"
2020-09-02 17:59:04 +02:00
}
]
Options:
- "description": The optional description for the job.
2024-09-08 17:21:56 +02:00
- "lifetime": Lifetime of the active copy in days, ignored for INTELLIGENT-TIERING storage
2020-09-02 17:59:04 +02:00
- "priority": Priority of restore: Standard|Expedited|Bulk
### restore-status
2024-09-08 17:21:56 +02:00
Show the restore status for objects being restored from GLACIER or INTELLIGENT-TIERING storage
2023-09-11 16:59:44 +02:00
rclone backend restore-status remote: [options] [< arguments > +]
2024-09-08 17:21:56 +02:00
This command can be used to show the status for objects being restored from GLACIER to normal storage
or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.
2023-09-11 16:59:44 +02:00
Usage Examples:
rclone backend restore-status s3:bucket/path/to/object
rclone backend restore-status s3:bucket/path/to/directory
rclone backend restore-status -o all s3:bucket/path/to/directory
This command does not obey the filters.
It returns a list of status dictionaries.
[
{
"Remote": "file.txt",
"VersionID": null,
"RestoreStatus": {
"IsRestoreInProgress": true,
"RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
},
"StorageClass": "GLACIER"
},
{
"Remote": "test.pdf",
"VersionID": null,
"RestoreStatus": {
"IsRestoreInProgress": false,
"RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
},
"StorageClass": "DEEP_ARCHIVE"
2024-09-08 17:21:56 +02:00
},
{
"Remote": "test.gz",
"VersionID": null,
"RestoreStatus": {
"IsRestoreInProgress": true,
"RestoreExpiryDate": "null"
},
"StorageClass": "INTELLIGENT_TIERING"
2023-09-11 16:59:44 +02:00
}
]
Options:
- "all": if set then show all objects, not just ones with restore status
### list-multipart-uploads
2020-09-02 17:59:04 +02:00
List the unfinished multipart uploads
rclone backend list-multipart-uploads remote: [options] [< arguments > +]
This command lists the unfinished multipart uploads in JSON format.
rclone backend list-multipart-uploads s3:bucket/path/to/object
It returns a dictionary of buckets with values as lists of unfinished
multipart uploads.
You can call it with no bucket in which case it lists all buckets, with
a bucket or with a bucket and path.
{
"rclone": [
{
"Initiated": "2020-06-26T14:20:36Z",
"Initiator": {
"DisplayName": "XXX",
"ID": "arn:aws:iam::XXX:user/XXX"
},
"Key": "KEY",
"Owner": {
"DisplayName": null,
"ID": "XXX"
},
"StorageClass": "STANDARD",
"UploadId": "XXX"
}
],
"rclone-1000files": [],
"rclone-dst": []
}
### cleanup
2020-09-02 17:59:04 +02:00
Remove unfinished multipart uploads.
rclone backend cleanup remote: [options] [< arguments > +]
This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
2023-03-14 13:16:30 +01:00
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
2020-09-02 17:59:04 +02:00
rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
Options:
- "max-age": Max age of upload to delete
### cleanup-hidden
Remove old versions of files.
rclone backend cleanup-hidden remote: [options] [< arguments > +]
This command removes any old hidden versions of files
on a versions enabled bucket.
2023-03-14 13:16:30 +01:00
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
2022-07-26 16:03:32 +02:00
rclone backend cleanup-hidden s3:bucket/path/to/dir
### versioning
Set/get versioning support for a bucket.
rclone backend versioning remote: [options] [< arguments > +]
This command sets versioning support if a parameter is
passed and then returns the current versioning status for the bucket
supplied.
rclone backend versioning s3:bucket # read status only
rclone backend versioning s3:bucket Enabled
rclone backend versioning s3:bucket Suspended
It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning
has been enabled the status can't be set back to "Unversioned".
### set
Set command for updating the config parameters.
rclone backend set remote: [options] [< arguments > +]
This set command can be used to update the config parameters
for a running s3 backend.
Usage Examples:
rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X
The option keys are named as they are in the config file.
This rebuilds the connection to the s3 backend when it is called with
the new parameters. Only new parameters need be passed as the values
will default to those currently in use.
It doesn't return anything.
{{< rem autogenerated options stop > }}
### Anonymous access to public buckets {#anonymous-access}
2015-09-29 10:58:03 +02:00
If you want to use rclone to access a public bucket, configure with a
2018-04-13 17:08:00 +02:00
blank `access_key_id` and `secret_access_key` . Your config should end
up looking like this:
2015-09-29 10:58:03 +02:00
```
2018-04-13 17:08:00 +02:00
[anons3]
type = s3
provider = AWS
2015-09-29 10:58:03 +02:00
```
2020-10-13 23:49:58 +02:00
Then use it as normal with the name of the public bucket, e.g.
2015-09-29 10:58:03 +02:00
rclone lsd anons3:1000genomes
You will be able to list and copy data but not upload it.
2024-08-03 12:35:32 +02:00
You can also do this entirely on the command line
rclone lsd :s3,provider=AWS:1000genomes
## Providers
### AWS S3
This is the provider used as the main example and described in the [configuration ](#configuration ) section above.

### AWS Directory Buckets
From rclone v1.69 [Directory Buckets ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html )
are supported.
You will need to set the `directory_buckets = true` config parameter
or use `--s3-directory-buckets` .
Note that rclone cannot yet:
- Create directory buckets
- List directory buckets
See [the --s3-directory-buckets flag ](#s3-directory-buckets ) for more info
### AWS Snowball Edge
2022-05-03 18:39:01 +02:00
[AWS Snowball ](https://aws.amazon.com/snowball/ ) is a hardware
appliance used for transferring bulk data back to AWS. Its main
software interface is S3 object storage.
To use rclone with AWS Snowball Edge devices, configure as standard
for an 'S3 Compatible Service'.
If using rclone pre v1.59 be sure to set `upload_cutoff = 0` otherwise
you will run into authentication header issues as the snowball device
does not support query parameter based authentication.
With rclone v1.59 or later setting `upload_cutoff` should not be necessary.
2021-10-18 13:52:59 +02:00
eg.
```
[snowball]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = http://[IP of Snowball]:8080
upload_cutoff = 0
```
### Ceph
[Ceph ](https://ceph.com/ ) is an open-source, unified, distributed
2018-03-13 21:47:29 +01:00
storage system designed for excellent performance, reliability and
scalability. It has an S3 compatible object storage interface.
To use rclone with Ceph, configure as above but leave the region blank
and set the endpoint. You should end up with something like this in
your config:
2015-08-15 19:44:45 +02:00
```
2018-03-13 21:47:29 +01:00
[ceph]
type = s3
2018-04-13 17:08:00 +02:00
provider = Ceph
2018-03-13 21:47:29 +01:00
env_auth = false
access_key_id = XXX
secret_access_key = YYY
2018-04-12 18:05:53 +02:00
region =
2018-03-13 21:47:29 +01:00
endpoint = https://ceph.endpoint.example.com
2018-04-12 18:05:53 +02:00
location_constraint =
acl =
server_side_encryption =
storage_class =
2015-08-15 19:44:45 +02:00
```
2022-05-03 18:39:01 +02:00
If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a
version of rclone before v1.59 then you may need to supply the
parameter `--s3-upload-cutoff 0` or put this in the config file as
`upload_cutoff 0` to work around a bug which causes uploading of small
files to fail.
Note also that Ceph sometimes puts `/` in the passwords it gives
users. If you read the secret access key using the command line tools
you will get a JSON blob with the `/` escaped as `\/` . Make sure you
only write `/` in the secret access key.
Eg the dump from Ceph looks something like this (irrelevant keys
removed).
```
{
"user_id": "xxx",
"display_name": "xxxx",
"keys": [
{
"user": "xxx",
"access_key": "xxxxxx",
"secret_key": "xxxxxx\/xxxx"
}
],
}
```
Because this is a json dump, it is encoding the `/` as `\/` , so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
### Cloudflare R2 {#cloudflare-r2}
[Cloudflare R2 ](https://blog.cloudflare.com/r2-open-beta/ ) Storage
allows developers to store large amounts of unstructured data without
the costly egress bandwidth fees associated with typical cloud storage
services.
Here is an example of making a Cloudflare R2 configuration. First run:
rclone config
This will guide you through an interactive setup process.
2022-05-17 16:55:42 +02:00
Note that all buckets are private, and all are stored in the same
"auto" region. It is necessary to use Cloudflare workers to share the
content of a bucket publicly.
2022-05-06 13:09:29 +02:00
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> r2
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
2024-06-06 12:25:45 +02:00
XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Magalu, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi
2022-05-06 13:09:29 +02:00
\ (s3)
...
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
...
XX / Cloudflare R2 Storage
\ (Cloudflare)
...
provider> Cloudflare
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> SECRET_ACCESS_KEY
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
\ (auto)
region> 1
Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Enter a value. Press Enter to leave empty.
endpoint> https://ACCOUNT_ID.r2.cloudflarestorage.com
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This will leave your config looking something like:
```
[r2]
type = s3
provider = Cloudflare
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = auto
endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
acl = private
```
Now run `rclone lsf r2:` to see your buckets and `rclone lsf
r2:bucket` to look within a bucket.
2024-08-06 12:04:40 +02:00
For R2 tokens with the "Object Read & Write" permission, you may also
need to add `no_check_bucket = true` for object uploads to work
correctly.
Note that Cloudflare decompresses files uploaded with
`Content-Encoding: gzip` by default which is a deviation from what AWS
does. If this is causing a problem then upload the files with
`--header-upload "Cache-Control: no-transform"`
### Dreamhost
2018-03-13 21:47:29 +01:00
Dreamhost [DreamObjects ](https://www.dreamhost.com/cloud/storage/ ) is
an object storage system based on CEPH.
To use rclone with Dreamhost, configure as above but leave the region blank
and set the endpoint. You should end up with something like this in
your config:
```
[dreamobjects]
2018-04-13 17:08:00 +02:00
type = s3
provider = DreamHost
2018-03-13 21:47:29 +01:00
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
region =
endpoint = objects-us-west-1.dream.io
location_constraint =
acl = private
server_side_encryption =
storage_class =
```
### Google Cloud Storage
[GoogleCloudStorage ](https://cloud.google.com/storage/docs ) is an [S3-interoperable ](https://cloud.google.com/storage/docs/interoperability ) object storage service from Google Cloud Platform.
To connect to Google Cloud Storage you will need an access key and secret key. These can be retrieved by creating an [HMAC key ](https://cloud.google.com/storage/docs/authentication/managing-hmackeys ).
```
[gs]
type = s3
provider = GCS
access_key_id = your_access_key
secret_access_key = your_secret_key
endpoint = https://storage.googleapis.com
```
2023-11-21 11:30:47 +01:00
**Note** that `--s3-versions` does not work with GCS when it needs to do directory paging. Rclone will return the error:
s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker
This is Google bug [#312292516 ](https://issuetracker.google.com/u/0/issues/312292516 ).
### DigitalOcean Spaces
2017-10-19 18:31:25 +02:00
[Spaces ](https://www.digitalocean.com/products/object-storage/ ) is an [S3-interoperable ](https://developers.digitalocean.com/documentation/spaces/ ) object storage service from cloud provider DigitalOcean.
2020-05-19 13:02:44 +02:00
To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key` .
When prompted for a `region` or `location_constraint` , press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com` ). The default values can be used for other settings.
2017-10-19 18:31:25 +02:00
Going through the whole process of creating a new remote by running `rclone config` , each prompt should be answered as shown below:
```
2018-03-13 21:47:29 +01:00
Storage> s3
2017-10-19 18:31:25 +02:00
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
2018-04-12 18:05:53 +02:00
region>
2017-10-19 18:31:25 +02:00
endpoint> nyc3.digitaloceanspaces.com
2018-04-12 18:05:53 +02:00
location_constraint>
acl>
storage_class>
2017-10-19 18:31:25 +02:00
```
The resulting configuration file should look like:
```
[spaces]
type = s3
2018-04-13 17:08:00 +02:00
provider = DigitalOcean
2017-10-19 18:31:25 +02:00
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
2018-04-12 18:05:53 +02:00
region =
2017-10-19 18:31:25 +02:00
endpoint = nyc3.digitaloceanspaces.com
2018-04-12 18:05:53 +02:00
location_constraint =
acl =
server_side_encryption =
storage_class =
2017-10-19 18:31:25 +02:00
```
Once configured, you can create a new Space and begin copying files. For example:
```
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```
### Huawei OBS {#huawei-obs}
Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere.
OBS provides an S3 interface, you can copy and modify the following configuration and add it to your rclone configuration file.
```
[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private
```
Or you can also configure via the interactive command line:
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> obs
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
2024-01-03 12:49:46 +01:00
XX / Amazon S3 Compliant Storage Providers including AWS, ...
2022-06-07 09:41:46 +02:00
\ (s3)
[snip]
2024-01-03 12:49:46 +01:00
Storage> s3
2022-06-07 09:41:46 +02:00
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
9 / Huawei Object Storage Service
\ (HuaweiOBS)
[snip]
provider> 9
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> your-access-key-id
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> your-secret-access-key
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / AF-Johannesburg
\ (af-south-1)
2 / AP-Bangkok
\ (ap-southeast-2)
[snip]
region> 1
Option endpoint.
Endpoint for OBS API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / AF-Johannesburg
\ (obs.af-south-1.myhuaweicloud.com)
2 / AP-Bangkok
\ (obs.ap-southeast-2.myhuaweicloud.com)
[snip]
endpoint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
[snip]
acl> 1
Edit advanced config?
y) Yes
n) No (default)
y/n>
--------------------
[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
obs s3
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
2022-06-19 19:29:02 +02:00
e/n/d/r/c/s/q> q
2022-06-07 09:41:46 +02:00
```
### IBM COS (S3)
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM's Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage)
2018-03-15 15:11:32 +01:00
To configure access to IBM COS S3, follow the steps below:
1. Run rclone config and select n for a new remote.
```
2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
2021-11-01 21:34:46 +01:00
No remotes found, make a new one?
2018-03-15 15:11:32 +01:00
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```
2. Enter the name for the configuration
```
2018-04-12 18:05:53 +02:00
name> < YOUR NAME >
2018-03-15 15:11:32 +01:00
```
3. Select "s3" storage.
```
2018-04-12 18:05:53 +02:00
Choose a number from below, or type in your own value
[snip]
2024-01-03 12:49:46 +01:00
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
2018-03-15 15:11:32 +01:00
```
2018-04-12 18:05:53 +02:00
4. Select IBM COS as the S3 Storage Provider.
2018-03-15 15:11:32 +01:00
```
2018-04-12 18:05:53 +02:00
Choose the S3 provider.
Choose a number from below, or type in your own value
1 / Choose this option to configure Storage to AWS S3
\ "AWS"
2 / Choose this option to configure Storage to Ceph Systems
\ "Ceph"
3 / Choose this option to configure Storage to Dreamhost
\ "Dreamhost"
4 / Choose this option to the configure Storage to IBM COS S3
\ "IBMCOS"
5 / Choose this option to the configure Storage to Minio
\ "Minio"
Provider>4
2018-03-15 15:11:32 +01:00
```
5. Enter the Access Key and Secret.
```
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> < >
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> < >
```
6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.
```
2018-04-12 18:05:53 +02:00
Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
2018-03-15 15:11:32 +01:00
Choose a number from below, or type in your own value
2018-04-12 18:05:53 +02:00
1 / US Cross Region Endpoint
\ "s3-api.us-geo.objectstorage.softlayer.net"
2 / US Cross Region Dallas Endpoint
\ "s3-api.dal.us-geo.objectstorage.softlayer.net"
3 / US Cross Region Washington DC Endpoint
\ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
4 / US Cross Region San Jose Endpoint
\ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
5 / US Cross Region Private Endpoint
\ "s3-api.us-geo.objectstorage.service.networklayer.com"
6 / US Cross Region Dallas Private Endpoint
\ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
7 / US Cross Region Washington DC Private Endpoint
\ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
8 / US Cross Region San Jose Private Endpoint
\ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
9 / US Region East Endpoint
\ "s3.us-east.objectstorage.softlayer.net"
10 / US Region East Private Endpoint
\ "s3.us-east.objectstorage.service.networklayer.com"
11 / US Region South Endpoint
[snip]
34 / Toronto Single Site Private Endpoint
\ "s3.tor01.objectstorage.service.networklayer.com"
endpoint>1
```
7. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-premise COS, do not make a selection from this list, hit enter.
```
1 / US Cross Region Standard
\ "us-standard"
2 / US Cross Region Vault
\ "us-vault"
3 / US Cross Region Cold
\ "us-cold"
4 / US Cross Region Flex
\ "us-flex"
5 / US East Region Standard
\ "us-east-standard"
6 / US East Region Vault
\ "us-east-vault"
7 / US East Region Cold
\ "us-east-cold"
8 / US East Region Flex
\ "us-east-flex"
9 / US South Region Standard
\ "us-south-standard"
10 / US South Region Vault
\ "us-south-vault"
[snip]
32 / Toronto Flex
\ "tor01-flex"
location_constraint>1
2018-03-15 15:11:32 +01:00
```
9. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
2018-03-15 15:11:32 +01:00
```
2018-04-12 18:05:53 +02:00
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
\ "public-read"
3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
\ "authenticated-read"
acl> 1
2018-03-15 15:11:32 +01:00
```
2018-03-26 21:49:53 +02:00
12. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this
2018-03-15 15:11:32 +01:00
```
2018-04-12 18:05:53 +02:00
[xxx]
type = s3
Provider = IBMCOS
access_key_id = xxx
secret_access_key = yyy
2018-03-15 15:11:32 +01:00
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
```
13. Execute rclone commands
```
1) Create a bucket.
rclone mkdir IBM-COS-XREGION:newbucket
2) List available buckets.
rclone lsd IBM-COS-XREGION:
-1 2017-11-08 21:16:22 -1 test
-1 2018-02-14 20:16:39 -1 newbucket
3) List contents of a bucket.
rclone ls IBM-COS-XREGION:newbucket
18685952 test.exe
4) Copy a file from local to remote.
rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
5) Copy a file from remote to local.
rclone copy IBM-COS-XREGION:newbucket/file.txt .
6) Delete a file on remote.
rclone delete IBM-COS-XREGION:newbucket/file.txt
```
### IDrive e2 {#idrive-e2}
Here is an example of making an [IDrive e2 ](https://www.idrive.com/e2/ )
configuration. First run:
rclone config
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> e2
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
2024-01-03 12:49:46 +01:00
XX / Amazon S3 Compliant Storage Providers including AWS, ...
2022-06-18 09:29:21 +02:00
\ (s3)
[snip]
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / IDrive e2
\ (IDrive)
[snip]
provider> IDrive
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> YOUR_ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YOUR_SECRET_KEY
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
/ Owner gets FULL_CONTROL.
3 | The AllUsers group gets READ and WRITE access.
| Granting this on a bucket is generally not recommended.
\ (public-read-write)
/ Owner gets FULL_CONTROL.
4 | The AuthenticatedUsers group gets READ access.
\ (authenticated-read)
/ Object owner gets FULL_CONTROL.
5 | Bucket owner gets READ access.
| If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-read)
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-full-control)
acl>
Edit advanced config?
y) Yes
n) No (default)
y/n>
Configuration complete.
Options:
- type: s3
- provider: IDrive
- access_key_id: YOUR_ACCESS_KEY
- secret_access_key: YOUR_SECRET_KEY
- endpoint: q9d9.la12.idrivee2-5.com
Keep this "e2" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
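Once the remote is saved you can try it out - for example, to make a bucket and copy some files into it (the bucket name `mybucket` below is just an example):
```
rclone mkdir e2:mybucket
rclone copy /path/to/files e2:mybucket
rclone ls e2:mybucket
```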
### IONOS Cloud {#ionos}
[IONOS S3 Object Storage ](https://cloud.ionos.com/storage/object-storage ) is a service offered by IONOS for storing and accessing unstructured data.
To connect to the service, you will need an access key and a secret key. These can be found in the [Data Center Designer ](https://dcd.ionos.com/ ), by selecting **Manager resources** > **Object Storage Key Manager** .
Here is an example of a configuration. First, run `rclone config` . This will walk you through an interactive setup process. Type `n` to add the new remote, and then enter a name:
```
Enter name for new remote.
name> ionos-fra
```
Type `s3` to choose the connection type:
```
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
```
Type `IONOS` :
```
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / IONOS Cloud
\ (IONOS)
[snip]
provider> IONOS
```
Press Enter to choose the default option `Enter AWS credentials in the next step` :
```
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
```
Enter your Access Key and Secret Key. These can be retrieved in the [Data Center Designer ](https://dcd.ionos.com/ ) by selecting **Manager resources** > **Object Storage Key Manager**.
```
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> YOUR_ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YOUR_SECRET_KEY
```
Choose the region where your bucket is located:
```
Option region.
Region where your bucket will be created and your data stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Frankfurt, Germany
\ (de)
2 / Berlin, Germany
\ (eu-central-2)
3 / Logrono, Spain
\ (eu-south-2)
region> 2
```
Choose the endpoint from the same region:
```
Option endpoint.
Endpoint for IONOS S3 Object Storage.
Specify the endpoint from the same region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Frankfurt, Germany
\ (s3-eu-central-1.ionoscloud.com)
2 / Berlin, Germany
\ (s3-eu-central-2.ionoscloud.com)
3 / Logrono, Spain
\ (s3-eu-south-2.ionoscloud.com)
endpoint> 1
```
Press Enter to choose the default option or choose the desired ACL setting:
```
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
[snip]
acl>
```
Press Enter to skip the advanced config:
```
Edit advanced config?
y) Yes
n) No (default)
y/n>
```
Press Enter to save the configuration, and then `q` to quit the configuration process:
```
Configuration complete.
Options:
- type: s3
- provider: IONOS
- access_key_id: YOUR_ACCESS_KEY
- secret_access_key: YOUR_SECRET_KEY
- endpoint: s3-eu-central-1.ionoscloud.com
Keep this "ionos-fra" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
Done! Now you can try some commands (for macOS, use `./rclone` instead of `rclone` ).
1) Create a bucket (the name must be unique within the whole IONOS S3)
```
rclone mkdir ionos-fra:my-bucket
```
2) List available buckets
```
rclone lsd ionos-fra:
```
3) List contents of a bucket
```
rclone ls ionos-fra:my-bucket
```
4) Copy a file from local to remote
```
rclone copy /Users/file.txt ionos-fra:my-bucket
```
5) Copy a file from remote to local
```
rclone copy ionos-fra:my-bucket/file.txt .
```
### Minio
[Minio ](https://minio.io/ ) is an object storage server built for cloud application developers and devops.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
To use it, install Minio following the instructions [here ](https://docs.minio.io/docs/minio-quickstart-guide ).
When it configures itself Minio will print something like this
```
Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
Browser Access:
http://192.168.1.106:9000 http://172.23.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
.NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide
Drive Capacity: 26 GiB Free, 165 GiB Total
```
These details need to go into `rclone config` like this. Note that it
is important to put the region in as stated above.
```
env_auth> 1
access_key_id> USWUXHGYZQYFYFFIT3RE
secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>
```
Which makes the config file look like this
```
[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
```
So once set up, for example, to copy files into a bucket
```
rclone copy /path/to/files minio:bucket
```
### Outscale
[OUTSCALE Object Storage (OOS) ](https://en.outscale.com/storage/outscale-object-storage/ ) is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, a brand of Dassault Systèmes. For more information about OOS, see the [official documentation ](https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html ).
Here is an example of an OOS configuration that you can paste into your rclone configuration file:
```
[outscale]
type = s3
provider = Outscale
env_auth = false
access_key_id = ABCDEFGHIJ0123456789
secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
region = eu-west-2
endpoint = oos.eu-west-2.outscale.com
acl = private
```
You can also run `rclone config` to go through the interactive setup process:
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```
```
Enter name for new remote.
name> outscale
```
```
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others
\ (s3)
[snip]
Storage> outscale
```
```
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / OUTSCALE Object Storage (OOS)
\ (Outscale)
[snip]
provider> Outscale
```
```
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
```
```
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ABCDEFGHIJ0123456789
```
```
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
```
Option region.
Region where your bucket will be created and your data stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Paris, France
\ (eu-west-2)
2 / New Jersey, USA
\ (us-east-2)
3 / California, USA
\ (us-west-1)
4 / SecNumCloud, Paris, France
\ (cloudgouv-eu-west-1)
5 / Tokyo, Japan
\ (ap-northeast-1)
region> 1
```
```
Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Outscale EU West 2 (Paris)
\ (oos.eu-west-2.outscale.com)
2 / Outscale US east 2 (New Jersey)
\ (oos.us-east-2.outscale.com)
3 / Outscale EU West 1 (California)
\ (oos.us-west-1.outscale.com)
4 / Outscale SecNumCloud (Paris)
\ (oos.cloudgouv-eu-west-1.outscale.com)
5 / Outscale AP Northeast 1 (Japan)
\ (oos.ap-northeast-1.outscale.com)
endpoint> 1
```
```
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
[snip]
acl> 1
```
```
Edit advanced config?
y) Yes
n) No (default)
y/n> n
```
```
Configuration complete.
Options:
- type: s3
- provider: Outscale
- access_key_id: ABCDEFGHIJ0123456789
- secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
- endpoint: oos.eu-west-2.outscale.com
Keep this "outscale" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
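So once set up, for example, to list your buckets and copy files into one of them (the bucket name below is an example):
```
rclone lsd outscale:
rclone copy /path/to/files outscale:my-bucket
```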
### Qiniu Cloud Object Storage (Kodo) {#qiniu}
[Qiniu Cloud Object Storage (Kodo) ](https://www.qiniu.com/en/products/kodo ) is Qiniu's independently developed object storage service, proven in production by a large customer base and holding a leading position in its market. Kodo can be widely applied to mass data management.
To configure access to Qiniu Kodo, follow the steps below:
1. Run `rclone config` and select `n` for a new remote.
```
rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```
2. Give the name of the configuration. For example, name it 'qiniu'.
```
name> qiniu
```
3. Select `s3` storage.
```
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
```
4. Select `Qiniu` provider.
```
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
[snip]
22 / Qiniu Object Storage (Kodo)
\ (Qiniu)
[snip]
provider> Qiniu
```
5. Enter your SecretId and SecretKey of Qiniu Kodo.
```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx
```
6. Select the region, endpoint and location constraint for Qiniu Kodo. These are the standard values for each region.
```
/ The default endpoint - a good choice if you are unsure.
1 | East China Region 1.
| Needs location constraint cn-east-1.
\ (cn-east-1)
/ East China Region 2.
2 | Needs location constraint cn-east-2.
\ (cn-east-2)
/ North China Region 1.
3 | Needs location constraint cn-north-1.
\ (cn-north-1)
/ South China Region 1.
4 | Needs location constraint cn-south-1.
\ (cn-south-1)
/ North America Region.
5 | Needs location constraint us-north-1.
\ (us-north-1)
/ Southeast Asia Region 1.
6 | Needs location constraint ap-southeast-1.
\ (ap-southeast-1)
/ Northeast Asia Region 1.
7 | Needs location constraint ap-northeast-1.
\ (ap-northeast-1)
[snip]
region> 1
Option endpoint.
Endpoint for Qiniu Object Storage.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / East China Endpoint 1
\ (s3-cn-east-1.qiniucs.com)
2 / East China Endpoint 2
\ (s3-cn-east-2.qiniucs.com)
3 / North China Endpoint 1
\ (s3-cn-north-1.qiniucs.com)
4 / South China Endpoint 1
\ (s3-cn-south-1.qiniucs.com)
5 / North America Endpoint 1
\ (s3-us-north-1.qiniucs.com)
6 / Southeast Asia Endpoint 1
\ (s3-ap-southeast-1.qiniucs.com)
7 / Northeast Asia Endpoint 1
\ (s3-ap-northeast-1.qiniucs.com)
endpoint> 1
Option location_constraint.
Location constraint - must be set to match the Region.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / East China Region 1
\ (cn-east-1)
2 / East China Region 2
\ (cn-east-2)
3 / North China Region 1
\ (cn-north-1)
4 / South China Region 1
\ (cn-south-1)
5 / North America Region 1
\ (us-north-1)
6 / Southeast Asia Region 1
\ (ap-southeast-1)
7 / Northeast Asia Region 1
\ (ap-northeast-1)
location_constraint> 1
```
7. Choose acl and storage class.
```
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
[snip]
acl> 2
The storage class to use when storing new objects in Qiniu Kodo.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Standard storage class
\ (STANDARD)
2 / Infrequent access storage mode
\ (LINE)
3 / Archive storage mode
\ (GLACIER)
4 / Deep archive storage mode
\ (DEEP_ARCHIVE)
[snip]
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[qiniu]
- type: s3
- provider: Qiniu
- access_key_id: xxx
- secret_access_key: xxx
- region: cn-east-1
- endpoint: s3-cn-east-1.qiniucs.com
- location_constraint: cn-east-1
- acl: public-read
- storage_class: STANDARD
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
qiniu s3
```
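With the `qiniu` remote saved you can, for example, copy files into a bucket and list them back (the bucket name is an example):
```
rclone copy /path/to/files qiniu:my-bucket
rclone ls qiniu:my-bucket
```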
### RackCorp {#RackCorp}
[RackCorp Object Storage ](https://www.rackcorp.com/storage/s3storage ) is an S3 compatible object storage platform from your friendly cloud provider RackCorp.
The service is fast, reliable, well priced and available in many strategic locations not served by other providers, helping you maintain data sovereignty.
Before you can use RackCorp Object Storage, you'll need to "[sign up](https://www.rackcorp.com/signup)" for an account on our "[portal](https://portal.rackcorp.com)".
Next you can create an `access key` , a `secret key` and `buckets` , in your location of choice with ease.
These details are required for the next steps of configuration, when `rclone config` asks for your `access_key_id` and `secret_access_key` .
Your config should end up looking a bit like this:
```
[RCS3-demo-config]
type = s3
provider = RackCorp
env_auth = true
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = au-nsw
endpoint = s3.rackcorp.com
location_constraint = au-nsw
```
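So once set up, for example, to copy files into a bucket (the bucket name below is an example):
```
rclone copy /path/to/files RCS3-demo-config:mybucket
```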
### Rclone Serve S3 {#rclone}
Rclone can serve any remote over the S3 protocol. For details see the
[rclone serve s3 ](/commands/rclone_serve_s3/ ) documentation.
For example, to serve `remote:path` over s3, run the server like this:
```
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
```
This will be compatible with an rclone remote which is defined like this:
```
[serves3]
type = s3
provider = Rclone
endpoint = http://127.0.0.1:8080/
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
Note that setting `use_multipart_uploads = false` is to work around
[a bug ](/commands/rclone_serve_s3/#bugs ) which will be fixed in due course.
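For example, with the server running as above and the `serves3` remote defined, you can copy files through it (the bucket name below is an example):
```
rclone copy /path/to/files serves3:mybucket
```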
### Scaleway
[Scaleway ](https://www.scaleway.com/object-storage/ ) Object Storage allows you to store anything from backups, logs and web assets to documents and photos.
Files can be uploaded from the Scaleway console or transferred through the API, the CLI, or any S3-compatible tool.
Scaleway provides an S3 interface which can be configured for use with rclone like this:
```
[scaleway]
type = s3
provider = Scaleway
env_auth = false
endpoint = s3.nl-ams.scw.cloud
access_key_id = SCWXXXXXXXXXXXXXX
secret_access_key = 1111111-2222-3333-44444-55555555555555
region = nl-ams
location_constraint = nl-ams
acl = private
upload_cutoff = 5M
chunk_size = 5M
copy_cutoff = 5M
```
[Scaleway Glacier ](https://www.scaleway.com/en/glacier-cold-storage/ ) is Scaleway's low-cost alternative to S3 Glacier and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
So you can configure your remote with `storage_class = GLACIER` to upload directly to Scaleway Glacier. Note that files in this state can't be read back directly; you will need to restore them to the "STANDARD" storage class first (see the "restore" section above).
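As a sketch, a Glacier-enabled remote is simply the Scaleway config above with the storage class added (the remote name and keys are placeholders):
```
[scaleway-glacier]
type = s3
provider = Scaleway
env_auth = false
endpoint = s3.nl-ams.scw.cloud
access_key_id = SCWXXXXXXXXXXXXXX
secret_access_key = 1111111-2222-3333-44444-55555555555555
region = nl-ams
location_constraint = nl-ams
storage_class = GLACIER
```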
### Seagate Lyve Cloud {#lyve}
[Seagate Lyve Cloud ](https://www.seagate.com/gb/en/services/cloud/storage/ ) is an S3
compatible object storage platform from [Seagate ](https://seagate.com/ ) intended for enterprise use.
Here is a config run through for a remote called `remote` - you may
choose a different name of course. Note that to create an access key
and secret key you will need to create a service account first.
```
$ rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
```
Choose `s3` backend
```
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
```
Choose `LyveCloud` as S3 provider
```
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Seagate Lyve Cloud
\ (LyveCloud)
[snip]
provider> LyveCloud
```
Take the default (just press enter) to enter access key and secret in the config file.
```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
```
```
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> XXX
```
```
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YYY
```
Leave region blank
```
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Use this if unsure.
1 | Will use v4 signatures and an empty region.
\ ()
/ Use this only if v4 signatures don't work.
2 | E.g. pre Jewel/v10 CEPH.
\ (other-v2-signature)
region>
```
Choose an endpoint from the list
```
Endpoint for S3 API.
Required when using an S3 clone.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Seagate Lyve Cloud US East 1 (Virginia)
\ (s3.us-east-1.lyvecloud.seagate.com)
2 / Seagate Lyve Cloud US West 1 (California)
\ (s3.us-west-1.lyvecloud.seagate.com)
3 / Seagate Lyve Cloud AP Southeast 1 (Singapore)
\ (s3.ap-southeast-1.lyvecloud.seagate.com)
endpoint> 1
```
Leave location constraint blank
```
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>
```
Choose default ACL (`private`).
```
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
[snip]
acl>
```
And the config file should end up looking like this:
```
[remote]
type = s3
provider = LyveCloud
access_key_id = XXX
secret_access_key = YYY
endpoint = s3.us-east-1.lyvecloud.seagate.com
```
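So once set up, for example, to copy files into a bucket (the bucket name is an example):
```
rclone copy /path/to/files remote:mybucket
```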
### SeaweedFS
[SeaweedFS ](https://github.com/chrislusf/seaweedfs/ ) is a distributed storage system for
blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store.
It has an S3 compatible object storage interface. SeaweedFS can also act as a
[gateway to remote S3 compatible object store ](https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage )
to cache data and metadata with asynchronous write-back, giving fast local access while minimizing access cost.
Assuming SeaweedFS is configured with `weed shell` as follows:
```
> s3.bucket.create -name foo
> s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
{
"identities": [
{
"name": "me",
"credentials": [
{
"accessKey": "any",
"secretKey": "any"
}
],
"actions": [
"Read:foo",
"Write:foo",
"List:foo",
"Tagging:foo",
"Admin:foo"
]
}
]
}
```
To use rclone with SeaweedFS, the above configuration should end up with something like this in
your config:
```
[seaweedfs_s3]
type = s3
provider = SeaweedFS
access_key_id = any
secret_access_key = any
endpoint = localhost:8333
```
So once set up, for example to copy files into a bucket
```
rclone copy /path/to/files seaweedfs_s3:foo
```
### Selectel
[Selectel Cloud Storage ](https://selectel.ru/services/cloud/storage/ )
is an S3 compatible storage system which features triple redundancy
storage, automatic scaling, high availability and a comprehensive IAM
system.
Selectel has a section on their website for [configuring
rclone](https://docs.selectel.ru/en/cloud/object-storage/tools/rclone/)
which shows how to create the right API keys.
From rclone v1.69 Selectel is a supported provider - please choose the
`Selectel` provider type.
Note that you should use "vHosted" access for the buckets (which is
the recommended default), not "path style".
You can use `rclone config` to make a new remote like this
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> selectel
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ...
\ (s3)
[snip]
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Selectel Object Storage
\ (Selectel)
[snip]
provider> Selectel
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> SECRET_ACCESS_KEY
Option region.
Region where your data stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / St. Petersburg
\ (ru-1)
region> 1
Option endpoint.
Endpoint for Selectel Object Storage.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Saint Petersburg
\ (s3.ru-1.storage.selcloud.ru)
endpoint> 1
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: s3
- provider: Selectel
- access_key_id: ACCESS_KEY
- secret_access_key: SECRET_ACCESS_KEY
- region: ru-1
- endpoint: s3.ru-1.storage.selcloud.ru
Keep this "selectel" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
And your config should end up looking like this:
```
[selectel]
type = s3
provider = Selectel
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = ru-1
endpoint = s3.ru-1.storage.selcloud.ru
```
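Once set up you can, for example, make a bucket and copy files into it (the bucket name is an example):
```
rclone mkdir selectel:my-bucket
rclone copy /path/to/files selectel:my-bucket
```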
### Wasabi
[Wasabi ](https://wasabi.com ) is a cloud-based object storage service for a
broad range of applications and use cases. Wasabi is designed for
individuals and organizations that require a high-performance,
reliable, and secure data storage infrastructure at minimal cost.
Wasabi provides an S3 interface which can be configured for use with
rclone like this.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, Liara)
\ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia, or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
[snip]
region> us-east-1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.wasabisys.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
\ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Reduced redundancy storage class
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
storage_class>
Remote config
--------------------
[wasabi]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This will leave the config file looking like this.
```
[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```
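So once set up, for example, to copy files into a bucket (the bucket name is an example):
```
rclone copy /path/to/files wasabi:mybucket
```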
### Alibaba OSS {#alibaba-oss}
Here is an example of making an [Alibaba Cloud (Aliyun) OSS ](https://www.alibabacloud.com/product/oss/ )
configuration. First run:
rclone config
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> oss
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
\ "Alibaba"
3 / Ceph Object Storage
\ "Ceph"
[snip]
provider> Alibaba
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> accesskeyid
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> secretaccesskey
Endpoint for OSS API.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / East China 1 (Hangzhou)
\ "oss-cn-hangzhou.aliyuncs.com"
2 / East China 2 (Shanghai)
\ "oss-cn-shanghai.aliyuncs.com"
3 / North China 1 (Qingdao)
\ "oss-cn-qingdao.aliyuncs.com"
[snip]
endpoint> 1
Canned ACL used when creating buckets and storing or copying objects.
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
[snip]
acl> 1
The storage class to use when storing new objects in OSS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Archive storage mode.
\ "GLACIER"
4 / Infrequent access storage mode.
\ "STANDARD_IA"
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[oss]
type = s3
provider = Alibaba
env_auth = false
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = oss-cn-hangzhou.aliyuncs.com
acl = private
storage_class = Standard
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
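Once the `oss` remote is saved you can, for example, list your buckets and copy files into one of them (the bucket name is an example):
```
rclone lsd oss:
rclone copy /path/to/files oss:mybucket
```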
### China Mobile Ecloud Elastic Object Storage (EOS) {#china-mobile-ecloud-eos}
Here is an example of making a [China Mobile Ecloud Elastic Object Storage (EOS) ](https://ecloud.10086.cn/home/product-introduction/eos/ )
configuration. First run:
rclone config
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> ChinaMobile
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
...
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
...
4 / China Mobile Ecloud Elastic Object Storage (EOS)
\ (ChinaMobile)
...
provider> ChinaMobile
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> accesskeyid
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> secretaccesskey
Option endpoint.
Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ The default endpoint - a good choice if you are unsure.
1 | East China (Suzhou)
\ (eos-wuxi-1.cmecloud.cn)
2 / East China (Jinan)
\ (eos-jinan-1.cmecloud.cn)
3 / East China (Hangzhou)
\ (eos-ningbo-1.cmecloud.cn)
4 / East China (Shanghai-1)
\ (eos-shanghai-1.cmecloud.cn)
5 / Central China (Zhengzhou)
\ (eos-zhengzhou-1.cmecloud.cn)
6 / Central China (Changsha-1)
\ (eos-hunan-1.cmecloud.cn)
7 / Central China (Changsha-2)
\ (eos-zhuzhou-1.cmecloud.cn)
8 / South China (Guangzhou-2)
\ (eos-guangzhou-1.cmecloud.cn)
9 / South China (Guangzhou-3)
\ (eos-dongguan-1.cmecloud.cn)
10 / North China (Beijing-1)
\ (eos-beijing-1.cmecloud.cn)
11 / North China (Beijing-2)
\ (eos-beijing-2.cmecloud.cn)
12 / North China (Beijing-3)
\ (eos-beijing-4.cmecloud.cn)
13 / North China (Huhehaote)
\ (eos-huhehaote-1.cmecloud.cn)
14 / Southwest China (Chengdu)
\ (eos-chengdu-1.cmecloud.cn)
15 / Southwest China (Chongqing)
\ (eos-chongqing-1.cmecloud.cn)
16 / Southwest China (Guiyang)
\ (eos-guiyang-1.cmecloud.cn)
17 / Northwest China (Xian)
\ (eos-xian-1.cmecloud.cn)
18 / Yunnan China (Kunming)
\ (eos-yunnan.cmecloud.cn)
19 / Yunnan China (Kunming-2)
\ (eos-yunnan-2.cmecloud.cn)
20 / Tianjin China (Tianjin)
\ (eos-tianjin-1.cmecloud.cn)
21 / Jilin China (Changchun)
\ (eos-jilin-1.cmecloud.cn)
22 / Hubei China (Xiangyan)
\ (eos-hubei-1.cmecloud.cn)
23 / Jiangxi China (Nanchang)
\ (eos-jiangxi-1.cmecloud.cn)
24 / Gansu China (Lanzhou)
\ (eos-gansu-1.cmecloud.cn)
25 / Shanxi China (Taiyuan)
\ (eos-shanxi-1.cmecloud.cn)
26 / Liaoning China (Shenyang)
\ (eos-liaoning-1.cmecloud.cn)
27 / Hebei China (Shijiazhuang)
\ (eos-hebei-1.cmecloud.cn)
28 / Fujian China (Xiamen)
\ (eos-fujian-1.cmecloud.cn)
29 / Guangxi China (Nanning)
\ (eos-guangxi-1.cmecloud.cn)
30 / Anhui China (Huainan)
\ (eos-anhui-1.cmecloud.cn)
endpoint> 1
Option location_constraint.
Location constraint - must match endpoint.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / East China (Suzhou)
\ (wuxi1)
2 / East China (Jinan)
\ (jinan1)
3 / East China (Hangzhou)
\ (ningbo1)
4 / East China (Shanghai-1)
\ (shanghai1)
5 / Central China (Zhengzhou)
\ (zhengzhou1)
6 / Central China (Changsha-1)
\ (hunan1)
7 / Central China (Changsha-2)
\ (zhuzhou1)
8 / South China (Guangzhou-2)
\ (guangzhou1)
9 / South China (Guangzhou-3)
\ (dongguan1)
10 / North China (Beijing-1)
\ (beijing1)
11 / North China (Beijing-2)
\ (beijing2)
12 / North China (Beijing-3)
\ (beijing4)
13 / North China (Huhehaote)
\ (huhehaote1)
14 / Southwest China (Chengdu)
\ (chengdu1)
15 / Southwest China (Chongqing)
\ (chongqing1)
16 / Southwest China (Guiyang)
\ (guiyang1)
17 / Northwest China (Xian)
\ (xian1)
18 / Yunnan China (Kunming)
\ (yunnan)
19 / Yunnan China (Kunming-2)
\ (yunnan2)
20 / Tianjin China (Tianjin)
\ (tianjin1)
21 / Jilin China (Changchun)
\ (jilin1)
22 / Hubei China (Xiangyan)
\ (hubei1)
23 / Jiangxi China (Nanchang)
\ (jiangxi1)
24 / Gansu China (Lanzhou)
\ (gansu1)
25 / Shanxi China (Taiyuan)
\ (shanxi1)
26 / Liaoning China (Shenyang)
\ (liaoning1)
27 / Hebei China (Shijiazhuang)
\ (hebei1)
28 / Fujian China (Xiamen)
\ (fujian1)
29 / Guangxi China (Nanning)
\ (guangxi1)
30 / Anhui China (Huainan)
\ (anhui1)
location_constraint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
/ Owner gets FULL_CONTROL.
3 | The AllUsers group gets READ and WRITE access.
| Granting this on a bucket is generally not recommended.
\ (public-read-write)
/ Owner gets FULL_CONTROL.
4 | The AuthenticatedUsers group gets READ access.
\ (authenticated-read)
/ Object owner gets FULL_CONTROL.
acl> private
Option server_side_encryption.
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / None
\ ()
2 / AES256
\ (AES256)
server_side_encryption>
Option storage_class.
The storage class to use when storing new objects in ChinaMobile.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Default
\ ()
2 / Standard storage class
\ (STANDARD)
3 / Archive storage mode
\ (GLACIER)
4 / Infrequent access storage mode
\ (STANDARD_IA)
storage_class>
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
[ChinaMobile]
type = s3
provider = ChinaMobile
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = eos-wuxi-1.cmecloud.cn
location_constraint = wuxi1
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
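So once set up, for example, to copy files into a bucket (the bucket name is an example):
```
rclone copy /path/to/files ChinaMobile:mybucket
```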
### Leviia Cloud Object Storage {#leviia}
[Leviia Object Storage ](https://www.leviia.com/object-storage/ ) lets you back up and secure your data in a 100% French cloud, independent of GAFAM.
To configure access to Leviia, follow the steps below:
1. Run `rclone config` and select `n` for a new remote.
```
rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```
2. Give the name of the configuration. For example, name it 'leviia'.
```
name> leviia
```
3. Select `s3` storage.
```
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
```
4. Select `Leviia` provider.
```
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
[snip]
15 / Leviia Object Storage
\ (Leviia)
[snip]
provider> Leviia
```
5. Enter your SecretId and SecretKey of Leviia.
```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> ZnIx.xxxxxxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx
```
6. Select endpoint for Leviia.
```
/ The default endpoint
1 | Leviia.
\ (s3.leviia.com)
[snip]
endpoint> 1
```
7. Choose acl.
```
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
[snip]
acl> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[leviia]
- type: s3
- provider: Leviia
- access_key_id: ZnIx.xxxxxxx
- secret_access_key: xxxxxxxx
- endpoint: s3.leviia.com
- acl: private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
leviia s3
```
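With the `leviia` remote saved you can, for example, copy files into a bucket and list them back (the bucket name is an example):
```
rclone copy /path/to/files leviia:mybucket
rclone ls leviia:mybucket
```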
### Liara {#liara-cloud}
Here is an example of making a [Liara Object Storage ](https://liara.ir/landing/object-storage )
configuration. First run:
rclone config
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
name> Liara
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)
\ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
/ The default endpoint
1 | US Region, Northern Virginia, or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
[snip]
region>
Endpoint for S3 API.
Leave blank if using Liara to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> storage.iran.liara.space
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
storage_class>
Remote config
--------------------
[Liara]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
endpoint = storage.iran.liara.space
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This will leave the config file looking like this.
```
[Liara]
type = s3
provider = Liara
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = storage.iran.liara.space
location_constraint =
acl =
server_side_encryption =
storage_class =
```
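So once set up, for example, to copy files into a bucket (the bucket name is an example):
```
rclone copy /path/to/files Liara:mybucket
```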
### Linode {#linode}
Here is an example of making a [Linode Object Storage ](https://www.linode.com/products/object-storage/ )
configuration. First run:
rclone config
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> linode
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
\ (s3)
[snip]
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Linode Object Storage
\ (Linode)
[snip]
provider> Linode
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> SECRET_ACCESS_KEY
Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
2 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
3 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
4 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
5 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
6 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
7 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
8 / Singapore ap-south-1
\ (ap-south-1.linodeobjects.com)
9 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
10 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
endpoint> 3
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
[snip]
acl>
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: s3
- provider: Linode
- access_key_id: ACCESS_KEY
- secret_access_key: SECRET_ACCESS_KEY
- endpoint: eu-central-1.linodeobjects.com
Keep this "linode" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This will leave the config file looking like this.
```
[linode]
type = s3
provider = Linode
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = eu-central-1.linodeobjects.com
```
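Once set up you can, for example, make a bucket and copy files into it (the bucket name is an example):
```
rclone mkdir linode:mybucket
rclone copy /path/to/files linode:mybucket
```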
### Magalu {#magalu}
Here is an example of making a [Magalu Object Storage ](https://magalu.cloud/object-storage/ )
configuration. First run:
rclone config
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> magalu
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...Magalu, ...and others
\ (s3)
[snip]
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Magalu Object Storage
\ (Magalu)
[snip]
provider> Magalu
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> SECRET_ACCESS_KEY
Option endpoint.
Endpoint for Magalu Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / São Paulo, SP (BR), br-se1
\ (br-se1.magaluobjects.com)
2 / Fortaleza, CE (BR), br-ne1
\ (br-ne1.magaluobjects.com)
endpoint> 2
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
[snip]
acl>
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: s3
- provider: Magalu
- access_key_id: ACCESS_KEY
- secret_access_key: SECRET_ACCESS_KEY
- endpoint: br-ne1.magaluobjects.com
Keep this "magalu" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This will leave the config file looking like this.
```
[magalu]
type = s3
provider = Magalu
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = br-ne1.magaluobjects.com
```
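After the remote is saved, a quick smoke test might look like this (bucket and path names are placeholders):

```
rclone mkdir magalu:my-bucket
rclone copy ~/documents magalu:my-bucket/documents
rclone ls magalu:my-bucket
```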
### ArvanCloud {#arvan-cloud}
[ArvanCloud ](https://www.arvancloud.com/en/products/cloud-storage ) Object Storage goes beyond the limits of traditional file storage.
It gives you access to backup and archived files and allows sharing.
Files such as a profile image in an app, images sent by users or scanned documents can be stored securely and easily in the ArvanCloud Object Storage service.
ArvanCloud provides an S3 interface which can be configured for use with
rclone like this.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
name> ArvanCloud
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)
\ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia, or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
[snip]
region>
Endpoint for S3 API.
Leave blank if using ArvanCloud to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.arvanstorage.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for Iran-Tehran Region.
\ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
storage_class>
Remote config
--------------------
[ArvanCloud]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = ir-thr-at1
endpoint = s3.arvanstorage.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This will leave the config file looking like this.
```
[ArvanCloud]
type = s3
provider = ArvanCloud
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.arvanstorage.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```
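The remote can then be used like any other, for example (the bucket name is a placeholder; `--interactive` makes sync prompt before changing anything):

```
rclone lsd ArvanCloud:
rclone sync --interactive /local/backup ArvanCloud:backup-bucket
```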
### Tencent COS {#tencent-cos}
[Tencent Cloud Object Storage (COS) ](https://intl.cloud.tencent.com/product/cos ) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.
To configure access to Tencent COS, follow the steps below:
1. Run `rclone config` and select `n` for a new remote.
```
rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```
2. Give the name of the configuration. For example, name it 'cos'.
```
name> cos
```
3. Select `s3` storage.
```
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
```
4. Select `TencentCOS` provider.
```
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
[snip]
11 / Tencent Cloud Object Storage (COS)
\ "TencentCOS"
[snip]
provider> TencentCOS
```
5. Enter your SecretId and SecretKey of Tencent Cloud.
```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx
```
6. Select an endpoint for Tencent COS. These are the standard endpoints for the different regions.
```
1 / Beijing Region.
\ "cos.ap-beijing.myqcloud.com"
2 / Nanjing Region.
\ "cos.ap-nanjing.myqcloud.com"
3 / Shanghai Region.
\ "cos.ap-shanghai.myqcloud.com"
4 / Guangzhou Region.
\ "cos.ap-guangzhou.myqcloud.com"
[snip]
endpoint> 4
```
7. Choose acl and storage class.
```
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "default"
[snip]
acl> 1
The storage class to use when storing new objects in Tencent COS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Default
\ ""
[snip]
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[cos]
type = s3
provider = TencentCOS
env_auth = false
access_key_id = xxx
secret_access_key = xxx
endpoint = cos.ap-guangzhou.myqcloud.com
acl = default
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
cos s3
```
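Once configured, the remote can be used like any other rclone remote, for example (the bucket name is a placeholder):

```
rclone mkdir cos:my-bucket
rclone copy /home/local/directory cos:my-bucket
```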
### Netease NOS
For Netease NOS, configure as per the configurator `rclone config`,
setting the provider to `Netease`. This will automatically set
`force_path_style = false` which is necessary for it to run properly.
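A minimal config file entry created this way might therefore look something like the sketch below; the credential values and the endpoint are placeholders, and you should use the endpoint for your NOS region.

```
[netease]
type = s3
provider = Netease
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = YOUR_NOS_ENDPOINT
```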
### Petabox
Here is an example of making a [Petabox ](https://petabox.io/ )
configuration. First run:
```bash
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
Enter name for new remote.
name> My Petabox Storage
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Petabox Object Storage
\ (Petabox)
[snip]
provider> Petabox
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> YOUR_ACCESS_KEY_ID
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YOUR_SECRET_ACCESS_KEY
Option region.
Region where your bucket will be created and your data stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / US East (N. Virginia)
\ (us-east-1)
2 / Europe (Frankfurt)
\ (eu-central-1)
3 / Asia Pacific (Singapore)
\ (ap-southeast-1)
4 / Middle East (Bahrain)
\ (me-south-1)
5 / South America (São Paulo)
\ (sa-east-1)
region> 1
Option endpoint.
Endpoint for Petabox S3 Object Storage.
Specify the endpoint from the same region.
Choose a number from below, or type in your own value.
1 / US East (N. Virginia)
\ (s3.petabox.io)
2 / US East (N. Virginia)
\ (s3.us-east-1.petabox.io)
3 / Europe (Frankfurt)
\ (s3.eu-central-1.petabox.io)
4 / Asia Pacific (Singapore)
\ (s3.ap-southeast-1.petabox.io)
5 / Middle East (Bahrain)
\ (s3.me-south-1.petabox.io)
6 / South America (São Paulo)
\ (s3.sa-east-1.petabox.io)
endpoint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
/ Owner gets FULL_CONTROL.
3 | The AllUsers group gets READ and WRITE access.
| Granting this on a bucket is generally not recommended.
\ (public-read-write)
/ Owner gets FULL_CONTROL.
4 | The AuthenticatedUsers group gets READ access.
\ (authenticated-read)
/ Object owner gets FULL_CONTROL.
5 | Bucket owner gets READ access.
| If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-read)
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-full-control)
acl> 1
Edit advanced config?
y) Yes
n) No (default)
y/n> No
Configuration complete.
Options:
- type: s3
- provider: Petabox
- access_key_id: YOUR_ACCESS_KEY_ID
- secret_access_key: YOUR_SECRET_ACCESS_KEY
- region: us-east-1
- endpoint: s3.petabox.io
Keep this "My Petabox Storage" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This will leave the config file looking like this.
```
[My Petabox Storage]
type = s3
provider = Petabox
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
endpoint = s3.petabox.io
```
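Because the remote name contains spaces it needs quoting on the command line. A quick check might look like this (the bucket name is a placeholder):

```
rclone lsd "My Petabox Storage":
rclone copy /local/path "My Petabox Storage":my-bucket
```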
### Storj
Storj is a decentralized cloud storage which can be used through its
native protocol or an S3 compatible gateway.
The S3 compatible gateway is configured using `rclone config` with a
type of `s3` and with a provider name of `Storj`. Here is an example
run of the configurator.
```
Type of storage to configure.
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> XXXX (as shown when creating the access grant)
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> XXXX (as shown when creating the access grant)
Option endpoint.
Endpoint of the Shared Gateway.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / EU1 Shared Gateway
\ (gateway.eu1.storjshare.io)
2 / US1 Shared Gateway
\ (gateway.us1.storjshare.io)
3 / Asia-Pacific Shared Gateway
\ (gateway.ap1.storjshare.io)
endpoint> 1 (as shown when creating the access grant)
Edit advanced config?
y) Yes
n) No (default)
y/n> n
```
Note that s3 credentials are generated when you [create an access
grant](https://docs.storj.io/dcs/api-reference/s3-compatible-gateway#usage).
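If you would rather skip the interactive flow, a recent rclone can create an equivalent remote non-interactively with `rclone config create`; the remote name `storj-s3` and the credential values below are placeholders taken from your access grant:

```
rclone config create storj-s3 s3 \
    provider=Storj \
    access_key_id=XXXX \
    secret_access_key=XXXX \
    endpoint=gateway.eu1.storjshare.io
```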
#### Backend quirks
- `--chunk-size` is forced to be 64 MiB or greater. This will use more
memory than the default of 5 MiB.
- Server side copy is disabled as it isn't currently supported in the
gateway.
- GetTier and SetTier are not supported.
#### Backend bugs
Due to [issue #39 ](https://github.com/storj/gateway-mt/issues/39 )
uploading multipart files via the S3 gateway causes them to lose their
metadata. For rclone's purpose this means that the modification time
is not stored, nor is any MD5SUM (if one is available from the
source).
This has the following consequences:
- Using `rclone rcat` will fail as the metadata doesn't match after upload
- Uploading files with `rclone mount` will fail for the same reason
- This can be worked around by using `--vfs-cache-mode writes` or `--vfs-cache-mode full` or setting `--s3-upload-cutoff` large
- Files uploaded via a multipart upload won't have their modtimes
- This will mean that `rclone sync` will likely keep trying to upload files bigger than `--s3-upload-cutoff`
- This can be worked around with `--checksum` or `--size-only` or setting `--s3-upload-cutoff` large
- The maximum value for `--s3-upload-cutoff` is 5GiB though
One general purpose workaround is to set `--s3-upload-cutoff 5G`. This
means that rclone will upload files smaller than 5GiB as single parts.
Note that this can be set in the config file with `upload_cutoff = 5G`
or configured in the advanced settings. If you regularly transfer
files larger than 5G then using `--checksum` or `--size-only` in
`rclone sync` is the recommended workaround.
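For example, a sync combining both mitigations might look like this (the remote name, bucket and local path are placeholders):

```
rclone sync --checksum --s3-upload-cutoff 5G /local/data remote:bucket
```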
#### Comparison with the native protocol
Use the [native protocol ](/storj ) to take advantage of
client-side encryption as well as to achieve the best possible
download performance. Uploads will be erasure-coded locally, thus a
1GB upload will result in 2.68GB of data being uploaded to storage
nodes across the network.
Use this backend and the S3 compatible Hosted Gateway to increase
upload performance and reduce the load on your systems and network.
Uploads will be encrypted and erasure-coded server-side, thus a 1GB
upload will result in only 1GB of data being uploaded to storage
nodes across the network.
For more detailed comparison please check the documentation of the
[storj ](/storj ) backend.
## Memory usage {#memory}
The most common cause of rclone using lots of memory is a single
directory with millions of files in it. Although S3 doesn't really
have the concept of directories, rclone does the sync on a directory
by directory basis to be compatible with normal file systems.
Rclone loads each directory into memory as rclone objects. Each rclone
object takes 0.5k-1k of memory, so approximately 1GB per 1,000,000
files, and the sync for that directory does not begin until it is
entirely loaded in memory. So the sync can take a long time to start
for large directories.
To sync a directory with 100,000,000 files in it you would need approximately
100 GB of memory. At some point the amount of memory becomes difficult
to provide so there is
[a workaround for this ](https://github.com/rclone/rclone/wiki/Big-syncs-with-millions-of-files )
which involves a bit of scripting.
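One possible shape of that scripting is sketched below; the wiki page has the full details, and `s3:big-bucket`, `/backup` and the chunk size are placeholders:

```
#!/usr/bin/env bash
set -e
# List every object once - this streams to a file rather than holding
# the whole bucket listing in memory.
rclone lsf --files-only -R s3:big-bucket > files.txt
# Split the listing into chunks of 100,000 files.
split -l 100000 files.txt chunk_
# Copy each chunk separately so rclone only loads one chunk at a time.
for f in chunk_*; do
    rclone copy --files-from "$f" --no-traverse s3:big-bucket /backup
done
```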
At some point rclone will gain a sync mode which is effectively this
workaround but built in to rclone.
## Limitations
`rclone about` is not supported by the S3 backend. Backends without
this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.
See [List of backends that do not support rclone about ](https://rclone.org/overview/#optional-features ) and [rclone about ](https://rclone.org/commands/rclone_about/ )
### Synology C2 Object Storage {#synology-c2}
[Synology C2 Object Storage ](https://c2.synology.com/en-global/object-storage/overview ) provides a secure, S3-compatible, and cost-effective cloud storage solution without API request fees, download fees, or deletion penalties.
The S3 compatible gateway is configured using `rclone config` with a
type of `s3` and with a provider name of `Synology`. Here is an example
run of the configurator.
First run:
```
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> syno
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
Storage> s3
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
24 / Synology C2 Object Storage
\ (Synology)
provider> Synology
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> accesskeyid
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> secretaccesskey
Region where your data is stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Europe Region 1
\ (eu-001)
2 / Europe Region 2
\ (eu-002)
3 / US Region 1
\ (us-001)
4 / US Region 2
\ (us-002)
5 / Asia (Taiwan)
\ (tw-001)
region> 1
Option endpoint.
Endpoint for Synology C2 Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / EU Endpoint 1
\ (eu-001.s3.synologyc2.net)
2 / US Endpoint 1
\ (us-001.s3.synologyc2.net)
3 / TW Endpoint 1
\ (tw-001.s3.synologyc2.net)
endpoint> 1
Option location_constraint.
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> y
Option no_check_bucket.
If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
It can also be needed if the user you are using does not have bucket
creation permissions. Before v1.52.0 this would have passed silently
due to a bug.
Enter a boolean value (true or false). Press Enter for the default (true).
no_check_bucket> true
Configuration complete.
Options:
- type: s3
- provider: Synology
- region: eu-001
- endpoint: eu-001.s3.synologyc2.net
- no_check_bucket: true
Keep this "syno" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
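Once saved, the `syno` remote can be used like any other, for example (the bucket name is a placeholder):

```
rclone mkdir syno:my-bucket
rclone sync /home/local/directory syno:my-bucket
rclone ls syno:my-bucket
```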