---
title: "Amazon S3"
description: "Rclone docs for Amazon S3"
date: "2016-07-11"
---

<i class="fa fa-amazon"></i> Amazon S3
---------------------------------------

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command). You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
Here is an example of making an s3 configuration. First run

    rclone config

This will guide you through an interactive setup process.
```
No remotes found - make a new one
n) New remote
s) Set configuration password
n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
5 / Encrypt/Decrypt a remote
\ "crypt"
6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
7 / Google Drive
\ "drive"
8 / Hubic
\ "hubic"
9 / Local Disk
\ "local"
10 / Microsoft OneDrive
\ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
12 / SSH/SFTP Connection
\ "sftp"
13 / Yandex Disk
\ "yandex"
Storage> 2
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> access_key
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> secret_key
Region to connect to.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
/ US West (Oregon) Region
2 | Needs location constraint us-west-2.
\ "us-west-2"
/ US West (Northern California) Region
3 | Needs location constraint us-west-1.
\ "us-west-1"
/ EU (Ireland) Region
4 | Needs location constraint EU or eu-west-1.
\ "eu-west-1"
/ EU (Frankfurt) Region
5 | Needs location constraint eu-central-1.
\ "eu-central-1"
/ Asia Pacific (Singapore) Region
6 | Needs location constraint ap-southeast-1.
\ "ap-southeast-1"
/ Asia Pacific (Sydney) Region
7 | Needs location constraint ap-southeast-2.
\ "ap-southeast-2"
/ Asia Pacific (Tokyo) Region
8 | Needs location constraint ap-northeast-1.
\ "ap-northeast-1"
/ Asia Pacific (Seoul)
9 | Needs location constraint ap-northeast-2.
\ "ap-northeast-2"
/ Asia Pacific (Mumbai)
10 | Needs location constraint ap-south-1.
\ "ap-south-1"
/ South America (Sao Paulo) Region
11 | Needs location constraint sa-east-1.
\ "sa-east-1"
/ If using an S3 clone that only understands v2 signatures
12 | eg Ceph/Dreamhost
| set this and make sure you set the endpoint.
\ "other-v2-signature"
/ If using an S3 clone that understands v4 signatures set this
13 | and make sure you set the endpoint.
\ "other-v4-signature"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
\ ""
2 / US West (Oregon) Region.
\ "us-west-2"
3 / US West (Northern California) Region.
\ "us-west-1"
4 / EU (Ireland) Region.
\ "eu-west-1"
5 / EU Region.
\ "EU"
6 / Asia Pacific (Singapore) Region.
\ "ap-southeast-1"
7 / Asia Pacific (Sydney) Region.
\ "ap-southeast-2"
8 / Asia Pacific (Tokyo) Region.
\ "ap-northeast-1"
9 / Asia Pacific (Seoul)
\ "ap-northeast-2"
10 / Asia Pacific (Mumbai)
\ "ap-south-1"
11 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ "authenticated-read"
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-read"
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
acl> private
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Reduced redundancy storage class
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
storage_class>
Remote config
--------------------
[remote]
env_auth = false
access_key_id = access_key
secret_access_key = secret_key
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This remote is called `remote` and can now be used like this

See all buckets

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync /home/local/directory remote:bucket
### --fast-list ###
This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.
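
For example, a sync that lists with fewer transactions (the directory and
bucket names here are illustrative):

    rclone sync --fast-list /home/local/directory remote:bucket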
### Modified time ###
The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as floating point since the epoch accurate to 1 ns.
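
You can see the sizes and modification times rclone uses with `rclone lsl`, eg

    rclone lsl remote:bucket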
### Multipart uploads ###
rclone supports multipart uploads with S3 which means that it can
upload files bigger than 5GB. Note that files uploaded *both* with
multipart upload *and* through crypt remotes do not have MD5 sums.
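
One way to check is `rclone md5sum`, which should show a blank sum for any
object that doesn't have an MD5, eg

    rclone md5sum remote:bucket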
### Buckets and Regions ###
With Amazon S3 you can list buckets (`rclone lsd`) using any region,
but you can only access the content of a bucket from the region it was
created in. If you attempt to access a bucket from the wrong region,
you will get an error, `incorrect region, the bucket is not in 'XXX'
region`.
### Authentication ###
There are two ways to supply `rclone` with a set of AWS
credentials. In order of precedence:

 - Directly in the rclone configuration file (as configured by `rclone config`):
   - set `access_key_id` and `secret_access_key`. `session_token` can be
     optionally set when using AWS STS.
 - Runtime configuration:
   - set `env_auth` to `true` in the config file
   - Exporting the following environment variables before running `rclone`:
     - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
     - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
     - Session Token: `AWS_SESSION_TOKEN`
   - Running `rclone` in an ECS task with an IAM role
   - Running `rclone` on an EC2 instance with an IAM role

If none of these options actually provides `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see below).
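
For example, with `env_auth = true` set for the remote, credentials can be
supplied from the environment like this (the key values below are the
standard examples from the AWS documentation, not real credentials):

```
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
rclone lsd remote:
```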
### S3 Permissions ###
When using the `sync` subcommand of `rclone` the following minimum
permissions are required to be available on the bucket being written to:
* `ListBucket`
* `DeleteObject`
* `GetObject`
* `PutObject`
* `PutObjectACL`
Example policy:
```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
            },
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*",
                "arn:aws:s3:::BUCKET_NAME"
            ]
        }
    ]
}
```
Notes on above:

1. This is a policy that can be used when creating a bucket. It assumes
   that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
   the bucket and the other implies the bucket's objects.

For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.
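
If you prefer the AWS CLI to Ansible, a policy like the one above (saved as
`policy.json` with the placeholders filled in) can be attached to an existing
bucket with something like:

    aws s3api put-bucket-policy --bucket BUCKET_NAME --policy file://policy.json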
### Glacier ###
You can transition objects to glacier storage using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
The bucket can still be synced or copied into normally, but if rclone
tries to access the data you will see an error like below.

    2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
the object(s) in question before using rclone.
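
As a sketch, a restore can also be requested with the AWS CLI, eg to make an
object readable again for 7 days:

    aws s3api restore-object --bucket BUCKET_NAME --key path/to/file --restore-request '{"Days":7}'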
### Specific options ###
Here are the command line options specific to this cloud storage
system.
#### --s3-acl=STRING ####
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit the [canned ACL docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl).
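
For example, to upload objects that anyone can read (paths illustrative):

    rclone copy --s3-acl=public-read /path/to/files remote:bucket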
#### --s3-storage-class=STRING ####
Storage class to upload new objects with.
Available options include:
- STANDARD - default storage class
- STANDARD_IA - for less frequently accessed data (e.g. backups)
- REDUCED_REDUNDANCY (only for noncritical, reproducible data, has lower redundancy)
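
For example, to upload backups in the infrequent access class (paths
illustrative):

    rclone copy --s3-storage-class=STANDARD_IA /path/to/backups remote:bucket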
### Anonymous access to public buckets ###
If you want to use rclone to access a public bucket, configure with a
blank `access_key_id` and `secret_access_key`. Eg
```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> anons3
What type of source is it?
Choose a number from below
1) amazon cloud drive
2) b2
3) drive
4) dropbox
5) google cloud storage
6) swift
7) hubic
8) local
9) onedrive
10) s3
11) yandex
type> 10
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
* Enter AWS credentials in the next step
1) false
* Get AWS credentials from the environment (env vars or IAM)
2) true
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key>
...
```
Then use it as normal with the name of the public bucket, eg

    rclone lsd anons3:1000genomes

You will be able to list and copy data but not upload it.
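
For example, to copy part of the bucket down locally (the object path is
illustrative):

    rclone copy anons3:1000genomes/some/path /tmp/1000genomes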
### Ceph ###
Ceph is an object storage system which presents an Amazon S3 interface.
To use rclone with Ceph, you need to set the following parameters in
the config.
```
access_key_id = Whatever
secret_access_key = Whatever
endpoint = https://ceph.endpoint.goes.here/
region = other-v2-signature
```
Note also that Ceph sometimes puts `/` in the passwords it gives
users. If you read the secret access key using the command line tools
you will get a JSON blob with the `/` escaped as `\/`. Make sure you
only write `/` in the secret access key.

Eg the dump from Ceph looks something like this (irrelevant keys
removed).
```
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
```
Because this is a JSON dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
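
If you have `jq` installed, one way to read the key with the JSON escaping
already decoded is to pipe the dump through it, eg (assuming a radosgw user
like the one above):

    radosgw-admin user info --uid=xxx | jq -r '.keys[0].secret_key'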
### DigitalOcean Spaces ###
[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.

To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`.
When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings.
Going through the whole process of creating a new remote by running `rclone config` , each prompt should be answered as shown below:
```
Storage> 2
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region>
endpoint> nyc3.digitaloceanspaces.com
location_constraint>
acl>
storage_class>
```
The resulting configuration file should look like:
```
[spaces]
type = s3
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```
Once configured, you can create a new Space and begin copying files. For example:
```
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```
### Minio ###
[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.

It is very easy to install and provides an S3 compatible server which can be used by rclone.
To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide).
When it configures itself, Minio will print something like this
```
Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
Browser Access:
http://192.168.1.106:9000 http://172.23.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
.NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide
Drive Capacity: 26 GiB Free, 165 GiB Total
```
These details need to go into `rclone config` like this. Note that it
is important to put the region in as stated above.
```
env_auth> 1
access_key_id> USWUXHGYZQYFYFFIT3RE
secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>
```
Which makes the config file look like this
```
[minio]
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
```
So once set up, for example to copy files into a bucket
```
rclone copy /path/to/files minio:bucket
```
### Wasabi ###
[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
broad range of applications and use cases. Wasabi is designed for
individuals and organizations that require a high-performance,
reliable, and secure data storage infrastructure at minimal cost.
Wasabi provides an S3 interface which can be configured for use with
rclone like this.
```
No remotes found - make a new one
n) New remote
s) Set configuration password
n/s> n
name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
[snip]
region> us-east-1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.wasabisys.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
\ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Reduced redundancy storage class
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
storage_class>
Remote config
--------------------
[wasabi]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This will leave the config file looking like this.
```
[wasabi]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```