---
title: "Amazon S3"
description: "Rclone docs for Amazon S3"
date: "2016-07-11"
---

## Amazon S3

Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command). You may put subdirectories in too, eg `remote:bucket/path/to/dir`.

Here is an example of making an s3 configuration. First run

```
rclone config
```

This will guide you through an interactive setup process.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Encrypt/Decrypt a remote
   \ "crypt"
 6 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 7 / Google Drive
   \ "drive"
 8 / Hubic
   \ "hubic"
 9 / Local Disk
   \ "local"
10 / Microsoft OneDrive
   \ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
12 / SSH/SFTP Connection
   \ "sftp"
13 / Yandex Disk
   \ "yandex"
Storage> 2
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> access_key
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> secret_key
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US West (Oregon) Region
 2 | Needs location constraint us-west-2.
   \ "us-west-2"
   / US West (Northern California) Region
 3 | Needs location constraint us-west-1.
   \ "us-west-1"
   / EU (Ireland) Region
 4 | Needs location constraint EU or eu-west-1.
   \ "eu-west-1"
   / EU (Frankfurt) Region
 5 | Needs location constraint eu-central-1.
   \ "eu-central-1"
   / Asia Pacific (Singapore) Region
 6 | Needs location constraint ap-southeast-1.
   \ "ap-southeast-1"
   / Asia Pacific (Sydney) Region
 7 | Needs location constraint ap-southeast-2.
   \ "ap-southeast-2"
   / Asia Pacific (Tokyo) Region
 8 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
 9 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
10 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / South America (Sao Paulo) Region
11 | Needs location constraint sa-east-1.
   \ "sa-east-1"
   / If using an S3 clone that only understands v2 signatures
12 | eg Ceph/Dreamhost
   | set this and make sure you set the endpoint.
   \ "other-v2-signature"
   / If using an S3 clone that understands v4 signatures set this
13 | and make sure you set the endpoint.
   \ "other-v4-signature"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
 2 / US West (Oregon) Region.
   \ "us-west-2"
 3 / US West (Northern California) Region.
   \ "us-west-1"
 4 / EU (Ireland) Region.
   \ "eu-west-1"
 5 / EU Region.
   \ "EU"
 6 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
 7 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
 8 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
 9 / Asia Pacific (Seoul)
   \ "ap-northeast-2"
10 / Asia Pacific (Mumbai)
   \ "ap-south-1"
11 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> private
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
storage_class>
Remote config
--------------------
[remote]
env_auth = false
access_key_id = access_key
secret_access_key = secret_key
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This remote is called `remote` and can now be used like this

See all buckets

```
rclone lsd remote:
```

Make a new bucket

```
rclone mkdir remote:bucket
```

List the contents of a bucket

```
rclone ls remote:bucket
```

Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket.

```
rclone sync /home/local/directory remote:bucket
```
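
Since `sync` deletes files from the destination, you may want to preview what it would do first with the `--dry-run` flag:

```
rclone --dry-run sync /home/local/directory remote:bucket
```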

### Modified time

The modified time is stored as metadata on the object as `X-Amz-Meta-Mtime`, as a floating point number of seconds since the epoch, accurate to 1 ns.
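
You can see the modification times rclone reads back from this metadata with the `lsl` command, eg

```
rclone lsl remote:bucket
```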

### Multipart uploads

rclone supports multipart uploads with S3, which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.
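
You can check this with the `md5sum` command; objects that were uploaded with multipart upload may show a blank checksum, eg

```
rclone md5sum remote:bucket
```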

### Buckets and Regions

With Amazon S3 you can list buckets (`rclone lsd`) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, `incorrect region, the bucket is not in 'XXX' region`.
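
For example, with a remote configured for `us-east-1` (the bucket name here is hypothetical):

```
rclone lsd remote:          # lists all your buckets, whatever region they are in
rclone ls remote:eu-bucket  # fails if eu-bucket was created in a different region
```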

### Authentication

There are two ways to supply rclone with a set of AWS credentials. In order of precedence:

 - Directly in the rclone configuration file (as configured by `rclone config`)
   - set `access_key_id` and `secret_access_key`
 - Runtime configuration (see the example below):
   - set `env_auth` to `true` in the config file
   - Exporting the following environment variables before running rclone
     - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
     - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
   - Running rclone on an EC2 instance with an IAM role

If none of these options actually ends up providing rclone with AWS credentials, then S3 interaction will be non-authenticated (see below).
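
For example, a minimal sketch of the runtime configuration route, with `env_auth = true` set in the config file (the key values here are placeholders):

```
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
rclone lsd remote:
```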

### Specific options

Here are the command line options specific to this cloud storage system.

#### --s3-acl=STRING

Canned ACL used when creating buckets and/or storing objects in S3.

For more info visit the [canned ACL docs](http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl).
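
For example, to store uploaded objects so that everyone can read them (the bucket and path here are hypothetical):

```
rclone copy --s3-acl=public-read /path/to/files remote:bucket
```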

#### --s3-storage-class=STRING

Storage class to upload new objects with.

Available options include:

 - STANDARD - default storage class
 - STANDARD_IA - for less frequently accessed data (e.g. backups)
 - REDUCED_REDUNDANCY - only for non-critical, reproducible data; has lower redundancy
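
For example, to upload reproducible data with reduced redundancy:

```
rclone --s3-storage-class REDUCED_REDUNDANCY copy /path/to/files remote:bucket
```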

### Anonymous access to public buckets

If you want to use rclone to access a public bucket, configure with a blank `access_key_id` and `secret_access_key`. Eg

```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> anons3
What type of source is it?
Choose a number from below
 1) amazon cloud drive
 2) b2
 3) drive
 4) dropbox
 5) google cloud storage
 6) swift
 7) hubic
 8) local
 9) onedrive
10) s3
11) yandex
type> 10
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 * Enter AWS credentials in the next step
 1) false
 * Get AWS credentials from the environment (env vars or IAM)
 2) true
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key>
...
```

Then use it as normal with the name of the public bucket, eg

```
rclone lsd anons3:1000genomes
```

You will be able to list and copy data but not upload it.
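
For reference, the relevant part of the config file this produces looks something like this (a sketch, showing only the keys entered above):

```
[anons3]
env_auth = false
access_key_id =
secret_access_key =
```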

### Ceph

Ceph is an object storage system which presents an Amazon S3 interface.

To use rclone with Ceph, you need to set the following parameters in the config.

```
access_key_id = Whatever
secret_access_key = Whatever
endpoint = https://ceph.endpoint.goes.here/
region = other-v2-signature
```

Note also that Ceph sometimes puts `/` in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the `/` escaped as `\/`. Make sure you only write `/` in the secret access key.

Eg the dump from Ceph looks something like this (irrelevant keys removed).

```
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
```

Because this is a JSON dump, it is encoding the `/` as `\/`, so if you use the secret key as `xxxxxx/xxxx` it will work fine.
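
If you have `jq` installed, one way to read the key back with the `\/` already decoded is (assuming the dump is saved as `user-info.json`, a hypothetical filename):

```
jq -r '.keys[0].secret_key' user-info.json
```

The `-r` flag prints the raw string, so JSON escapes such as `\/` come out as `/`.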

### Minio

Minio is an object storage server built for cloud application developers and devops.

It is very easy to install and provides an S3 compatible server which can be used by rclone.

To use it, install Minio following the instructions from the web site.

When it configures itself Minio will print something like this

```
AccessKey: WLGDGYAQYIGI833EV05A  SecretKey: BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF Region: us-east-1

Minio Object Storage:
     http://127.0.0.1:9000
     http://10.0.0.3:9000

Minio Browser:
     http://127.0.0.1:9000
     http://10.0.0.3:9000
```

These details need to go into `rclone config` like this. Note that it is important to put the region in as stated above.

```
env_auth> 1
access_key_id> WLGDGYAQYIGI833EV05A
secret_access_key> BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region> us-east-1
endpoint> http://10.0.0.3:9000
location_constraint>
server_side_encryption>
```

Which makes the config file look like this

```
[minio]
env_auth = false
access_key_id = WLGDGYAQYIGI833EV05A
secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region = us-east-1
endpoint = http://10.0.0.3:9000
location_constraint =
server_side_encryption =
```

Minio doesn't support all the features of S3 yet. In particular it doesn't support MD5 checksums (ETags) or metadata. This means rclone can't check MD5SUMs or store the modified date. However you can work around this with the `--size-only` flag of rclone.

So once set up, for example to copy files into a bucket

```
rclone --size-only copy /path/to/files minio:bucket
```