---
title: "Amazon S3"
description: "Rclone docs for Amazon S3"
date: "2014-04-26"
---

# Amazon S3

Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command). You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.

Here is an example of making an s3 configuration. First run

```
rclone config
```

This will guide you through an interactive setup process.

```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> remote
What type of source is it?
Choose a number from below
 1) swift
 2) s3
 3) local
 4) google cloud storage
 5) dropbox
 6) drive
type> 2
AWS Access Key ID.
access_key_id> accesskey
AWS Secret Access Key (password). 
secret_access_key> secretaccesskey
Region to connect to.
Choose a number from below, or type in your own value
 * The default endpoint - a good choice if you are unsure.
 * US Region, Northern Virginia or Pacific Northwest.
 * Leave location constraint empty.
 1) us-east-1
 * US West (Oregon) Region
 * Needs location constraint us-west-2.
 2) us-west-2
[snip]
 * South America (Sao Paulo) Region
 * Needs location constraint sa-east-1.
 9) sa-east-1
 * If using an S3 clone that only understands v2 signatures - eg Ceph - set this and make sure you set the endpoint.
10) other-v2-signature
 * If using an S3 clone that understands v4 signatures set this and make sure you set the endpoint.
11) other-v4-signature
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> 
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 * Empty for US Region, Northern Virginia or Pacific Northwest.
 1) 
 * US West (Oregon) Region.
 2) us-west-2
 * US West (Northern California) Region.
 3) us-west-1
 * EU (Ireland) Region.
 4) eu-west-1
[snip]
location_constraint> 1
Remote config
--------------------
[remote]
access_key_id = accesskey
secret_access_key = secretaccesskey
region = us-east-1
endpoint = 
location_constraint = 
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
remote               s3

e) Edit existing remote
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> q
```

This remote is called `remote` and can now be used like this:

See all buckets

```
rclone lsd remote:
```

Make a new bucket

```
rclone mkdir remote:bucket
```

List the contents of a bucket

```
rclone ls remote:bucket
```

Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket.

```
rclone sync /home/local/directory remote:bucket
```

## Modified time

The modified time is stored as metadata on the object as `X-Amz-Meta-Mtime`, as a floating point number of seconds since the epoch, accurate to 1 ns.
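
For illustration, that metadata value is just a decimal seconds-since-the-epoch string with nanosecond precision. A minimal Go sketch of that encoding (not rclone's actual code, and the exact formatting rclone uses may differ slightly):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(2015, time.May, 3, 12, 3, 23, 123456789, time.UTC)
	// Seconds since the Unix epoch with nanosecond precision - the general
	// shape of the value stored under the X-Amz-Meta-Mtime key.
	mtime := fmt.Sprintf("%d.%09d", t.Unix(), t.Nanosecond())
	fmt.Println(mtime) // 1430654603.123456789
}
```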

## Multipart uploads

rclone supports multipart uploads with S3, which means that it can upload files bigger than 5 GB. Note that files uploaded with multipart upload don't have an MD5SUM.
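
That is because S3 sets the ETag of a multipart object to a hash of the part MD5s plus a `-<number of parts>` suffix rather than the MD5 of the whole file, so there is no whole-file checksum to read back. A standalone sketch (not part of rclone) that tells the two ETag shapes apart:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// plainMD5 matches a bare 32-character hex MD5, which is what S3 returns as
// the ETag for objects uploaded in a single PUT.
var plainMD5 = regexp.MustCompile(`^[0-9a-f]{32}$`)

// isPlainMD5ETag reports whether an S3 ETag can be used as a whole-file MD5.
// Multipart uploads produce ETags like "<hex>-<number of parts>" instead.
func isPlainMD5ETag(etag string) bool {
	return plainMD5.MatchString(strings.Trim(etag, `"`))
}

func main() {
	fmt.Println(isPlainMD5ETag(`"9e107d9d372bb6826bd81d3542a419d6"`))   // true - single PUT
	fmt.Println(isPlainMD5ETag(`"3858f62230ac3c915f300c664312c11f-9"`)) // false - multipart
}
```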

## Ceph

Ceph is an object storage system which presents an Amazon S3 interface.

To use rclone with Ceph, you need to set the following parameters in the config:

```
access_key_id = Whatever
secret_access_key = Whatever
endpoint = https://ceph.endpoint.goes.here/
region = other-v2-signature
```
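
Putting that together, a remote for Ceph in rclone's config file might look something like this. This is only a sketch: the remote name `ceph`, the placeholder keys and the endpoint are all assumptions to adapt, while the `type = s3` line is what selects the S3 backend.

```
[ceph]
type = s3
access_key_id = Whatever
secret_access_key = Whatever
region = other-v2-signature
endpoint = https://ceph.endpoint.goes.here/
```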

Note also that Ceph sometimes puts `/` in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the `/` escaped as `\/`. Make sure you only write `/` in the secret access key.

For example, the dump from Ceph looks something like this (irrelevant keys removed).

```
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
```

Because this is a JSON dump, it is encoding the `/` as `\/`, so if you use the secret key as `xxxxxx/xxxx` it will work fine.
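
Any JSON decoder undoes that escaping for you. A minimal Go sketch showing the round trip (the key names match the dump above; everything else is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// JSON may escape "/" as "\/"; decoding gives back the real "/".
	blob := []byte(`{"access_key": "xxxxxx", "secret_key": "xxxxxx\/xxxx"}`)
	var creds struct {
		AccessKey string `json:"access_key"`
		SecretKey string `json:"secret_key"`
	}
	if err := json.Unmarshal(blob, &creds); err != nil {
		panic(err)
	}
	fmt.Println(creds.SecretKey) // xxxxxx/xxxx - this is what goes in the config
}
```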