--- title: "Amazon S3" description: "Rclone docs for Amazon S3" date: "2014-04-26" --- Amazon S3 --------------------------------------- Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. Here is an example of making an s3 configuration. First run rclone config This will guide you through an interactive setup process. ``` No remotes found - make a new one n) New remote s) Set configuration password n/s> n name> remote Type of storage to configure. Choose a number from below, or type in your own value 1 / Amazon Cloud Drive \ "amazon cloud drive" 2 / Amazon S3 (also Dreamhost, Ceph) \ "s3" 3 / Backblaze B2 \ "b2" 4 / Dropbox \ "dropbox" 5 / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" 6 / Google Drive \ "drive" 7 / Hubic \ "hubic" 8 / Local Disk \ "local" 9 / Microsoft OneDrive \ "onedrive" 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" 11 / Yandex Disk \ "yandex" Storage> 2 Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true" env_auth> 1 AWS Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> access_key AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> secret_key Region to connect to. Choose a number from below, or type in your own value / The default endpoint - a good choice if you are unsure. 1 | US Region, Northern Virginia or Pacific Northwest. | Leave location constraint empty. \ "us-east-1" / US West (Oregon) Region 2 | Needs location constraint us-west-2. \ "us-west-2" / US West (Northern California) Region 3 | Needs location constraint us-west-1. \ "us-west-1" / EU (Ireland) Region Region 4 | Needs location constraint EU or eu-west-1. \ "eu-west-1" / EU (Frankfurt) Region 5 | Needs location constraint eu-central-1. \ "eu-central-1" / Asia Pacific (Singapore) Region 6 | Needs location constraint ap-southeast-1. \ "ap-southeast-1" / Asia Pacific (Sydney) Region 7 | Needs location constraint ap-southeast-2. \ "ap-southeast-2" / Asia Pacific (Tokyo) Region 8 | Needs location constraint ap-northeast-1. \ "ap-northeast-1" / South America (Sao Paulo) Region 9 | Needs location constraint sa-east-1. \ "sa-east-1" / If using an S3 clone that only understands v2 signatures 10 | eg Ceph/Dreamhost | set this and make sure you set the endpoint. \ "other-v2-signature" / If using an S3 clone that understands v4 signatures set this 11 | and make sure you set the endpoint. \ "other-v4-signature" region> 1 Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. Specify if using an S3 clone such as Ceph. endpoint> Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ "" 2 / US West (Oregon) Region. \ "us-west-2" 3 / US West (Northern California) Region. \ "us-west-1" 4 / EU (Ireland) Region. \ "eu-west-1" 5 / EU Region. \ "EU" 6 / Asia Pacific (Singapore) Region. \ "ap-southeast-1" 7 / Asia Pacific (Sydney) Region. \ "ap-southeast-2" 8 / Asia Pacific (Tokyo) Region. 
\ "ap-northeast-1" 9 / South America (Sao Paulo) Region. \ "sa-east-1" location_constraint> 1 The server-side encryption algorithm used when storing this object in S3. Choose a number from below, or type in your own value 1 / None \ "" 2 / AES256 \ "AES256" server_side_encryption> Remote config -------------------- [remote] env_auth = false access_key_id = access_key secret_access_key = secret_key region = us-east-1 endpoint = location_constraint = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` This remote is called `remote` and can now be used like this See all buckets rclone lsd remote: Make a new bucket rclone mkdir remote:bucket List the contents of a bucket rclone ls remote:bucket Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. rclone sync /home/local/directory remote:bucket ### Modified time ### The modified time is stored as metadata on the object as `X-Amz-Meta-Mtime` as floating point since the epoch accurate to 1 ns. ### Multipart uploads ### rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM. ### Buckets and Regions ### With Amazon S3 you can list buckets (`rclone lsd`) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, `incorrect region, the bucket is not in 'XXX' region`. ### Authentication ### There are two ways to supply `rclone` with a set of AWS credentials. In order of precedence: - Directly in the rclone configuration file (as configured by `rclone config`) - set `access_key_id` and `secret_access_key` - Runtime configuration: - set `env_auth` to `true` in the config file - Exporting the following environment variables before running `rclone` - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY` - Running `rclone` on an EC2 instance with an IAM role If none of these option actually end up providing `rclone` with AWS credentials then S3 interaction will be non-authenticated (see below). ### Anonymous access to public buckets ### If you want to use rclone to access a public bucket, configure with a blank `access_key_id` and `secret_access_key`. Eg ``` No remotes found - make a new one n) New remote q) Quit config n/q> n name> anons3 What type of source is it? Choose a number from below 1) amazon cloud drive 2) b2 3) drive 4) dropbox 5) google cloud storage 6) swift 7) hubic 8) local 9) onedrive 10) s3 11) yandex type> 10 Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value * Enter AWS credentials in the next step 1) false * Get AWS credentials from the environment (env vars or IAM) 2) true env_auth> 1 AWS Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> ... ``` Then use it as normal with the name of the public bucket, eg rclone lsd anons3:1000genomes You will be able to list and copy data but not upload it. ### Ceph ### Ceph is an object storage system which presents an Amazon S3 interface. To use rclone with ceph, you need to set the following parameters in the config. 
### Anonymous access to public buckets ###

If you want to use rclone to access a public bucket, configure with a blank `access_key_id` and `secret_access_key`.  Eg

```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> anons3
What type of source is it?
Choose a number from below
 1) amazon cloud drive
 2) b2
 3) drive
 4) dropbox
 5) google cloud storage
 6) swift
 7) hubic
 8) local
 9) onedrive
10) s3
11) yandex
type> 10
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 * Enter AWS credentials in the next step
 1) false
 * Get AWS credentials from the environment (env vars or IAM)
 2) true
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key>
...
```

Then use it as normal with the name of the public bucket, eg

    rclone lsd anons3:1000genomes

You will be able to list and copy data but not upload it.

### Ceph ###

Ceph is an object storage system which presents an Amazon S3 interface.

To use rclone with Ceph, you need to set the following parameters in the config.

```
access_key_id = Whatever
secret_access_key = Whatever
endpoint = https://ceph.endpoint.goes.here/
region = other-v2-signature
```

Note also that Ceph sometimes puts `/` in the passwords it gives users.  If you read the secret access key using the command line tools you will get a JSON blob with the `/` escaped as `\/`.  Make sure you only write `/` in the secret access key.

Eg the dump from Ceph looks something like this (irrelevant keys removed).

```
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
```

Because this is a JSON dump, it is encoding the `/` as `\/`, so if you use the secret key as `xxxxxx/xxxx` it will work fine.

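If you happen to have `jq` installed (an assumption, rclone does not need it), it will do the unescaping for you when pulling the key out of the full dump, saved here as `user.json` (a placeholder name):

```
# -r prints the raw string, so the JSON-escaped "xxxxxx\/xxxx" comes out as xxxxxx/xxxx
jq -r '.keys[0].secret_key' user.json
```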