Before this change, large objects which had had their contents deleted
would return "Object not found" and break the listing.
This change makes these objects appear as 0 sized entities so they can
be listed and deleted.
pCloud appears to have opened up a new region and they are returning
the hostname in the oauth callback, thus
GET /?code=XXX&locationid=1&hostname=api.pcloud.com&state=XXX HTTP/1.1
GET /?code=XXX&locationid=2&hostname=eapi.pcloud.com&state=XXX HTTP/1.1
This isn't documented yet, however pCloud have confirmed that this is
the correct interpretation.
Rclone now reads the "hostname" parameter in the oauth callback and
stores it in the config file. It uses it for all subsequent API calls.
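A minimal sketch of reading that extra parameter from the callback URL (the function and the default fallback here are illustrative, not rclone's actual oauth handler):

```
package main

import (
	"fmt"
	"net/url"
)

// hostnameFromCallback extracts the optional "hostname" parameter that
// pCloud appends to the oauth redirect, falling back to the US API
// endpoint when it is absent.
func hostnameFromCallback(callback string) (string, error) {
	u, err := url.Parse(callback)
	if err != nil {
		return "", err
	}
	host := u.Query().Get("hostname")
	if host == "" {
		host = "api.pcloud.com" // assumed default region
	}
	return host, nil
}

func main() {
	host, _ := hostnameFromCallback("/?code=XXX&locationid=2&hostname=eapi.pcloud.com&state=XXX")
	fmt.Println(host) // eapi.pcloud.com
}
```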
Previous to this a dangling shortcut would error the directory
listing.
This patch makes dangling shortcuts appear as 0 sized objects in the
directory listing so they can be deleted. These objects can't be read
though.
For some objects the onedrive backend has been doing a server side
copy and a delete when a server side move would have worked OK.
This was caused by not detecting the home drive correctly (when it was
an empty string) and assuming that these transfers were cross drive.
This is fixed by canonicalizing the drive IDs before comparing them.
Currently credentials are required to download a public bucket file
which is not really necessary and makes automated usage more complex.
Add a new option "anonymous" which when enabled configures the gcs
backend to use an anonymous HTTP client. This of course only works
for read access and trying to write will lead to errors like this:
"googleapi: Error 401: Anonymous caller does not have
storage.objects.create access to the Google Cloud Storage object.",
as expected. By default the anonymous access option is disabled so that
the GCS Application Default Credentials are still used by default as
before and an error is given if they can't be found.
Before this change rclone used the relative path from the current
working directory.
It appears that WS FTP doesn't like this and the openssh sftp tool
also uses absolute paths which is a good reason for switching to
absolute paths.
This change reads the current working directory at startup and bases
all file requests from there.
See: https://forum.rclone.org/t/sftp-ssh-fx-failure-directory-not-found/17436
Before this change the --local-no-updated flag would not error if the
files changed in size during the transfer. The file could still be
read beyond the size advertised though which caused problems with
certain backends.
After this change we attempt to provide a consistent view of the file
once it has been opened.
Once the file has had stat() called on it for the first time we
- Only transfer the size that stat gave
- Only checksum the size that stat gave
- Don't update the stat info for the file
This means that files that are extending can be transferred - rclone
will transfer the length it saw the first time it listed the file.
See: https://forum.rclone.org/t/transport-connection-broken/16494/21
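A rough sketch of the idea, using a hypothetical wrapper around *os.File rather than the local backend's actual types: the size from the first stat is recorded and reads never go past it.

```
package example

import (
	"io"
	"os"
)

// fixedSizeReader serves at most the size recorded when the file was
// first stat'ed, so a file that grows during the transfer still looks
// consistent to the caller.
type fixedSizeReader struct {
	f    *os.File
	left int64 // bytes remaining of the originally stat'ed size
}

func newFixedSizeReader(f *os.File) (*fixedSizeReader, error) {
	fi, err := f.Stat()
	if err != nil {
		return nil, err
	}
	return &fixedSizeReader{f: f, left: fi.Size()}, nil
}

// Read caps reads at the original size and reports EOF beyond it, so
// checksums and transfers both see the length from the first listing.
func (r *fixedSizeReader) Read(p []byte) (int, error) {
	if r.left <= 0 {
		return 0, io.EOF
	}
	if int64(len(p)) > r.left {
		p = p[:r.left]
	}
	n, err := r.f.Read(p)
	r.left -= int64(n)
	return n, err
}
```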
In this commit
5c5ad6220 drive: fix --drive-impersonate with cached root_folder_id
We disabled the use of root_folder_id with --drive-impersonate to fix
a problem with a cached root_folder_id giving the wrong results.
This, alas, broke one user's setup with a root_folder_id of
appDataFolder. Since this is identifiable and definitely couldn't have
been cached, we can safely skip this check in this case.
See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215/10
Before this change there was lots of duplicated code in all the
dircache using backends to support DirMove.
This change factors this code into the dircache library.
Dircache was changed to:
- Remove special cases for the root directory
- Remove Fatal errors
- Call FindRoot on behalf of the user wherever possible
- Bring up to modern Go standards
Backends were changed to:
- Remove calls to FindRoot
- Change calls to FindRootAndPath to FindPath
- Don't make special cases for the root
This fixes several corner cases, for example removing a non-existent
directory if FindRoot hasn't been called.
Before this fix rclone would continually try to delete non empty
segment containers which made deleting lots of files very slow.
This fix makes rclone just try the delete once and then carry on which
was the original intent of the code before the retry logic got put in.
Before this change if the server sent us xml like this
```
<D:propstat>
<D:prop>
<g0:quota-available-bytes/>
<g0:quota-used-bytes/>
</D:prop>
<D:status>HTTP/1.1 404 Not Found</D:status>
</D:propstat>
```
Rclone would read the empty XML items as containing 0.
After this fix we make sure that we have a value before using it.
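A hedged sketch of the approach: keep the quota values as strings so an empty element stays empty instead of parsing as 0 (the struct here is illustrative, not the exact webdav types):

```
package main

import (
	"encoding/xml"
	"fmt"
	"strconv"
)

// prop keeps the quota values as strings so an empty
// <quota-available-bytes/> element stays "" rather than becoming 0.
type prop struct {
	Available string `xml:"quota-available-bytes"`
	Used      string `xml:"quota-used-bytes"`
}

func main() {
	data := []byte(`<prop><quota-available-bytes/><quota-used-bytes/></prop>`)
	var p prop
	if err := xml.Unmarshal(data, &p); err != nil {
		panic(err)
	}
	if p.Available == "" {
		fmt.Println("no quota information - leaving usage unset")
		return
	}
	avail, err := strconv.ParseInt(p.Available, 10, 64)
	if err != nil {
		panic(err)
	}
	fmt.Println("available bytes:", avail)
}
```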
Before this fix rclone v1.51 and 1.52 would incorrectly use the cached
root_folder_id when the --drive-impersonate flag was in use. This
meant that rclone could be looking up the wrong directory ID with
unpredictable results - usually all files apparently being missing.
This fix makes rclone look up the root_folder_id always when using
--drive-impersonate. It does this by clearing the root_folder_id and
making a NOTICE message that it is ignoring the cached value.
It also stops rclone caching the root_folder_id when using
--drive-impersonate.
See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215
Adding the expires parameter gives settings_error/not_authorized/.. errors.
The expires setting isn't in the documentation so this commit removes
it for now.
For SSH authentication, `key_pem` should both override `key_file`
and not require other SSH authentication methods to be set.
Prior to this fix, rclone would attempt to use an ssh-agent
when `key_pem` was the only SSH authentication method set.
Fixes #4240
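A sketch of the intended precedence using golang.org/x/crypto/ssh directly; the variable names are illustrative rather than the sftp backend's actual config fields:

```
package example

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// authMethods builds the SSH auth methods: an inline PEM key takes
// priority over a key file, and neither falls through to ssh-agent.
func authMethods(keyPem, keyFile string) ([]ssh.AuthMethod, error) {
	var pemBytes []byte
	var err error
	switch {
	case keyPem != "":
		pemBytes = []byte(keyPem)
	case keyFile != "":
		pemBytes, err = os.ReadFile(keyFile)
		if err != nil {
			return nil, err
		}
	default:
		return nil, nil // caller may fall back to ssh-agent or password here
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		return nil, err
	}
	return []ssh.AuthMethod{ssh.PublicKeys(signer)}, nil
}
```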
Before this change we were setting the headers on the PUT
request for normal and multipart uploads. For normal uploads this caused the error
403 Forbidden: There were headers present in the request which were not signed
After this fix we set the headers in the object upload request itself
as the s3 SDK expects.
This means that we only support a limited range of headers
- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Amz-Tagging
- X-Amz-Meta-
Note that the last of those is for setting custom metadata in the form
"X-Amz-Meta-Key: value".
This now works for multipart uploads and single part uploads
See also #59
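Roughly, with the AWS SDK for Go the headers now travel on the upload input itself rather than being injected into the raw HTTP request. A sketch (not rclone's exact code):

```
package example

import (
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func upload(bucket, key, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	sess := session.Must(session.NewSession())
	uploader := s3manager.NewUploader(sess)
	// Headers are expressed as fields of the upload input so the SDK
	// signs them, instead of being set on the raw PUT request.
	_, err = uploader.Upload(&s3manager.UploadInput{
		Bucket:             aws.String(bucket),
		Key:                aws.String(key),
		Body:               f,
		CacheControl:       aws.String("no-cache"),
		ContentDisposition: aws.String(`attachment; filename="file.bin"`),
		ContentType:        aws.String("application/octet-stream"),
		Metadata:           map[string]*string{"Key": aws.String("value")}, // X-Amz-Meta-Key
	})
	return err
}
```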
This provides two things:
* It gives Storj insight into which uplink clients are using the
network.
* It facilitates rclone's participation in the Tardigrade Open Source
Partner Program https://tardigrade.io/partner/
* s3: add `max_upload_parts` support
This allows configuring the maximum number of chunks used to upload a file:
- Support Scaleway which has a limit of 1k chunks currently
- Reduce cost on S3, where each request costs money, at the expense of memory used
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
This adds expire and unlink fields to the PublicLink interface.
This fixes up the affected backends and removes unlink parameters
where they are present.
This factors copy out of SetModTime and Copy so it can be called from
both places.
This also reworks all the multipart uploading to use errgroup and
memory pooling like the other backends. This makes it more memory
efficient and handle errors better.
See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/10
Before this change, attempting to upload a single file into an s3
bucket which did not have create permission gave an AccessDenied: Access
Denied error when it tried to create the bucket.
This was masked until e2bf91452a was
fixed.
This fix marks the bucket as OK if a fetch on an object indicates it
is OK. This stops rclone thinking it has to create the bucket in the
first place.
Fixes #4297
This is caused by a bug in Google drive where, in some circumstances
querying for "(A in parents) or (B in parents)" returns nothing
whereas querying for "A in parents" and "B in parents" separately
works fine.
This has been reported here:
https://issuetracker.google.com/issues/149522397
This workaround detects this condition by seeing if a listing for more
than one directory at once returns nothing.
If it does then it retries each one individually.
This can potentially have a false positive if the user has multiple
empty directories which are queried at once. The consequence of this
will be that ListR is disabled for a while until the directories are
found to be actually empty in which case it will be re-enabled.
Fixes #3114 and Fixes #4289
This reverts commit 9e4b68a364.
This does not work as intended - it only changes docs files and to
make it change drive files would take an extra roundtrip.
I think the semantics of server side copy are now correct - additional
features should be added with a new flag.
See #4230
When wrapping a backend that supports Server Side Copy (e.g. `b2`, `s3`)
and configuring the `tmp_upload_path` option, the `cache` backend would
erroneously report that Server Side Copy/Move was not supported, causing
operations such as file moves to fail. This change fixes this issue
under these circumstances such that Server Side Copy will now be used
when the wrapped backend supports it.
Fixes #3206
Before this change we early exited the SetModTime call which means we
skipped reading the info about the file.
This change reads info about the file in the SetModTime call even if
we are skipping setting the modtime.
See: https://forum.rclone.org/t/sftp-and-set-modtime-false-error/16362
This commit changes the rclone backend encode crypt: and decode
commands to output a plain list of decoded or encoded file names.
This makes the command much more useful for command line scripting.
Enable fast list functions for union backend when:
- at least one of the upstreams supports fast list
- the upstreams only consist of backends that support fast list and the local backend.
Fixes #3000
When server side copying Google docs files we attempt to preserve the
description.
This patch makes it so that we use the default description if the
original description was empty.
See: 6fdd7149c1 (commitcomment-38008638)
Before this change, for some operations, eg rcat or copyto (of a file)
rclone would attempt to create the container when using a SAS URL
limited to a container.
After this change we assume the container does not need creating when
using a container SAS URL.
See: https://forum.rclone.org/t/rclone-rcat-azure-blob-container-sas-token-403-error/16286
This also fixes typo in the name of the function, and allows making
shortcuts from the root directory which are useful in cross drive
shortcut creation.
This also adds a basic suite of tests for creating, listing and
removing shortcuts.
This means that we can return ErrorNotAFile when there is an object
with the same name as a directory rather than potentially creating a
duplicate name.
Before this change we were setting the headers on the PUT request. However this isn't where GCS needs them.
After this fix we set the headers in the object upload request itself.
This means that we only support a limited range of headers
- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Goog-Meta-
Note that the last of those is for setting custom metadata in the form
"X-Goog-Meta-Key: value".
Before this change the local backend was returning file not found
errors for post transfer hashes for files which were moved. This was
caused by the routine which checks for the object being changed.
After this change we ignore file not found errors while checking to
see if the object has changed. If the hash has to be computed then a
file not found error will be thrown when it is opened, otherwise the
cached hash will be returned.
Before this change rclone would skip all shortcuts with a message
Ignoring unknown document type "application/vnd.google-apps.shortcut"
After this change rclone resolves the shortcuts by default to the
actual files that they point to. See the docs for more info.
The --drive-skip-shortcuts flag can be used to skip shortcuts.
Before this change the newObject* functions could return object=nil
with err=nil. The result of these functions are passed outside of the
backend code (eg in Copy, Move) and returning a nil object with a nil
error leads to crashes elsewhere as it breaks expectations.
After this change we return (nil, fs.ErrorObjectNotFound) in these
cases. The one place this is actually needed internally (when turning
items into listings) we detect that error and use it to mean skip the
directory item.
This problem was noticed while testing the shortcuts code. It
shouldn't happen normally but it is conceivable it could.
Apparently some tools (eg duplicati) upload the SHA1 in uppercase to
b2 to be stored in the `large_file_sha1` metadata. This patch forces
it to lower case.
According to Microsoft support this error can be caused by
> A timing/concurrency issue where the PUT operations are happening
> about the same time for a single blob. The Put Block List operation
> writes a blob by specifying the list of block IDs that make up the
> blob. In order to be written as part of a blob, a block must have
> been successfully written to the server in a prior Put Block
> operation.
>
> Documentation reference:
>
> https://docs.microsoft.com/en-us/rest/api/storageservices/put-block
>
> This error can happen when doing concurrent upload commits after you
> have started the upload but before you commit. In that case, the
> upload fails. The application can retry this error or attempt some
> other recovery action based on the required scenario.
See: https://forum.rclone.org/t/error-while-syncing-with-azure-blob-storage-x-ms-error-code-invalidbloborblock/15561
For a certain class of broken or missing image Google Photos puts an
image in the error message.
Before this fix we blindly chucked it into the error message.
After this fix we replace it with some sensible text.
Before this change crypt would not calculate hashes for files it was
uploading. This is because, in the general case, they have to be
downloaded, encrypted and hashed which is too resource intensive.
However this causes backends which need the hash first before
uploading (eg s3/b2 when uploading chunked files) not to have a hash
of the file. This causes cryptcheck to complain about missing hashes
on large files uploaded via s3/b2.
This change calculates hashes for the upload if the upload is coming
from a local filesystem. It does this by encrypting and hashing the
local file re-using the code used by cryptcheck. For a local disk this
is not a lot more intensive than calculating the hash.
See: https://forum.rclone.org/t/strange-output-for-cryptcheck/15437
Fixes #2809
Previously we had a map of pools for different chunk sizes.
In practice the mapping is not very useful and requires a lock.
Pools of sizes other than ChunkSize can only happen when we have a huge file (over 10k * ChunkSize).
For the mapping to pay off we would need a bunch of identically sized huge files, in which case ChunkSize should most likely be increased anyway.
The mapping and its lock are replaced with a single initialised pool for ChunkSize; in other cases a pool is allocated and freed on a per-file basis.
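A minimal sketch of the design using sync.Pool as a stand-in for rclone's pool type: one long-lived pool for the common ChunkSize, with other sizes handled outside it (in the real change via a short-lived per-file pool):

```
package example

import "sync"

const chunkSize = 8 * 1024 * 1024 // stand-in for the configured ChunkSize

// chunkPool recycles buffers of exactly chunkSize bytes.
var chunkPool = sync.Pool{
	New: func() interface{} { return make([]byte, chunkSize) },
}

// getBuffer returns a buffer of n bytes, only pooling the common case.
func getBuffer(n int) []byte {
	if n == chunkSize {
		return chunkPool.Get().([]byte)
	}
	return make([]byte, n) // rare oversized chunk: handled outside the shared pool
}

// putBuffer returns a buffer to the pool if it is the pooled size.
func putBuffer(b []byte) {
	if len(b) == chunkSize {
		chunkPool.Put(b)
	}
}
```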
Rclone can't safely delete files with multiple parents without
PATCHing the parents list. This can be done, but since multiple
parents are going away to be replaced by drive shortcuts we return an
error for now.
See #4013
Before this change we queried /me/drives for a list of the user's
drives and asked the user to choose. Sometimes this does not return
the user's main drive for reasons unknown.
After this change we query /me/drives first then /me/drive and add
that to the list of drives if it wasn't already there.
In 5470d34740 "backend/s3: use low-level-retries as the number
of SDK retries" we switched over to using the AWS SDK low level
retries instead of rclone's low level retry logic.
This had the unfortunate effect that retrying listings to correct XML
Syntax errors failed on non-S3 backends such as Ceph. The AWS SDK was
also retrying the XML Syntax error request which doesn't make sense.
This change turns off the AWS SDK retries in favour of just using
rclone's retry logic.
If the chunk size is more than 250M (262,144,000 bytes) then the API throws the following error:
Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big. The server does not allow messages larger than 262144000 bytes.
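A hedged sketch of the kind of guard this implies; the limit comes from the error above, the function name is illustrative:

```
package example

import "fmt"

const maxChunkSize = 250 * 1024 * 1024 // 262,144,000 bytes - SharePoint's hard limit

// checkChunkSize rejects chunk sizes the OneDrive/SharePoint API will refuse.
func checkChunkSize(cs int64) error {
	if cs > maxChunkSize {
		return fmt.Errorf("chunk size %d exceeds the 262144000 byte limit", cs)
	}
	return nil
}
```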
Before this change rclone didn't use sparse files on Windows. This
meant that when you downloaded a file with multithread download it
wrote the entire file with zeros on the first write if that write was
not at the start of the file.
This change makes the file be sparse on Windows. Linux/macOS files
were already sparse.
Before this change shared with me items with multiple parents (ie most
of them that aren't in the root) would appear twice in the directory
listings.
This fixes the problem by doing an early exit for shared with me
items.
This bug was introduced here by removing some necessary code detecting
shared with me items at the root with no parents.
4453fa4ba6 "drive: fix --fast-list when using appDataFolder"
This fix reverts that part of the patch.
Fixes #4018
This adds a bit of missed locking around the uploaded info to fix the
concurrent map write.
All the other accesses have locking - this one must have got missed.
pureftpd has a bug where it sends messages like this
```
150-Accepted data connection\r\n
Response code: File status okay; about to open data connection (150)
Response arg: Accepted data connection
150 32768.0 kbytes to download\r\n
150 0.014 seconds (measured here), 1665.27 Mbytes per second\r\n
```
The last `150` is treated as a new response - the previous `150` should have been `150-`.
This means that rclone sees the `150 0.014 seconds (measured here),
1665.27 Mbytes per second` as a reply to the next message and reports
it as an error.
This fix ignores that specific message when it is received in the
`Close` method. It dumps the FTP connection after as it is out of
sync.
See: #3984
Fixes #3445
Before this change if rclone failed to close a file download for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.
This fix returns the concurrency token regardless of the error status.
Before this change if rclone failed to upload a file for some reason
it would leak a concurrency token. When all the tokens were leaked
then rclone would lock up.
The fix returns the concurrency token regardless of the error state.
Before this change if rclone failed to make an FTP connection for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.
The fix returns the concurrency token if creating the FTP connection
returns an error.
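All three fixes have the same shape. A sketch with a plain channel semaphore (not rclone's actual connection or token types):

```
package example

import "os"

// tokens limits the number of concurrent connections.
var tokens = make(chan struct{}, 4)

// openWithToken takes a token and guarantees it is returned if the
// operation fails, so errors no longer leak concurrency slots.
func openWithToken(path string) (*os.File, error) {
	tokens <- struct{}{} // acquire
	f, err := os.Open(path)
	if err != nil {
		<-tokens // release on error - the bug was skipping this
		return nil, err
	}
	// the caller must release the token when it closes the file
	return f, nil
}
```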
Amazon S3 is built to handle different kinds of workloads.
In rare cases where S3 is not able to scale for whatever reason users
will face status 500 errors.
The main mechanism for handling these errors is retries.
The number of retries needed varies for each use case.
This change makes retries for the s3 backend configurable using the
--low-level-retries option.
Currently each multipart upload allocated its own buffers, which were
discarded after the file upload. Subsequent files couldn't leverage the
already allocated memory, which resulted in inefficient memory
management. This change introduces a backend memory pool keeping memory
chunks which can be used during object operations.
Fixes #3967
The error code 500 Internal Error indicates that Amazon S3 is unable to handle the request at that time. The error code 503 Slow Down typically indicates that the requests to the S3 bucket are very high, exceeding the request rates described in Request Rate and Performance Guidelines.
Because Amazon S3 is a distributed service, a very small percentage of 5xx errors are expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can and should be retried, so we recommend that applications making requests to Amazon S3 have a fault-tolerance mechanism to recover from these errors.
https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/
This removes the unused functions run.writeRemoteRandomBytes(), run.writeObjectRandomBytes(), run.listPath(), Directory.parentRemote() and Persistent.dumpRoot().
Before this change, when uploading multipart files, onedrive would
sometimes return an unexpected 416 error and rclone would abort the
transfer.
This is usually after a 500 error which caused rclone to do a retry.
This change checks the upload position on a 416 error and works out how
much of the current chunk to skip, then retries (or skips) the current
chunk as appropriate.
If the position is before the current chunk or after the current chunk
then rclone will abort the transfer.
See: https://forum.rclone.org/t/fragment-overlap-error-with-encrypted-onedrive/14001
Fixes #3131
This hides:
- "use_created_date"
- "use_shared_date"
- "size_as_quota"
from the configurator (rclone config) as they interfere with normal
operations and shouldn't be set in a backend config.
They can still be put in the config file by hand and will still work
as variables, etc.
This adds some more docs to "size_as_quota" also.
Fixes #3912
Before this change we used non multipart uploads for files of unknown
size (streaming and uploads in mount). This is slower and less
reliable and is not recommended by Google for files smaller than 5MB.
After this change we use multipart resumable uploads for all files of
unknown length. This will use an extra transaction so is less
efficient for files under the chunk size, however the natural
buffering in the operations.Rcat call specified by
`--streaming-upload-cutoff` will overcome this.
See: https://forum.rclone.org/t/upload-behaviour-and-speed-when-using-vfs-cache/9920/
This error started happening after updating golang/x/crypto which was
done as a side effect of:
3801b8109 vendor: update termbox-go to fix ncdu command on FreeBSD
This turned out to be a deliberate policy of making
ssh.ParsePrivateKeyWithPassphrase fail if the passphrase was empty.
See: https://go-review.googlesource.com/c/crypto/+/207599
This fix calls ssh.ParsePrivateKey if the passphrase is empty and
ssh.ParsePrivateKeyWithPassphrase otherwise which fixes the problem.
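The fix boils down to choosing the parser based on whether a passphrase was supplied; a sketch using golang.org/x/crypto/ssh:

```
package example

import "golang.org/x/crypto/ssh"

// parseKey picks the right parser: ParsePrivateKeyWithPassphrase now
// rejects an empty passphrase, so only call it when we have one.
func parseKey(pemBytes []byte, passphrase string) (ssh.Signer, error) {
	if passphrase == "" {
		return ssh.ParsePrivateKey(pemBytes)
	}
	return ssh.ParsePrivateKeyWithPassphrase(pemBytes, []byte(passphrase))
}
```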
If the --drive-stop-on-upload-limit flag is in effect this checks the
error string from Google Drive to see if it is the error you get when
you've breached your 750GB a day limit.
If so then it turns this error into a Fatal error which should stop
the sync.
Fixes #3857
In listings if the ID `appDataFolder` is used to list a directory the
parents of the items returned have the actual ID instead of the alias
`appDataFolder`. This confused the ListR routine into ignoring all
these items.
This change makes the listing routine accept all parent IDs returned
if there was only one ID in the query. This fixes the `appDataFolder`
problem. This means we are relying on Google Drive to only return the
items we asked for which is probably OK.
Fixes #3851
The S3 ListObject API returns paginated bucket listings, with
"MaxKeys" items for each GET call.
The default value is 1000 entries, but for buckets with millions of
objects it might make sense to request more elements per request, if
the backend supports it. This commit adds a "list_chunk" option for
the user to specify a lower or higher value.
This commit does not add safeguards around this value - if a user
requests too large a list, it might result in connection
timeouts (on the server or client).
In AWS S3, there is a fixed limit of 1000, some other services might
have one too. In Ceph, this can be configured in RadosGW.
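With the AWS SDK for Go this maps to the MaxKeys field on the list request. A sketch (the listChunk parameter name is illustrative):

```
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// listPage asks for up to listChunk keys per request instead of the
// server default of 1000.
func listPage(svc *s3.S3, bucket, marker string, listChunk int64) (*s3.ListObjectsOutput, error) {
	return svc.ListObjects(&s3.ListObjectsInput{
		Bucket:  aws.String(bucket),
		Marker:  aws.String(marker),
		MaxKeys: aws.Int64(listChunk),
	})
}
```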
Before this patch we were failing to URL decode the NextMarker when
url encoding was used for the listing.
The result of this was duplicated listings entries for directories
with >1000 entries where the NextMarker was a file containing a space.
Before this change we used the same (relatively low) limits for server
side copy as we did for multipart uploads. It doesn't make sense to
use the same limits since no data is being downloaded or uploaded for
a server side copy.
This change introduces a new parameter --s3-copy-cutoff to control
when the switch from single to multipart server side copy happens and
defaults it to the maximum 5GB.
This makes server side copies much more efficient.
It also fixes the erroneous error when trying to set the modification
time of a file bigger than 5GB.
See #3778
Before this change multipart copies were giving the error
Range specified is not valid for source object of size
This was due to an off by one error in the range source introduced in
7b1274e29a "s3: support for multipart copy"
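HTTP ranges are inclusive at both ends, so for a part covering offset..offset+size the end of the range must be offset+size-1. A small sketch of building the source range value:

```
package example

import "fmt"

// copySourceRange formats the source range for one part of a multipart
// copy. Ranges are inclusive at both ends, hence the -1 on the end.
func copySourceRange(offset, size int64) string {
	return fmt.Sprintf("bytes=%d-%d", offset, offset+size-1)
}
```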
Before this change rclone used "Authorization: BEARER token". However
according to the RFC this should be "Bearer"
https://tools.ietf.org/html/rfc6750#section-2.1
This changes it to "Authorization: Bearer token"
Fixes #3751 and interop with Salesforce WebDAV server
When using nextcloud, before this change we only uploaded one of SHA1
or MD5 checksum in the OC-Checksum header with preference to SHA1 if
both were set.
This made the MD5 checksums read back as an empty string, which made
syncing with checksums less useful than it should be as all the MD5
checksums were blank.
This change makes it so that we only upload the SHA1 to nextcloud.
The behaviour of owncloud is unchanged as owncloud uses the checksum
as an upload integrity check only and calculates its own checksums.
See: https://forum.rclone.org/t/how-to-specify-hash-method-to-checksum/13055
This also corrects the symlink detection logic to only check symlink
files. Previous to this it was checking all directories too which was
making it do more stat calls than was necessary.
Before this change we forgot to URL decode the X-Object-Manifest in a dynamic large object.
This problem was introduced by 2fe8285f89 "swift: reserve
segments of dynamic large object when delete objects in container what
was enabled versioning."
For a few commands, rclone counted an error multiple times. This was fixed by
creating a new error type which keeps a flag to remember if the error has
already been counted or not. The CountError function now wraps the original
error with the above new error type and returns it.
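A hedged sketch of the wrapper; the names are illustrative rather than rclone's actual fserrors/accounting code:

```
package example

// countedErr wraps an error and remembers whether it has already been
// added to the error counter.
type countedErr struct {
	error
	counted bool
}

var errorCount int

// CountError counts err once and returns a wrapped error that will not
// be counted again if it passes through CountError a second time.
func CountError(err error) error {
	if err == nil {
		return nil
	}
	if ce, ok := err.(*countedErr); ok && ce.counted {
		return ce
	}
	errorCount++
	return &countedErr{error: err, counted: true}
}
```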
Before this change rclone used the team_drive ID as the root if set
even if the root_folder_id was set too.
This change uses the root_folder_id in preference over the team_drive
which restores the functionality.
This problem was introduced by ba7c2ac443
Fixes #3742
We attempt to find the ID of the root folder by doing a GET on the
folder ID "root". With scope "drive.files" this fails with a 404
message.
After this change if we get the 404 message, we just carry on using
"root" as the root folder ID and we cache that for future lookups.
This means that changenotify messages will not work correctly in the
root folder but otherwise has minor consequences.
See: https://forum.rclone.org/t/fresh-raspberry-pi-build-google-drive-404-error-failed-to-ls-googleapi-error-404-file-not-found/12791
Before this change rclone would allow the user to stream (eg with
rclone mount, rclone rcat or uploading google photos or docs) 5TB
files. This meant that rclone allocated 4 * 525 MB buffers per
transfer which is way too much memory by default.
This change makes rclone use the configured chunk size for streamed
uploads. This is 5MB by default which means that rclone can stream
upload files up to 48GB by default staying below the 10,000 chunks
limit.
This can be increased with --s3-chunk-size if necessary.
If rclone detects that a file is being streamed to s3 it will make a
single NOTICE level log stating the limitation.
This fixes the enormous memory usage.
Fixes #3568
See: https://forum.rclone.org/t/how-much-memory-does-rclone-need/12743
Before this fix we neglected to add the shared drive ID to the request
when asking for an initial change notify token and this caused a lot
more results to be returned than was necessary.
When we changed recursive lists to use --fast-list by default this
broke listing with --drive-shared-with-me from the root.
This turned out to be an unwarranted assumption in the ListR code that
all items would have a parent folder that we had searched for - this
isn't true for shared with me items.
This was fixed when using --drive-shared-with-me to give items that
didn't have any parents a synthetic parent.
Fixes #3639
Before this change we used the id "root" as an alias for the root drive ID.
However this causes problems when we receive IDs back from drive which
are not in this format and have been expanded to their canonical ID.
This change looks up the ID "root" and stores it in the
"drive_folder_id" parameter in the config file.
This helps with
- Notifying changes at the root
- Files shared with me at the root
See #3639
Before this change when rclone was compiled with go1.13 it used HTTP/2
to contact drive by default.
This causes lockups and INTERNAL_ERRORs from the HTTP/2 code.
This is a workaround which disables the HTTP/2 code via an option.
It can be re-enabled with `--drive-disable-http2=false`
See #3631
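For reference, the standard way to keep Go's net/http client on HTTP/1.1 is to set a non-nil, empty TLSNextProto map on the transport; a sketch:

```
package example

import (
	"crypto/tls"
	"net/http"
)

// newHTTP1Client returns a client that will not negotiate HTTP/2:
// a non-nil empty TLSNextProto map disables the bundled h2 support.
func newHTTP1Client() *http.Client {
	t := http.DefaultTransport.(*http.Transport).Clone()
	t.TLSNextProto = map[string]func(string, *tls.Conn) http.RoundTripper{}
	return &http.Client{Transport: t}
}
```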
Before this change we silently skipped uploads to dropbox of
disallowed file names. However this then caused "corrupted on
transfer" errors because the sizes were wrong.
After this change we return a no retry error which means that the
sync fails (as it should - not all files were uploaded) but no
unnecessary retries happen.
This works around a bug in Ceph which doesn't encode CommonPrefixes
when using URL encoded directory listings.
See: https://tracker.ceph.com/issues/41870
changes:
- chunker: remove GetTier and SetTier
- remove wdmrcompat metaformat
- remove fastopen strategy
- make hash_type option non-advanced
- advertise hash support when possible
- add metadata field "ver", run strict checks
- describe internal behavior in comments
- improve documentation
note:
wdmrcompat used to write the file name in the metadata, so the maximum
metadata size was 1K; removing it allows capping the size at 200 bytes now.
Note: chunker implements many irrelevant methods (UserInfo, Disconnect etc),
but they are required by TestIntegration/FsCheckWrap and cannot be removed.
Dropped API methods: MergeDirs DirCacheFlush PublicLink UserInfo Disconnect OpenWriterAt
Meta formats:
- renamed old simplejson format to wdmrcompat.
- new simplejson format supports hash sums and verification of chunk size/count.
Change list:
- split-chunking overlay for mailru
- add to all
- fix linter errors
- fix integration tests
- support chunks without meta object
- fix package paths
- propagate context
- fix formatting
- implement new required wrapper interfaces
- also test large file uploads
- simplify options
- user friendly name pattern
- set default chunk size 2G
- fix building with golang 1.9
- fix ci/cd on a separate branch
- fix updated object name (SyncUTFNorm failed)
- fix panic in Box overlay
- workaround: Box rename failed if name taken
- enhance comments in unit test
- fix formatting
- embed wrapped remote rather than inherit
- require wrapped remote to support move (or copy)
- implement 3 (keep fstest)
- drop irrelevant file system interfaces
- factor out Object.mainChunk
- refactor TestLargeUpload as InternalTest
- add unit test for chunk name formats
- new improved simplejson meta format
- tricky case in test FsIsFile (fix+ignore)
- remove debugging print
- hide temporary objects from listings
- fix bugs in chunking reader:
- return EOF immediately when all data is sent
- handle case when wrapped remote puts by hash (bug detected by TestRcat)
- chunked file hashing (feature)
- server-side copy across configs (feature)
- robust cleanup of temporary chunks in Put
- linear download strategy (no read-ahead, feature)
- fix unexpected EOF in the box multipart uploader
- throw error if destination ignores data
When used with v2_auth = true, PresignRequest doesn't return
signed headers, so authentication against the remote destination would fail.
This commit copies HTTPRequest.Header back to headers.
Tested with RiakCS v2.1.0.
Signed-off-by: Anthony Rusdi <33247310+antrusd@users.noreply.github.com>
- Read the storage class for each object
- Implement SetTier/GetTier
- Check the storage class on the **object** before using SetModTime
This updates the fix in 1a2fb52 so that SetModTime works when you are
using objects which have been migrated to GLACIER but you aren't using
GLACIER as a storage class.
Fixes #3522
Before this change we used PATCH on the object to update the metadata.
Apparently this requires the "full_control" scope which Google were
unhappy with in their oauth review.
This changes it to update the metadata by copying the object on top of
itself (which is the way s3 works). This can be done with normal
permissions.
This fixes a crash on the google photos backend when an error is
returned from the rest.Call function.
This turned out to be a mis-understanding of the rest docs so
- improved rest.Call docs
- fixed mis-understanding in google photos backend
- fixed similar mis-understanding in onedrive backend
- change the interface of listBuckets() removing dir parameter and adding context
- add makeBucket() and use in place of Mkdir("")
- this fixes some corner cases in Copy/Update
- mark all the listed buckets OK in ListR
Thanks to @yparitcher for the review.
Before this change, if the caller didn't provide a hint, we would
calculate all hashes for reads and writes.
The new whirlpool hash is particularly expensive and that has become noticeable.
Now we don't calculate any hashes on upload or download unless hints are provided.
This means that some operations may run slower and these will need to be discovered!
It does not affect anything calling operations.Copy which already puts
the correct hints in.
When using the VFS with swift and --swift-no-chunk, PutStream was
returning objects with size -1 which was causing corrupted transfer
messages.
This was fixed by counting the bytes transferred in a streamed file
and updating the metadata with that.
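A sketch of the counting idea with a plain io.Reader wrapper (not the swift backend's actual types):

```
package example

import "io"

// countingReader counts how many bytes have been read through it so
// the real size of a streamed object is known once the upload finishes.
type countingReader struct {
	in io.Reader
	n  int64
}

func (c *countingReader) Read(p []byte) (int, error) {
	n, err := c.in.Read(p)
	c.n += int64(n)
	return n, err
}

// BytesRead reports the number of bytes streamed so far; after upload
// this is used to fix up the object's size metadata.
func (c *countingReader) BytesRead() int64 { return c.n }
```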
This was factored from fstest as we were including the testing
environment into the main binary because of it.
This was causing opening the browser to fail because of 8243ff8bc8.
In 53a1a0e3ef we started returning non nil from NewObject when
an object isn't found. This breaks the integration tests and the API
expected of a backend.
This fixes it.
Introduce stats groups that will isolate accounting for logically
different transferring operations. That way multiple accounting
operations can be done in parallel without interfering with each
other's stats.
Using groups is optional. There are dedicated global stats that will be
used by default if no group is specified. This is the operating mode for
CLI usage, which is just a fire and forget operation.
When running rclone as an rc http server each request will create its own
group. There is also an option to specify your own group.
Configuration time option to disable the above if using Dropbox (which does
not allow setting mtime on copy) or Amazon Drive (neither on upload nor on copy).
Before this change rclone was sending a MimeType in the requests for
server side Move and Copy.
The conjecture is that if you attempt to set the MimeType to something
different in a Copy then Google Drive has to do an actual copy of the
file data. This takes a very long time (since it is large) and fails
after a 90s timeout.
After the change we no longer set the MimeType in Move or Copy and the
copies happen instantly and correctly.
Many thanks to @darthShadow for discovering that this was causing the
problem.
Fixes #3070 Fixes #3033 Fixes #3300 Fixes #3155
This was started by Fionera, finished off by Laura with fixes and more
docs from Nick.
Co-authored-by: Fionera <fionera@fionera.de>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
* azureblob - Add support for Azure Storage Emulator to test things locally.
Testing - Verified changes by testing manually.
* docs: update azureblob docs to reflect support of storage emulator
This bug was introduced as part of adding context to the backends and
slipped through the net because the About call did not have an
interface assertion in the sftp backend.
I checked there were no other missing interface assertions on all the
optional methods on all the backends.
- Change rclone/fs interfaces to accept context.Context
- Update interface implementations to use context.Context
- Change top level usage to propagate context to lower level functions
Context propagation is needed for stopping transfers and passing other
request-scoped values.
Before this change rclone attempted to set the "updated" field in
uploaded objects to the modification time.
However when this modification time was before 1970, google drive
would return the rather cryptic error:
googleapi: Error 400: Invalid value for UnsignedLong: -42000, invalid
However the API docs https://cloud.google.com/storage/docs/json_api/v1/objects#resource
state the "updated" field is read only and tests confirm that. Even
though the field is read only, it looks like Google parses it.
This change therefore removes the attempt to set the "updated" field
(which was doing nothing anyway) and fixes the problem uploading pre
1970 files.
See #3196 and https://forum.rclone.org/t/invalid-value-for-unsignedlong-file-missing-date-modified/3466
In #2728 and 55b9a4e we decided to allow server side operations
between google drives with different configurations.
This works in some cases (eg between teamdrives) but does not work in
the general case, and this caused breakage in quite a number of
people's workflows.
This change makes the feature conditional on the
--drive-server-side-across-configs flag which defaults to off.
See: https://forum.rclone.org/t/gdrive-to-gdrive-error-404-file-not-found/9621/10
Fixes #3119
Under Linux, rclone attempts to preallocate files for efficiency.
Before this change, pre-allocation would fail on ZFS with the error
Failed to pre-allocate: operation not supported
After this change rclone tries a different flag combination for ZFS
then disables pre-allocate if that doesn't work.
Fixes #3066
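A hedged sketch of the fallback on Linux using golang.org/x/sys/unix; the flag combination shown is illustrative of the approach rather than the exact one rclone uses:

```
package example

import (
	"os"

	"golang.org/x/sys/unix"
)

var preallocateDisabled bool

// preallocate reserves space for the file, trying a second flag
// combination before giving up and disabling preallocation entirely.
func preallocate(f *os.File, size int64) error {
	if preallocateDisabled || size <= 0 {
		return nil
	}
	fd := int(f.Fd())
	err := unix.Fallocate(fd, unix.FALLOC_FL_KEEP_SIZE, 0, size)
	if err == unix.EOPNOTSUPP {
		// e.g. ZFS: retry without flags, then disable if still unsupported
		if err = unix.Fallocate(fd, 0, 0, size); err == unix.EOPNOTSUPP {
			preallocateDisabled = true
			return nil
		}
	}
	return err
}
```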
Before this change rclone would fail with
Failed to set modification time: InvalidObjectState: Operation is not valid for the source object's storage class
when attempting to set the modification time of an object in GLACIER.
After this change rclone will re-upload the object as part of a sync if it needs to change the modification time.
See: https://forum.rclone.org/t/suspected-bug-in-s3-or-compatible-sync-logic-to-glacier/10187