- add a directory parameter to the optional Purge interface (see the sketch below)
- fix up all the backends
- add an integration test for the new feature
- use the new feature in operations.Purge
Many of the backends had been prepared in advance for this, so the
change was trivial for them.
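For reference, the extended interface looks roughly like this sketch
(the exact definition lives in the fs package and may differ):

    // Purger is an optional interface for backends that can delete
    // a directory and all of its contents in one operation.
    type Purger interface {
        // Purge removes dir and all of its contents.
        // dir == "" means the root of the remote.
        Purge(ctx context.Context, dir string) error
    }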
If this option is enabled, rclone will not set modtime of uploaded files and
the backend will return ModTimeNotSupported as its Precision.
Normally rclone updates the modification time of files after they
finish uploading. This can cause permission problems on Linux
platforms when rclone is copying to a CIFS mount where the user
rclone is running as does not own the uploaded file. If this option
is enabled, rclone will no longer update the modtime after copying
a file.
See: https://forum.rclone.org/t/chtimes-error-on-local-mounted-copy/17784
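In backend terms this looks something like the following sketch (the
option field name is illustrative, not the exact implementation):

    // Precision of the remote's modification times
    func (f *Fs) Precision() time.Duration {
        if f.opt.NoSetModTime {
            // modtimes aren't written on upload, so tell sync not
            // to rely on them
            return fs.ModTimeNotSupported
        }
        return time.Nanosecond
    }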
Due to Chrome's rather complicated use of file handles when saving
files from the downloads window, rclone was attempting to truncate a
closed file.
The file appeared to be closed because of the way 0 length files are
handled.
This patch removes the check for the file being closed in the
WriteFileHandle.Truncate call. This is safe because the only action
this method takes is to emit an error message if the file is the wrong
size.
See: https://forum.rclone.org/t/google-drive-cannot-save-files-directly-from-browser-to-gdrive-mounted-path/17992/
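After the change the method reads roughly like this sketch (field and
error names assumed):

    // Truncate file to given size
    func (fh *WriteFileHandle) Truncate(size int64) error {
        fh.mu.Lock()
        defer fh.mu.Unlock()
        // no "is the file closed?" check any more - safe, because the
        // only thing this method does is complain if the size is wrong
        if size != fh.offset {
            fs.Errorf(fh.remote, "WriteFileHandle: Truncate: can't change size")
            return EPERM
        }
        return nil
    }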
This is preparation for getting the Accounting to check the context,
but first we need to get it in place. Since this is one of those
changes that makes a lot of noise, it is in a separate commit.
Before this change if the user supplied `-o uid=XXX` then rclone
would write `-o uid=-1 -o uid=XXX`, duplicating the uid value.
After this change rclone doesn't write the default `-1` version.
This fix affects `uid` and `gid`.
See: https://forum.rclone.org/t/issue-with-rclone-mount-and-resilio-sync/14730/27
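A minimal sketch of the fix with a hypothetical helper (not the
actual function name): the `-1` default is only written when the user
hasn't supplied a value of their own:

    // addDefaultOption appends "-o name=-1" unless the user already
    // passed "-o name=...", so the value is never duplicated.
    func addDefaultOption(opts []string, name string) []string {
        for _, o := range opts {
            if strings.HasPrefix(o, name+"=") {
                return opts // user-supplied value wins
            }
        }
        return append(opts, "-o", name+"=-1")
    }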
This implements `rclone cleanup` to remove multipart uploads over 24
hours old. It also implements the backend commands
`list-multipart-uploads`, to see which ones are available, and
`cleanup`, to delete them with a configurable expiry interval.
See #4302
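Typical usage looks like this (the `-o max-age` option spelling
follows the pattern of other backend command options and may differ):
rclone backend list-multipart-uploads s3:bucket/path
rclone backend cleanup -o max-age=24h s3:bucket/path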
Before this fix, if an object had its ID set and download_url was in
use, downloading the object gave this error:
failed to open for download: bucket example_bucket does not have file: /b2api/v1/b2_download_file_by_id (404 not_found)
After this fix we only download by ID if download_url is not set.
See: https://forum.rclone.org/t/correct-format-for-rclone-b2-download-url-variable/15498
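The decision is now roughly this (field names and the openByID /
openByName helpers are illustrative):

    // only use b2_download_file_by_id when no custom download_url is
    // configured, otherwise download by name via the download_url
    if o.id != "" && f.opt.DownloadURL == "" {
        return f.openByID(ctx, o.id)
    }
    return f.openByName(ctx, o.remote)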
This patch enables rclone to be used as a library from within restic
- exposes NewServer
- exposes Server
- implements http.RoundTripper
Co-authored-by: Jack Deng <jackdeng@gmail.com>
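A sketch of what in-process use might look like, assuming the names
listed above (constructor arguments elided):

    // Server implements http.RoundTripper, so restic can talk to it
    // without opening a network port
    srv := restic.NewServer(f, opt)
    client := &http.Client{Transport: srv}
    resp, err := client.Get("http://localhost/config")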
- Use cache to store package versions
- Update actions/setup-go to v2
- Add go1.15-rc1 build
- Make a separate build step
- stop downloading the code into a special path
- leave adding ~/go/bin to PATH to actions/setup-go
- remove docker build from xgo as we are building rclone anyway
- remove modules setting since it is now always on
- use ./... instead of listing files in tests
go1.15 introduced a stricter `go vet` policy for conversions with
`string()`: it now warns if you try to do `string(int)`.
See: https://github.com/golang/go/issues/32479
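For example:

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        n := 65
        // string(n) now draws a go vet warning: it converts the int
        // as a Unicode code point, giving "A", not the text "65"
        fmt.Println(string(rune(n))) // explicit rune conversion: "A"
        fmt.Println(strconv.Itoa(n)) // decimal string: "65"
    }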
When we run MKCOL on 4shared on a directory that already exists, it
returns a 409/Conflict error. However this error code usually means
that the intermediate collections need creating.
The actual error code to return when trying to create a directory that
already exists isn't specified in the RFC, only that an error MUST be
returned and there are already 3 statuses checked in the code.
However using 409 makes rclone's usual strategy for making
directories fail, returning the 409 error to the caller.
This patch tries the MKCOL and, if it returns an unrecognised error
code, calls PROPFIND on the directory to discover whether the
directory really exists or not.
This should also cover other WebDAV servers returning error codes we
haven't accounted for in the code yet.
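In outline the Mkdir logic becomes (helper names assumed):

    err := f.mkcol(ctx, dirPath)
    if err == nil || isRecognisedAlreadyExists(err) {
        return nil
    }
    // unrecognised status - ask the server if the directory is there
    if exists, perr := f.dirExists(ctx, dirPath); perr == nil && exists {
        return nil
    }
    return err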
Before this change, reading a directory used Readdir(-1) on the
directory handle. This worked fine for the first read, but if the
directory was read again on the same handle, Readdir(-1) returned
nothing (as per its design).
It turns out that macOS leaves the directory handle open and just
re-reads the data from it, so directories started out full and then
appeared empty on subsequent reads.
macOS/OSXFUSE is passing an offset of 0 to the Readdir call telling
rclone to seek in the directory, but we've told FUSE that we can't
seek by always returning ofst=0 in the fill function.
This fix works around the problem by reading the directory from the
path each time, ignoring the actual handle. This should be no less
efficient.
We now return ESPIPE if the offset is ever non-zero.
There are possible corner cases when reading deleted directories
which this fix ignores.
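A sketch of the workaround using the cgofuse Readdir signature (the
translateError helper and internal details are assumptions):

    // Readdir re-reads the directory from the path on every call and
    // deliberately ignores the file handle fh
    func (fsys *FS) Readdir(dirPath string,
        fill func(name string, stat *fuse.Stat_t, ofst int64) bool,
        ofst int64, fh uint64) int {
        if ofst != 0 {
            return -fuse.ESPIPE // we advertise non-seekable directories
        }
        nodes, err := fsys.VFS.ReadDir(dirPath)
        if err != nil {
            return translateError(err)
        }
        fill(".", nil, 0)
        fill("..", nil, 0)
        for _, node := range nodes {
            fill(node.Name(), nil, 0)
        }
        return 0
    }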
Before this change the cache backend contained its own routines for
mounting and for testing on that mount.
These tests are never run on the CI and cause a maintenance burden.
This commit removes the tests.
If the parameter being passed is an object then it can be passed as
a JSON string rather than using the `--json` flag, which simplifies
the command line.
rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}'
Rather than
rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'
This uses the refactored goftp library which no longer includes the
minio driver. This reduces the binary size by 1.5MB.
See: https://gitea.com/goftp/server/pulls/120