This enables compatibility with versions of git-annex currently
available on GitHub's "ubuntu-latest" image, aka Ubuntu 22.04 Jammy.
Currently, Jammy is shipping git-annex 8.20210223-2ubuntu2.
https://packages.ubuntu.com/jammy/git-annex
Issue #7625
This commit implements milestone 2.1 for the gitannex subcommand:
https://github.com/rclone/rclone/issues/7625#issuecomment-1951403856
This rewrite makes a few improvements over the old shell script:
(1) It no longer uses the system's rclone.conf. Now, it writes the
rclone.conf file in an ephemeral directory.
(2) It no longer makes any assumptions about the contents of /tmp.
However, it now assumes that an rclone built from the HEAD commit is on
the PATH. It makes a best-effort attempt to verify this assumption, but
I'm not sure it's bulletproof.
I'm hoping that writing this in Go will enable more cross-platform
support in the future, but for now we're still restricted to Unixy
systems due to reliance on the HOME environment variable.
Issue #7625
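For illustration, a minimal Go sketch of how the ephemeral config and the best-effort PATH check could look; the helper names and the use of a per-test temporary directory are assumptions for the example, not the actual implementation:

    package gitannex_test

    import (
        "os"
        "os/exec"
        "path/filepath"
        "testing"
    )

    // writeEphemeralConfig writes an rclone.conf into a per-test temporary
    // directory instead of touching the system config. (Hypothetical helper.)
    func writeEphemeralConfig(t *testing.T) string {
        t.Helper()
        configPath := filepath.Join(t.TempDir(), "rclone.conf")
        contents := "[local]\ntype = local\n"
        if err := os.WriteFile(configPath, []byte(contents), 0o600); err != nil {
            t.Fatal(err)
        }
        return configPath
    }

    // checkRcloneOnPath is a best-effort check that some rclone binary is on
    // the PATH; verifying it was really built from the HEAD commit is harder
    // and not attempted here.
    func checkRcloneOnPath(t *testing.T) string {
        t.Helper()
        binary, err := exec.LookPath("rclone")
        if err != nil {
            t.Skip("rclone not found on PATH")
        }
        if out, err := exec.Command(binary, "version").CombinedOutput(); err != nil {
            t.Fatalf("rclone version failed: %v: %s", err, out)
        }
        return binary
    }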
Before this change, calling SetModTime on owncloud and nextcloud would
inadvertently erase the object's stored hashes. This change fixes the issue,
which was discovered by the bisync integration tests.
In this commit we merged an unreliable test
e053c8a1c0 copy: fix nil pointer dereference when corrupted on transfer with nil dst
It is a good idea, but it is very hard to implement so that it always works.
Hence this commit disables it for the moment.
Before this change, the --metadata-mapper was called twice if an object was
uploaded via multipart upload with --metadata and --onedrive-metadata-permissions
"write" or "read,write". This change fixes the issue.
- fix parsing of connection string remotes (comma in name)
- skip remotes that can't upload empty files
- Mkdir the test case subdir before cache.Get-ing it
(only storj seems to need this... bug?)
Several fixes for the bisync integration tests:
- use unique initdir and datadir for each subtest so concurrent tests don't interfere with each other
- remove dots from dir names for bucket backends
- ignore messages specific to cache backend
- skip fix-case tests on backends that can't fix-case
- don't expect "{hashtype} differ" messages on backends with no hash types
- print timestamps in UTC
More fixes will still be needed, but this should hopefully fix a good portion of them.
Help people handle an issue which might otherwise be difficult to understand.
If you have recursive shortcuts (pointing to a parent folder) in a
Google Drive, rclone recurses infinitely, never finishing and filling the
disk, even if you ask it not to fetch the shortcuts' content.
The Windows WebDAV client does an OPTIONS request on the root even when given
a path, and if we return 404 there then Windows refuses to use the path.
This patch fixes this by allowing OPTIONS requests, but only on the root.
This affects all the HTTP servers.
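Roughly, the idea looks like the following net/http sketch; the middleware name, the mux, and the Allow header contents are illustrative assumptions rather than rclone's actual server code:

    package main

    import "net/http"

    // allowRootOptions lets an OPTIONS request on the root succeed (which the
    // Windows WebDAV client requires) while every other request falls through
    // to the normal handler.
    func allowRootOptions(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Method == http.MethodOptions && (r.URL.Path == "/" || r.URL.Path == "") {
                w.Header().Set("Allow", "OPTIONS, GET, HEAD, PUT, DELETE, PROPFIND")
                w.WriteHeader(http.StatusOK)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            http.NotFound(w, r) // placeholder for the real WebDAV/HTTP handler
        })
        _ = http.ListenAndServe(":8080", allowRootOptions(mux))
    }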
Before this change, List would return incorrect directory paths (relative to the
wrong root) if the Fs root pointed to a subdirectory. For example, listing dir
"a/b/c/d" of remote :memory: would work correctly, but listing dir "c/d" of
remote :memory:a/b would not, and would result in "Entry doesn't belong in
directory %q (contains subdir)" errors.
This change fixes the issue and adds a test to detect any other backends that
might have the same issue.
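To illustrate the expected behaviour, here is a small Go sketch (entryRemote and the paths are made up for the example, not the backend's real code) showing that List entries must be named relative to the Fs root:

    package main

    import (
        "fmt"
        "strings"
    )

    // entryRemote converts an absolute in-memory path into the name List
    // should return: a path relative to the Fs root. Purely illustrative.
    func entryRemote(fsRoot, absPath string) string {
        if fsRoot == "" {
            return absPath
        }
        return strings.TrimPrefix(absPath, fsRoot+"/")
    }

    func main() {
        // An Fs created as ":memory:a/b" has root "a/b". Listing dir "c/d"
        // should yield entries like "c/d/file.txt", not "a/b/c/d/file.txt".
        fmt.Println(entryRemote("a/b", "a/b/c/d/file.txt")) // c/d/file.txt
        fmt.Println(entryRemote("", "c/d/file.txt"))        // c/d/file.txt
    }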
Before this change, the Memory backend had the potential to deadlock under
certain conditions, if the ListR callback required locking the b.mu mutex. This
was the case with operations.Purge, because Memory has no Purge method, and the
fallback option does:
err = DeleteFiles(ctx, listToChan(ctx, f, dir))
which potentially starts removing objects before the listing has completed.
This change fixes the issue by batching all the entries before calling the
callback on them.
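In outline, the fix amounts to collecting all entries while holding the mutex and only calling the callback after releasing it. The sketch below uses made-up types (memoryFs, listRCallback) to show the shape of the change, not the real backend code:

    package main

    import "sync"

    type entry string

    type listRCallback func(entries []entry) error

    type memoryFs struct {
        mu      sync.Mutex
        objects []entry
    }

    // listR batches all entries before calling the callback, instead of
    // calling it while iterating under the lock. This avoids a deadlock when
    // the callback (e.g. a delete issued by the Purge fallback) needs to take
    // f.mu itself.
    func (f *memoryFs) listR(callback listRCallback) error {
        f.mu.Lock()
        batch := make([]entry, len(f.objects))
        copy(batch, f.objects)
        f.mu.Unlock()

        // The mutex is released before the callback runs.
        return callback(batch)
    }

    func main() {
        f := &memoryFs{objects: []entry{"a", "b"}}
        _ = f.listR(func(entries []entry) error {
            f.mu.Lock() // safe: listR no longer holds the lock
            defer f.mu.Unlock()
            return nil
        })
    }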
This change officially adds bisync to the nightly integration tests for all
backends.
This will be part of giving us the confidence to take bisync out of beta.
A number of fixes have been added to account for features which can differ on
different backends -- for example, hash types / modtime support, empty
directories, unicode normalization, and unimportant differences in log output.
We will likely find that more of these are needed once we start running these
with the full set of remotes.
Additionally, bisync's extremely sensitive tests revealed a few bugs in other
backends that weren't previously covered by other tests. Fixes for those issues
have been submitted on the following separate PRs (and bisync test failures will
be expected until they are merged):
- #7670 memory: fix deadlock in operations.Purge
- #7688 memory: fix incorrect list entries when rooted at subdirectory
- #7690 memory: fix dst mutating src after server-side copy
- #7692 dropbox: fix chunked uploads when size <= chunkSize
Relatedly, workarounds have been put in place for the following backend
limitations that are unsolvable for the time being:
- #3262 drive is sometimes aware of trashed files/folders when it shouldn't be
- #6199 dropbox can't handle emojis and certain other characters
- #4590 onedrive API has longstanding bug for conflictBehavior=replace in
server-side copy/move
Before this change, the Memory backend's Copy method created a dst object that
referenced the src's objectData by pointer instead of making a copy. While this
minimized memory usage, an unintended consequence was that subsequently mutating
the src (such as changing the modtime) would inadvertently also mutate the dst,
and vice versa.
This change fixes the issue and adds a test.
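The shape of the fix, sketched with simplified stand-in types (objectData here is not the backend's real struct), is to give dst its own copy of the data rather than the same pointer:

    package main

    import (
        "fmt"
        "time"
    )

    // objectData is a simplified stand-in for the Memory backend's stored
    // object (modtime plus contents).
    type objectData struct {
        modTime time.Time
        data    []byte
    }

    // shallowCopy reproduces the old bug: dst and src share the same
    // objectData, so mutating one mutates the other.
    func shallowCopy(src *objectData) *objectData {
        return src
    }

    // deepCopy is the shape of the fix: dst gets its own objectData (and its
    // own byte slice), so later SetModTime calls on src no longer affect dst.
    func deepCopy(src *objectData) *objectData {
        dst := *src
        dst.data = append([]byte(nil), src.data...)
        return &dst
    }

    func main() {
        src := &objectData{modTime: time.Unix(0, 0), data: []byte("hello")}

        bad := shallowCopy(src)
        good := deepCopy(src)

        src.modTime = time.Now() // e.g. SetModTime on src after a Copy

        fmt.Println(bad.modTime.Equal(src.modTime))  // true: dst mutated too
        fmt.Println(good.modTime.Equal(src.modTime)) // false: dst unaffected
    }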
This change adds the --direct-io flag to the mount. This means the
page cache is completely bypassed for reads and writes. No read-ahead
takes place. Shared mmap is disabled.
This is useful to accurately read files which may change length
frequently on the source.
Before this change, if a file shrank in size on the remote then rclone
could get into a loop trying to download the file forever.
The symptom was repeating errors like this:
vfs cache: restart download failed: failed to start downloader: failed to open downloader: vfs reader: failed to open source file: invalid seek position
The fix was to check the file size in various places and make sure
that we weren't trying to download too much data.
This was a problem with backends (like s3) which update the size of
the object on Open to the actual size of the object.
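A hedged sketch of the size check (the clampRange helper is invented for this example; the real fix touches several places in the VFS cache): clamp any requested range to the object's current size and treat reads past the end as EOF rather than restarting the download.

    package main

    import (
        "errors"
        "fmt"
        "io"
    )

    var errInvalidSeek = errors.New("invalid seek position")

    // clampRange limits a requested byte range to the object's current size.
    // If the offset is already past the end it returns io.EOF so the caller
    // stops instead of restarting the downloader in a loop.
    func clampRange(offset, length, size int64) (int64, int64, error) {
        if offset < 0 {
            return 0, 0, errInvalidSeek
        }
        if offset >= size {
            return 0, 0, io.EOF
        }
        if offset+length > size {
            length = size - offset
        }
        return offset, length, nil
    }

    func main() {
        // The remote object shrank from 1000 bytes to 100 bytes.
        if _, _, err := clampRange(500, 64, 100); err == io.EOF {
            fmt.Println("read past end of shrunken file: stop, don't retry")
        }
        off, n, _ := clampRange(90, 64, 100)
        fmt.Println(off, n) // 90 10: read only what actually exists
    }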