When an external OAuth flow is being used (i.e. a client ID and an
OAuth token are set in the config), a client secret should not be set.
If one is, the server may reject a token refresh attempt.
But there's no way to clear out a backend's default client secret via
configuration, since empty-string config values are ignored.
So instead, when a client ID is set, we should clear out any default
client secret, since it wouldn't apply anyway.
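As a rough sketch of the intended behaviour (the function and parameter names
here are hypothetical, not rclone's actual config code):

```go
// Hypothetical helper: when the user configures their own client ID, the
// backend's built-in client secret cannot apply to it, so drop it rather
// than sending it with token refreshes.
func effectiveOAuthClient(defaultID, defaultSecret, configuredID, configuredSecret string) (id, secret string) {
	if configuredID == "" {
		// No external client configured: keep the built-in pair.
		return defaultID, defaultSecret
	}
	// External client: use its ID, and only a secret the user set
	// explicitly (usually none for an external OAuth flow).
	return configuredID, configuredSecret
}
```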
Version 5 of golangci-lint-action removed Go cache management, and with it the
skip-pkg-cache and skip-build-cache options, because the caches related to Go
itself are already handled by actions/setup-go; the action now only caches
golangci-lint analysis. Since we run multiple golangci-lint-action steps for
different GOOS values, we want to cache the package cache, the build cache and
the golangci-lint results from all of them, so this commit changes the
approach by disabling all built-in caching and introducing a separate cache
step to handle it properly.
This change adds support for "group" identities, and SharePoint variants
"siteUser" and "siteGroup". It also adds support for using any identity type
(including "application" and "device") as a recipient source when adding
permissions.
Before this change, metadata permissions used the `grantedTo` and
`grantedToIdentities` properties, which are deprecated on OneDrive Business in
favor of `grantedToV2` and `grantedToIdentitiesV2`. After this change, OneDrive
Business uses the new V2 versions, while OneDrive Personal still uses the
originals, as the V2 versions are not available for OneDrive Personal. (see
https://learn.microsoft.com/en-us/answers/questions/1079737/inconsistency-between-grantedtov2-and-grantedto-re)
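As a concrete illustration of the selection (the struct and helper below are
assumptions based on the Graph property names, not the backend's actual
types):

```go
// identitySet stands in for the Graph identitySet resource.
type identitySet map[string]interface{}

// permission models only the fields relevant here.
type permission struct {
	GrantedTo             *identitySet   `json:"grantedTo,omitempty"`
	GrantedToIdentities   []*identitySet `json:"grantedToIdentities,omitempty"`
	GrantedToV2           *identitySet   `json:"grantedToV2,omitempty"`
	GrantedToIdentitiesV2 []*identitySet `json:"grantedToIdentitiesV2,omitempty"`
}

// grantees reads the V2 properties on OneDrive Business, where the
// originals are deprecated, and falls back to the originals on OneDrive
// Personal, where the V2 versions are not available.
func grantees(p *permission, isBusiness bool) []*identitySet {
	if isBusiness {
		return p.GrantedToIdentitiesV2
	}
	return p.GrantedToIdentities
}
```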
Previously, `getFile()` was called indiscriminately during uploads, moves,
and download link generation. This could cause subsequent operations such as
uploads and moves to fail for users who had hit their download limit.
This PR optimizes the use of `getFile()` by only calling it when it is
strictly necessary.
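A hedged sketch of the pattern (the client interface and method names are
made up, not the backend's real API):

```go
package example

import "context"

// file and apiClient stand in for the backend's real types.
type file struct{ DownloadURL string }

type apiClient interface {
	GetFile(ctx context.Context, id string) (*file, error)
	MoveObject(ctx context.Context, id, dstDir string) error
}

// moveObject never needs the file metadata, so it does not call GetFile
// and is unaffected by a user's download limit.
func moveObject(ctx context.Context, c apiClient, id, dstDir string) error {
	return c.MoveObject(ctx, id, dstDir)
}

// downloadLink is the one operation that strictly needs the metadata.
func downloadLink(ctx context.Context, c apiClient, id string) (string, error) {
	f, err := c.GetFile(ctx, id)
	if err != nil {
		return "", err
	}
	return f.DownloadURL, nil
}
```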
For unknown reasons the precision of modification times of directories
on the CI is > 15ms, compared to 100ns for files. The tests
work fine when run in VirtualBox though, so I conjecture this is
something to do with the file system used there.
Before this change we synced directories regardless of whether the source
directory existed. Whether the source directory exists is irrelevant;
what we need to know is whether the directory has been modified.
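As a minimal sketch, assuming a simple set of modified directories rather
than the real sync internals:

```go
// dirsToModify records directories whose metadata has changed; the names
// here are illustrative only.
type dirsToModify map[string]bool

// needsSync reports whether dir should have its metadata synced.
// Whether the source directory exists is deliberately ignored.
func (d dirsToModify) needsSync(dir string) bool {
	return d[dir]
}
```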
Co-authored-by: nielash <nielronash@gmail.com>
Before this change we used the same data structure for managing empty
directories both for --create-empty-src-dirs in sync/copy/move and for
the --delete-empty-src-dirs flag in move.
These two uses are subtly incompatible, so this change uses a separate
data structure for each. This makes the handling more accurate and easier
to understand.
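Sketched out, the separation might look like this (field names are
illustrative, not the actual sync package fields):

```go
// Two independent trackers, one per flag, instead of one shared structure.
type emptyDirTracking struct {
	// Directories to create on the destination for --create-empty-src-dirs.
	createEmptyDirs map[string]struct{}
	// Source directories that have become empty and are candidates for
	// removal with --delete-empty-src-dirs (move only).
	deleteEmptySrcDirs map[string]struct{}
}
```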
This switches between storing chunks in a separate container suffixed
with `_segments` (the default) and a directory in the root
(`.file-segments`).
By default the `.file-segments` mode will be auto-selected if
`auth_url`s that require it are detected.
If the `.file-segments` mode is in use then rclone will omit that
directory from listings.
See: https://forum.rclone.org/t/blomp-unable-to-upload-5gb-files/42498/
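A rough sketch of the selection, with made-up names (not the swift backend's
actual code):

```go
// segmentsLocation returns where large-object segments are stored for a
// given container, depending on which mode is in use.
func segmentsLocation(container string, useFileSegments bool) (segContainer, prefix string) {
	if useFileSegments {
		// Segments live in a hidden directory inside the same container,
		// which rclone then omits from listings.
		return container, ".file-segments/"
	}
	// Default: a separate container with the _segments suffix.
	return container + "_segments", ""
}
```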
The .lck file name needs to be less than 255 bytes (not characters) long on
Linux, and it was still too long in this test because of the
subdir=測試_Русский_{spc}_{spc}_ě_áñ
on remotes with long names, such as TestChunkerChunk3bNoRenameLocal.
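The distinction matters because the limit counts encoded bytes, as this small
illustrative example shows:

```go
package main

import "fmt"

func main() {
	// The 255-byte limit applies to the UTF-8 encoded length, not the
	// number of characters, so names with multi-byte characters such as
	// the test subdir above hit it much sooner.
	name := "測試_Русский_ě_áñ"
	fmt.Println(len(name))         // byte length - what the limit counts
	fmt.Println(len([]rune(name))) // character count - noticeably smaller
}
```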
This changes as many of the integration tests as possible so that they
use port forwarding rather than the docker IP directly.
Using the docker IP directly does not work on macOS and Windows as the
docker containers are running inside a VM there, so their IPs are not
reachable from the host.
This adds the PORTS.md document to record which port numbers we are
using for each service, as they need to be unique.
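A hedged sketch of the pattern (the address and port are made up; see
PORTS.md for the real assignments):

```go
package example

import "net"

// The test talks to the service via a port forwarded to localhost (e.g.
// started with `docker run -p 28022:22 ...`) instead of the container's
// own IP, which is unreachable when docker runs inside a VM.
const testServerAddr = "localhost:28022"

func dialTestServer() (net.Conn, error) {
	return net.Dial("tcp", testServerAddr)
}
```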