Before this change, bisync used the "canonical" Fs name in the filename for its
listing files, including any {hexstring} suffix. An unintended consequence of
this was that if a user added a backend-specific flag from the command line
(thus "overriding" the config), bisync would fail to find the listing files it
created during the prior run without this flag, due to the path now having a
{hexstring} suffix that wasn't there before (or vice versa, if the flag was
present when the session was established, and later removed.) This would
sometimes cause bisync to fail with a critical error (if no listing existed
with the alternate name), or worse -- it would sometimes cause bisync to use an
old, incorrect listing (if old listings with the alternate name DID still
exist, from before the user changed their flags.)
After this change, the issue is fixed by always normalizing the SessionName to
the non-canonical version (no {hexstring} suffix), regardless of the flags. To
avoid a breaking change, we first check if a suffixed listing exists. If so, we
rename it (and overwrite the non-suffixed version, if any.) If not, we carry on
with the non-suffixed version. (We should only find a suffixed version if
created prior to this commit.)
The result for the user is that the same pair of paths will always use the same
.lst filenames, with or without backend-specific flags.
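For illustration, both of these hypothetical runs now resolve to the same pair
of listing files in bisync's working directory (the filenames shown are a
sketch; the second run adds a backend-specific flag that previously produced a
{hexstring} suffix):

    rclone bisync gdrive:docs /home/user/docs
    rclone bisync gdrive:docs /home/user/docs --drive-pacer-min-sleep 20ms
    # both runs now use, e.g.:
    #   gdrive_docs..home_user_docs.path1.lst
    #   gdrive_docs..home_user_docs.path2.lst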
Bisync checks file equality before renaming sync conflicts by comparing
checksums. Before this change, backends without checksum support (notably
Crypt) would fall back to --size-only for these checks, which is not a very
safe method (differing files can sometimes have the same size, especially if
they're small.) After this change, Crypt remotes fall back to using Cryptcheck
so that checksums can be compared. As a last resort when neither Check nor
Cryptcheck are available, files are compared using --download so that we can be
certain the files are identical regardless of checksum support.
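The underlying operations look roughly like this (paths hypothetical):

    # compare checksums through the crypt layer:
    rclone cryptcheck /local/docs cryptremote:docs
    # last resort when neither Check nor Cryptcheck is available:
    rclone check /local/docs remote:docs --download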
Before this change, bisync supported `--backup-dir` only when `Path1` and
`Path2` were different paths on the same remote. With this change, bisync
introduces new `--backup-dir1` and `--backup-dir2` flags to support separate
backup-dirs for `Path1` and `Path2`.
`--backup-dir1` and `--backup-dir2` can use different remotes from each other,
but `--backup-dir1` must use the same remote as `Path1`, and `--backup-dir2`
must use the same remote as `Path2`. Each backup directory must not overlap its
respective bisync Path without being excluded by a filter rule.
The standard `--backup-dir` will also work, if both paths use the same remote
(but note that deleted files from both paths would be mixed together in the
same dir). If either `--backup-dir1` or `--backup-dir2` is set, it will
override `--backup-dir`.
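For example (remote and directory names hypothetical):

    rclone bisync remote1:path1 remote2:path2 \
        --backup-dir1 remote1:backups \
        --backup-dir2 remote2:backups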
Before this change, bisync intentionally ignored Google Docs (albeit in a
buggy way that caused problems during --resync.) After this change, Google Docs
(including Google Sheets, Slides, etc.) are now supported in bisync, subject to
the same options, defaults, and limitations as in `rclone sync`. When bisyncing
drive with non-drive backends, the drive -> non-drive direction is controlled
by `--drive-export-formats` (default `"docx,xlsx,pptx,svg"`) and the non-drive
-> drive direction is controlled by `--drive-import-formats` (default none.)
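A sketch (paths hypothetical):

    # drive -> local conversion only, using the default export formats:
    rclone bisync gdrive:work /home/user/work
    # also convert matching files in the local -> drive direction
    # (see the caution below about lossy conversions):
    rclone bisync gdrive:work /home/user/work --drive-import-formats docx,xlsx,pptx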
For example, with the default export/import formats, a Google Sheet on the
drive side will be synced to an `.xlsx` file on the non-drive side. In the
reverse direction, `.xlsx` files with filenames that match an existing Google
Sheet will be synced to that Google Sheet, while `.xlsx` files that do NOT
match an existing Google Sheet will be copied to drive as normal `.xlsx` files
(without conversion to Sheets, although the Google Drive web browser UI may
still give you the option to open it as one.)
If `--drive-import-formats` is set (it's not, by default), then all of the
specified formats will be converted to Google Docs, if there is no existing
Google Doc with a matching name. Caution: such conversion can be quite lossy,
and in most cases it's probably not what you want!
To bisync Google Docs as URL shortcut links (in a manner similar to "Drive for
Desktop"), use: `--drive-export-formats url` (or alternatives.)
Note that these link files cannot be edited on the non-drive side -- you will
get errors if you try to sync an edited link file back to drive. They CAN be
deleted (it will result in deleting the corresponding Google Doc.) If you
create a `.url` file on the non-drive side that does not match an existing
Google Doc, bisyncing it will just result in copying the literal `.url` file
over to drive (no Google Doc will be created.) So, as a general rule of thumb,
think of them as read-only placeholders on the non-drive side, and make all
your changes on the drive side.
Likewise, even with other export-formats, it is best to only move/rename Google
Docs on the drive side. This is because otherwise, bisync will interpret this
as a file deleted and another created, and accordingly, it will delete the
Google Doc and create a new file at the new path. (Whether or not that new file
is a Google Doc depends on `--drive-import-formats`.)
Lastly, take note that all Google Docs on the drive side have a size of `-1`
and no checksum. Therefore, they cannot be reliably synced with the
`--checksum` or `--size-only` flags. (To be exact: they will still get
created/deleted, and bisync's delta engine will notice changes and queue them
for syncing, but the underlying sync function will consider them identical and
skip them.) To work around this, use the default (modtime and size) instead of
`--checksum` or `--size-only`.
To ignore Google Docs entirely, use `--drive-skip-gdocs`.
Nearly all of the Google Docs logic is outsourced to the Drive backend, so
future changes should also be supported by bisync.
Refactored the case / unicode normalization logic to be much more efficient,
and fixed the last outstanding issue from #7270. Before this change, we were
doing lots of for loops and re-normalizing strings we had already normalized
earlier. Now, we leave the normalizing entirely to March and avoid
re-transforming later, which seems to make a large difference in terms of
performance.
Before this change, --resync was handled in three steps, and needed to do a lot
of unnecessary work to implement its own --ignore-existing logic, which also
caused problems with unicode normalization, in addition to being pretty slow.
After this change, it is refactored to produce the same result much more
efficiently, by reducing the three steps to two and letting ci.IgnoreExisting
do the work instead of reinventing the wheel.
The behavior and sync order remain unchanged for now -- just faster (but see
the ongoing lively discussions about potential future changes in #5681!)
Before this change, Bisync sometimes normalized NFD to NFC and sometimes
did not, causing errors in some scenarios (particularly for users of macOS).
It was similarly inconsistent in its handling of case-insensitivity.
There were three main places where Bisync should have normalized, but didn't:
1. When building the list of files that need to be transferred during --resync
2. When building the list of deltas during a non-resync
3. When comparing Path1 to Path2 during --check-sync
After this change, 1 and 3 are resolved, and bisync supports
--no-unicode-normalization and --ignore-case-sync in the same way as sync.
2 will be addressed in a future update.
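For example (a sketch; both flags behave as documented for sync):

    rclone bisync path1 path2 --no-unicode-normalization --ignore-case-sync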
Before this change, a sync to a case insensitive dest (such as macOS / Windows)
would not result in a matching filename if the source and dest had casing
differences but were otherwise equal. For example, syncing `hello.txt` to
`HELLO.txt` would result in the dest filename remaining `HELLO.txt`.
Furthermore, `--local-case-sensitive` did not solve this, as it actually caused
`HELLO.txt` to get deleted!
After this change, `HELLO.txt` is renamed to `hello.txt` to match the source,
only if the `--fix-case` flag is specified. (The old behavior remains the
default.)
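For example, given `hello.txt` on the source and `HELLO.txt` on the dest:

    # renames HELLO.txt to hello.txt on the dest, instead of leaving it as-is:
    rclone sync /src remote:dst --fix-case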
Before this change, bisync needed to build a full listing for Path1, then a
full listing for Path2, then compare them -- and each of those tasks needed to
finish before the next one could start. In addition to being slow and
inefficient, it also caused real problems if a file changed between the time
bisync checked it on Path1 and the time it checked the corresponding file on
Path2.
This change solves these problems by listing both paths concurrently, using
the same March infrastructure that check and sync use to traverse two
directories in lock-step, optimized by Go's robust concurrency support.
Listings should now be much faster, and any given path is now checked
nearly-instantaneously on both sides, minimizing room for error.
Further discussion:
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=4.%20Listings%20should%20alternate%20between%20paths%20to%20minimize%20errors
Before this change, bisync had no mechanism for "retrying" a file again next
time, in the event of an unexpected and possibly temporary error. After this
change, bisync is now essentially able to mark a file as needing to be
rechecked next time. Bisync does this by keeping one prior listing on hand at
all times. In a low-confidence situation, bisync can revert a given file row
back to its state at the end of the last known successful sync, ensuring that
any subsequent changes will be re-noticed on the next run.
This can potentially be helpful for a dynamically changing file system, where
files may be changing quickly while bisync is working with them.
Before this change, if there were changes to sync, bisync listed each path
twice: once before the sync and once after. The second listing caused quite
a lot of problems, in addition to making each run much slower and more
expensive. A serious side-effect was that file changes could slip through
undetected, if they happened to occur while a sync was running (between the
first and second listing snapshots.)
After this change, the second listing is eliminated by getting the underlying
sync operation to report back a list of what it changed. Not only is this more
efficient, but also much more robust to concurrent modifications. It should no
longer be necessary to avoid making changes while bisync is running -- bisync
will simply learn about those changes and handle them on the next run.
Additionally, this also makes --check-sync usable again.
For further discussion, see:
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=5.%20Final%20listings%20should%20be%20created%20from%20initial%20snapshot%20%2B%20deltas%2C%20not%20full%20re%2Dscans%2C%20to%20avoid%20errors%20if%20files%20changed%20during%20sync
Allows rclone sync to accept the same output file flags as rclone check,
for the purpose of writing results to a file.
A new --dest-after option is also supported, which writes a list file using
the same ListFormat flags as lsf (including customizable options for hash,
modtime, etc.) Conceptually it is similar to rsync's --itemize-changes, but
not identical -- it should output an accurate list of what will be on the
destination after the sync.
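A sketch (output filenames hypothetical):

    # write check-style result files plus a predicted post-sync dest listing:
    rclone sync source:path dest:path \
        --combined combined.log \
        --dest-after dest-after.lst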
Note that it has a few limitations, and certain scenarios
are not currently supported:
- `--max-duration` / `CutoffModeHard`
- `--compare-dest` / `--copy-dest` (because `equal()` is called multiple times for the same file)
- server-side moves of an entire dir at once (because we never get the individual file objects in the dir)
- high-level retries (because there would be dupes)
- possibly some error scenarios that didn't come up in the tests
Note also that each file is logged during the sync, as opposed to after, so it
is most useful as a predictor of what SHOULD happen to each file
(which may or may not match what actually DID.)
Only rclone sync is currently supported -- support for copy and move may be
added in the future.
Before this change, bisync handled copies and deletes in separate operations.
After this change, they are combined in one sync operation, which is faster
and also allows bisync to support --track-renames and --backup-dir.
Bisync uses a --files-from filter containing only the paths bisync has
determined need to be synced. Just like in sync (but in both directions),
if a path is present on the dst but not the src, it's interpreted as a delete
rather than a copy.
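Conceptually, each direction now runs something like the following (the filter
file stands in for bisync's internally generated queue and is hypothetical):

    rclone sync path1 path2 --files-from queued-paths.txt --track-renames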
Before this change ListR was unconditionally enabled on onedrive.
This caused performance problems for some uses, so now the
--onedrive-delta flag has to be supplied.
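For example, to opt back in (a sketch):

    rclone sync onedrive:path /local/path --fast-list --onedrive-delta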
Fixes #7362
- use rclone's http Transport
- fix handling of 0 length files
- combine into one file and remove unneeded abstraction
- make `chunk_size` and `upload_concurrency` settable
- make auth the same as azureblob
- set the Features correctly
- implement `--azurefiles-max-stream-size` (see the sketch after this list)
- remove arbitrary sleep on Mkdir
- implement `--header-upload`
- implement read and write MimeType for objects
- implement optional methods
    - About
    - Copy
    - DirMove
    - Move
    - OpenWriterAt
    - PutStream
- finish documentation
- disable build on plan9 and js
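For example, a streaming upload of unknown size using the new
`--azurefiles-max-stream-size` flag (remote and path hypothetical):

    cat backup.tar | rclone rcat azfiles:share/backup.tar --azurefiles-max-stream-size 100M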
Fixes #365
Fixes #7378
- Changes
    - Rename `--s3-authkey` to `--auth-key` to get it out of the s3 backend namespace (see the example after this list)
    - Enable `Content-MD5` integrity checks
    - Remove locking after code audit
- Documentation
    - Factor out documentation into separate file
    - Add Quickstart to docs
    - Add Bugs section to docs
    - Add experimental tag to docs
    - Add rclone provider to s3 backend docs
- Fixes
    - Correct quirks in s3 backend
    - Change fmt.Printlns into fs.Logs
    - Make metadata storage per backend not global
    - Log on startup if anonymous access is enabled
- Coding style fixes
    - rename fs to vfs to save confusion with the rest of rclone code
    - rename db to b for *s3Backend
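For example, serving with the renamed `--auth-key` flag (key pair
hypothetical):

    rclone serve s3 remote:path --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY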
Fixes #7062
Most useful is the addition of the file created timestamp, but a timestamp for
when the file was uploaded is also included.
Currently a rather minimalistic set of metadata items is supported; see PR #6359
for some thoughts about possible extensions.
Before this change the concurrency used for an upload was rather
inconsistent.
- if size below `--backend-upload-cutoff` (default 200M) do single part upload.
- if size below `--multi-thread-cutoff` (default 256M) or using streaming
uploads (eg `rclone rcat`) do multipart upload using
`--backend-upload-concurrency` to set the concurrency used by the uploader.
- otherwise do multipart upload using `--multi-thread-streams` to set the
concurrency.
This change makes the default for the concurrency used be the
`--backend-upload-concurrency`. If `--multi-thread-streams` is set and larger
than the `--backend-upload-concurrency` then that will be used instead.
This means that if the user sets `--backend-upload-concurrency` then it will be
obeyed for all multipart/multi-thread transfers and the user can override them
all with `--multi-thread-streams`.
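For example, on s3 (where the per-backend flag is `--s3-upload-concurrency`):

    # all multipart/multi-thread transfers use a concurrency of 8:
    rclone copy src:path dst:path --s3-upload-concurrency 8
    # a larger --multi-thread-streams overrides it:
    rclone copy src:path dst:path --s3-upload-concurrency 8 --multi-thread-streams 16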
See: #7056
I ( @boukendesho ) have volunteered to maintain the snap package so
this adds it back into the installation instructions.
It will set a `snap` tag visible in `rclone version` so we know where
it came from for support queries.
As many people are suddenly moving to Box (another "unlimited" cloud storage
migration saga), there are frequent questions about which encoding to use for
crypt file names.
Box is base32768 friendly.
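To use it with a crypt remote over Box (remote names hypothetical):

    rclone config create boxcrypt crypt remote=boxremote:encrypted filename_encoding=base32768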
It has been tested with:
https://pub.rclone.org/base32768.zip
and:
rclone test info --check-length boxremote:
maxFileLength = 255 // for 1 byte unicode characters
maxFileLength = 255 // for 2 byte unicode characters
maxFileLength = 255 // for 3 byte unicode characters
maxFileLength = -1 // for 4 byte unicode characters
This sentence was written at the time when the backend used an access token;
nowadays, users need to generate and use an application password instead, see #6398.
At some point the sharefile API changed to require the size of the
file in the initial transaction which makes the streaming upload fail
with this error:
upload failed: file size does not match (-2)
This was discovered by the integration tests.