Updated Big syncs with millions of files (markdown)

lc63 2023-07-21 17:12:21 +02:00
parent 6ccb592edc
commit 16a3d3eb62

@@ -9,9 +9,9 @@ Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Elapsed time: 1m0.0s
```
... although HTTP requests are made and receive HTTP 200 responses (visible with the `--dump-headers` option), no copy is made.
This problem exists until at least version `rclone v1.64.0-beta.7132.f1a842081`.
# Workaround
@@ -33,7 +33,7 @@ comm -13 src dst > need-to-delete
You now have a list of files you need to transfer from src to dst, and another list of files in dst that aren't in src, so should likely be deleted.
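The listing-and-compare step might look like the following sketch (`src:bucket` and `dst:bucket` are placeholder remote names; `rclone lsf` produces the file listings, and `comm` requires its inputs to be sorted):

```shell
# Generate sorted recursive file listings of each side.
rclone lsf --files-only -R src:bucket | sort > src
rclone lsf --files-only -R dst:bucket | sort > dst

# comm -23: lines only in src (need copying to dst).
# comm -13: lines only in dst (candidates for deletion).
comm -23 src dst > need-to-transfer
comm -13 src dst > need-to-delete
```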
Then break the need-to-transfer file up into chunks of (say) 10,000 lines with something like `split -l 10000 need-to-transfer` and run this on each chunk to transfer 10,000 files at a time. The `--files-from` and `--no-traverse` flags mean that this won't list the source or the destination, so will avoid using too much memory.
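Since `split` names its chunks `need-to-transfer-aa`, `-ab`, and so on, the per-chunk copies can be scripted in a loop. A sketch, again with `src:bucket` and `dst:bucket` as placeholder remotes:

```shell
# Split the transfer list into 10,000-line chunks:
# need-to-transfer-aa, need-to-transfer-ab, ...
split -l 10000 need-to-transfer need-to-transfer-

for chunk in need-to-transfer-*; do
    # --files-from limits the copy to this chunk's files;
    # --no-traverse avoids listing the destination.
    rclone copy src:bucket dst:bucket --files-from "$chunk" --no-traverse
done
```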
```
rclone copy src:bucket dst:bucket --files-from need-to-transfer-aa --no-traverse
```