When you run a new profile for the first time, it will almost always take significantly longer than subsequent runs of the same profile. This is normal and expected. This article explains why, and what to expect.
Why the First Run Is Slow
Everything Must Be Scanned
On the first run, SyncBack must scan every file and folder in both the source and destination (if the destination already contains files). It has no prior knowledge of what exists, so it must build a complete picture from scratch.
On subsequent runs with Fast Backup enabled, SyncBack can use the database from the previous run to quickly identify which files have changed. Only changed files need to be compared and processed. This typically reduces the number of files that need attention from hundreds of thousands down to a small fraction of that total.
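The scan performed on a first run can be pictured as building a snapshot of everything under a folder. The sketch below is a simplified illustration only, not SyncBack's actual implementation; the function name and the exact fields recorded are invented for this example.

```python
import os

def build_snapshot(root):
    """Walk a tree and record (size, mtime) for every file found.

    Illustrates the kind of full scan a first run must perform:
    with no prior knowledge, every file and folder is visited.
    (A real Fast Backup database stores more detail than this.)
    """
    snapshot = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            st = os.stat(path)
            snapshot[rel] = (st.st_size, int(st.st_mtime))
    return snapshot
```

On a first run this walk must happen for the source, and again for the destination if it already contains files, which is why the scan time grows with the total number of files rather than the number of changes.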
Everything Must Be Copied
The first run of a backup profile must copy every file from the source to the destination. If the source contains 50 GB of data, all 50 GB must be transferred. Subsequent runs only need to copy files that are new or have changed since the last run, which is usually a much smaller amount.
For synchronization profiles, the first run must compare all files on both sides and resolve every difference. On subsequent runs, most files will already be identical and can be skipped.
No Fast Backup Database Exists
Fast Backup works by storing a record of every file's name, size, modification date, and attributes at the end of each profile run. On the next run, SyncBack compares the current state of each file against this stored record. Files that match exactly can be skipped without reading the destination.
On the very first run, this database does not exist, so SyncBack cannot skip anything. It must compare each file in the source against its counterpart in the destination, which requires scanning and querying both locations in full.
No Delta Copy Baseline Exists
If delta (partial) copying is enabled, SyncBack can transfer only the parts of a file that have changed, rather than the entire file. This requires a hash of the previous version of the file. On the first run, no previous hash exists, so the entire file must be copied. On subsequent runs, only the changed portions are transferred, which can be dramatically faster for large files with small changes.
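A simplified way to picture delta copying is hashing a file in fixed-size blocks and sending only the blocks whose hashes differ from the stored baseline. This sketch is illustrative only and is not SyncBack's actual delta algorithm; the block size and function names are invented for the example.

```python
import hashlib

BLOCK_SIZE = 4  # tiny for illustration; real tools use far larger blocks

def block_hashes(data):
    """Hash each fixed-size block of `data`."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def blocks_to_send(old_hashes, new_data):
    """Indices of blocks that differ from the stored baseline.

    With no baseline (i.e. on the first run), every block differs,
    so the whole file must be transferred.
    """
    new_hashes = block_hashes(new_data)
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h]
```

A large file with one small change produces a short list of blocks to send, whereas an empty baseline produces every block, which mirrors the first-run behaviour described above.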
Destination Folders Must Be Created
On the first run, SyncBack must create the entire destination folder structure. Each folder creation is an additional operation. On subsequent runs, the folders already exist and no creation is needed.
What to Expect
As a rough guide:
- The first run of a backup profile takes as long as a full copy of all source files to the destination, plus the time to scan both locations.
- The second run typically takes a small fraction of the first run time, assuming Fast Backup is enabled and only a few files have changed.
- Subsequent runs remain fast as long as the number of changes between runs is small.
For example, if a profile backs up 100 GB of data containing 200,000 files:
- First run: may take 1 to 2 hours depending on storage and network speed, because all 100 GB must be copied and all 200,000 files must be scanned on both sides.
- Second run (with 500 files changed, 1 GB of new data): may take only 2 to 5 minutes, because SyncBack skips the 199,500 unchanged files and only copies the 1 GB of changes.
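The arithmetic behind those estimates can be checked with a rough transfer-time calculation. The throughput figures below are illustrative assumptions, not measurements, and the sketch ignores scan time and per-file overhead, which is why the real second-run time is a few minutes rather than about one.

```python
GB = 1024 ** 3

def copy_time_minutes(bytes_to_copy, throughput_mb_s):
    """Rough transfer time, ignoring scan and per-file overhead."""
    return bytes_to_copy / (throughput_mb_s * 1024 ** 2) / 60

# First run: all 100 GB at an assumed 15-30 MB/s effective throughput.
slow = copy_time_minutes(100 * GB, 15)   # roughly 114 minutes
fast = copy_time_minutes(100 * GB, 30)   # roughly 57 minutes

# Second run: only the 1 GB of changed data at the same speed.
second = copy_time_minutes(1 * GB, 15)   # roughly 1 minute of copying
```

The gap between the two runs comes almost entirely from the amount of data copied, which is why keeping the number of changes between runs small matters more than raw storage speed.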
Recommendations
Enable Fast Backup
Fast Backup is the single most important setting for reducing the time of subsequent runs. It allows SyncBack to skip unchanged files without checking the destination. This is especially important when the destination is remote (network share, FTP, cloud) because checking the destination requires network round-trips for every file.
Run the First Profile During Off-Hours
Because the first run will be significantly longer, consider scheduling it during a time when the system is not in heavy use, such as overnight or during a weekend. Subsequent runs can then be scheduled during normal hours.
Do Not Abort the First Run
If you abort the first run partway through, SyncBack will save its progress (depending on your settings), but files that were not reached will need to be processed on the next run. Allowing the first run to complete fully gives you the best baseline for fast subsequent runs.
Consider a Full Backup Followed by Incremental
If the source contains a very large amount of data and the destination is remote, consider performing the first backup to a local drive and then copying the result to the remote location. Subsequent incremental runs can then go directly to the remote destination. This avoids the slow first run over the network.