
[PB-5727]: upload concurrently with worker pool to improve speed#283

Open
AlexisMora wants to merge 9 commits into main from fix/pb-5727-improve-upload-backups

Conversation


@AlexisMora AlexisMora commented Mar 24, 2026

What is Changed / Added

The create and update flows now run 10 uploads concurrently via async.queue.
Introduced executeAsyncQueue(items, executor, config) as a generic wrapper. Both createBackupUploadExecutor and createBackupUpdateExecutor implement the same TaskExecutor interface and plug directly into it.
Uploads now retry up to 3 times with [1s, 2s, 4s] exponential backoff before giving up on a file.
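A minimal sketch of what the wrapper could look like. The names executeAsyncQueue and TaskExecutor come from this PR, but the signatures, the QueueConfig shape, and the plain-Promise worker pool (standing in for async.queue) are assumptions made for illustration:

```typescript
// Hypothetical sketch: generic concurrent queue with per-item retry.
// The real change uses async.queue; a plain Promise pool stands in here.

interface TaskExecutor<T> {
  run(item: T): Promise<void>;
}

interface QueueConfig {
  concurrency: number;     // e.g. 10 parallel uploads
  retryDelaysMs: number[]; // e.g. [1000, 2000, 4000] exponential backoff
}

async function runWithRetry<T>(
  item: T,
  executor: TaskExecutor<T>,
  delays: number[],
): Promise<void> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await executor.run(item);
    } catch (err) {
      // Out of retries: give up on this file and rethrow.
      if (attempt >= delays.length) throw err;
      await new Promise((resolve) => setTimeout(resolve, delays[attempt]));
    }
  }
}

async function executeAsyncQueue<T>(
  items: T[],
  executor: TaskExecutor<T>,
  config: QueueConfig,
): Promise<void> {
  const pending = [...items];
  // Start `concurrency` workers that each drain the shared item list.
  const workers = Array.from({ length: config.concurrency }, async () => {
    while (pending.length > 0) {
      const item = pending.shift()!;
      await runWithRetry(item, executor, config.retryDelaysMs);
    }
  });
  await Promise.all(workers);
}
```

With async.queue itself, the hand-rolled pool would collapse to `const q = queue(worker, 10); q.push(items); await q.drain();` while the retry wrapper stays the same.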
The logic per file is identical to the old implementation:

Upload bytes to bucket -> get contentsId
Create/override metadata in backend
On metadata failure -> clean up uploaded content (create only; update matches old behavior of not cleaning up)
FILE_ALREADY_EXISTS -> skip silently
Non-fatal errors -> tracked in backupErrorsTracker
Fatal errors -> stop all uploads immediately
The only intentional behavior changes are: concurrency (10 parallel instead of 1) and retry (3 attempts with backoff instead of 0).
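The per-file steps above (for the create flow, which cleans up on metadata failure) can be sketched as follows. FILE_ALREADY_EXISTS and the cleanup rule come from the PR; the Bucket and Backend interfaces, the FatalUploadError class, and the exact ordering of the error checks are assumptions for illustration:

```typescript
// Hypothetical sketch of the per-file create flow described above.
// Interfaces are invented stand-ins for the real bucket/backend services.

interface Bucket {
  uploadFile(path: string): Promise<string>; // resolves to a contentsId
  deleteFile(contentsId: string): Promise<void>;
}

interface Backend {
  createFileMetadata(path: string, contentsId: string): Promise<void>;
}

class FatalUploadError extends Error {}

async function uploadOne(
  path: string,
  bucket: Bucket,
  backend: Backend,
  trackError: (path: string, err: Error) => void,
): Promise<void> {
  // 1. Upload bytes to the bucket -> get contentsId.
  const contentsId = await bucket.uploadFile(path);
  try {
    // 2. Create the metadata entry in the backend.
    await backend.createFileMetadata(path, contentsId);
  } catch (err) {
    // 3. Metadata failed: clean up the uploaded content (create flow only;
    //    the update flow keeps the old behavior of not cleaning up).
    await bucket.deleteFile(contentsId);
    const e = err as Error;
    if (e.message === 'FILE_ALREADY_EXISTS') return; // skip silently
    if (e instanceof FatalUploadError) throw e;      // stops all uploads
    trackError(path, e);                             // non-fatal: record, continue
  }
}
```

An executor like createBackupUploadExecutor would then wrap uploadOne in the TaskExecutor interface so the queue can drive it.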

@sonarqubecloud

Quality Gate failed

Failed conditions
66.0% Coverage on New Code (required ≥ 80%)

See analysis details on SonarQube Cloud
