Labels: bug (Something isn't working)

Description
Say you have 10 WAL files ready for archiving and `maxParallel` is set to 3.
- PG runs `archive_command` for WAL file 1:
  - barman-plugin looks at the ready files and runs the archiver for WAL files 1, 2, and 3.
  - Those 3 files are uploaded to the archive.
  - The command exits successfully and PG marks WAL 1 as done.
- PG then runs `archive_command` for WAL file 2:
  - barman-plugin looks at the ready files and runs the archiver for WAL files 2, 3, and 4.
  - WAL files 2 and 3 are already archived.
  - WAL file 4 is uploaded to the archive.
  - The command exits successfully and PG marks WAL 2 as done.
- And so on: PG archives WAL file 3 and barman-plugin uploads only WAL file 5.
After the first call, this results in uploading only a single new WAL file per call, which is pretty slow and contrary to what the doc says: “Number of WAL files to be […] archived in parallel”.
What I would expect to happen is:
- PG runs `archive_command` for WAL file 1:
  - WAL files 1, 2, and 3 are uploaded.
  - WAL file 1 is marked as done.
- PG runs `archive_command` for WAL file 2:
  - WAL files 4, 5, and 6 are uploaded.
  - WAL file 2 is marked as done.
- And so on, until there are no more files to upload and `archive_command` simply exits successfully without doing anything.
I think this happens because `internalRun` does not pass WALs that have already been archived to `GatherReadyWALFiles`:
```go
SkipWALs: []string{baseWalName},
```