pkg-auto: Parallelize generating SDK reports and package updates handling. #3210
Open: krnowak wants to merge 21 commits into main from krnowak/pkg-auto-jobs
Conversation
The library will be used for running emerge report and package update report generation in separate processes to make them faster. I initially wanted to use the relatively obscure bash feature of named coprocs, but it was still an unfinished feature as of bash 5.2, so I decided to write my own instead. The library is rather basic: it allows forking a subprocess that will run some bash function, communicating with it over the subprocess's standard input/output, and reaping the subprocess. Signed-off-by: Krzesimir Nowak <[email protected]>
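The idea described above can be sketched roughly like this. This is a hypothetical, minimal illustration only: the function names (`job_fork`, `job_reap`, `shout`) and the FIFO-based plumbing are placeholders of mine, not the library's actual API.

```shell
#!/bin/bash
# Hypothetical sketch only: names and plumbing are placeholders,
# not the library's real API.

job_fork() {
    # $1 - name of the bash function to run in a subprocess.
    local func=$1
    local dir
    dir=$(mktemp -d)
    mkfifo "${dir}/in" "${dir}/out"
    # Run the function with the FIFOs as its stdin/stdout.
    "${func}" <"${dir}/in" >"${dir}/out" &
    JOB_PID=$!
    # Open the FIFOs in the parent; these opens pair up with the
    # subprocess's opens, so neither side blocks forever.
    exec {JOB_IN}>"${dir}/in" {JOB_OUT}<"${dir}/out"
    # Both sides hold the FIFOs open now, so the names can go.
    rm -rf "${dir}"
}

job_reap() {
    # Close our ends (the job sees EOF) and collect its status.
    exec {JOB_IN}>&- {JOB_OUT}<&-
    wait "${JOB_PID}"
}

# Example worker function: echo each input line back, uppercased.
shout() {
    local line
    while read -r line; do
        printf '%s\n' "${line^^}"
    done
}

job_fork shout
printf 'hello\n' >&"${JOB_IN}"
read -r reply <&"${JOB_OUT}"
printf '%s\n' "${reply}"
job_reap
```

The `{JOB_IN}` form lets bash pick free file descriptor numbers, so several such jobs can coexist, which is one limitation of bash's single built-in coproc that a hand-rolled library avoids.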
We can run report generation for the old and new SDKs in parallel in two separate processes, which ought to cut down the wait a bit. This is more or less straightforward parallelization, since only two jobs are running. The only things that need taking care of are forwarding each job's output to the terminal and handling job failures. Signed-off-by: Krzesimir Nowak <[email protected]>
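The two-job scheme above might look like the following sketch, with placeholder function names standing in for the real report generation; each job's output is tagged so interleaved terminal lines stay attributable, and a failure in either job is caught.

```shell
#!/bin/bash
# Illustrative sketch with placeholder function names.
set -o pipefail

generate_report() {
    # Stand-in for the real emerge report generation.
    echo "generating $1 report"
}

run_prefixed() {
    # Prefix every output line of "$@" with a [tag].
    local tag=$1; shift
    "$@" 2>&1 | sed "s/^/[${tag}] /"
}

run_prefixed old generate_report old &
pid_old=$!
run_prefixed new generate_report new &
pid_new=$!

wait "${pid_old}" || { echo 'old report job failed' >&2; exit 1; }
wait "${pid_new}" || { echo 'new report job failed' >&2; exit 1; }
```

`set -o pipefail` is needed so that a failing `generate_report` is not masked by `sed`'s exit status in the pipeline.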
This will come in handy for spawning jobs to handle package updates. Since we don't want to spawn as many jobs as there are packages, limiting ourselves to a job count matching the processor or core count sounds like a better idea. Signed-off-by: Krzesimir Nowak <[email protected]>
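A common way to derive such a job count, assuming GNU coreutils is available (with a POSIX `getconf` fallback):

```shell
# Derive the job count from the processor count rather than the
# package count; nproc (GNU coreutils) with a getconf fallback.
job_count=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN)
echo "will spawn ${job_count} jobs"
```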
The slots were only used to repeatedly generate the same path to a directory where the package ebuild diff is saved. So instead, generate the output paths once in an outer scope, put them into a struct, and pass that around. That means that:
- We pass one parameter fewer (the name of a struct instead of two slots).
- It becomes easier to change the output directory later (changing it in a function like update_dir or update_dir_non_slot may affect locations we didn't want to change, whereas changing the value in the struct scopes the affected areas).
This will come in handy later, when we put package update handling into jobs, where each job will have its own output directory. This does not yet remove the repeated generation of the paths, but it is a first step. Signed-off-by: Krzesimir Nowak <[email protected]>
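In bash, the "struct" pattern described above is typically an associative array passed by name; here is a minimal sketch with made-up field and function names (the PR's actual helpers may differ):

```shell
#!/bin/bash
# Generate the output paths once, store them in a "struct" (an
# associative array), and pass the struct's name around instead of
# individual path parameters. Field and function names are made up.

declare -A update_info=(
    [diff_dir]=/tmp/reports/diffs
    [summary_dir]=/tmp/reports/summaries
)

print_paths() {
    # Take the struct by name; -n makes info_ref a reference to it
    # (bash 4.3+).
    local -n info_ref=$1
    echo "diffs go to ${info_ref[diff_dir]}"
    echo "summaries go to ${info_ref[summary_dir]}"
}

print_paths update_info
```

Changing a path now means changing one value in the array, which scopes the effect to the callers that receive that struct.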
…dling This is a step towards using a different output directory in package handling. This will be needed for the eventual package handling jobs system, where each job has its own output directory. Signed-off-by: Krzesimir Nowak <[email protected]>
This is a step towards using a different output directory in package handling. This will be needed for the eventual package handling jobs system, where each job has its own output directory. Signed-off-by: Krzesimir Nowak <[email protected]>
This is a step towards using a different output directory in package handling. This will be needed for the eventual package handling jobs system, where each job has its own output directory. Signed-off-by: Krzesimir Nowak <[email protected]>
…ling This is a step towards using a different output directory in package handling. This will be needed for the eventual package handling jobs system, where each job has its own output directory. Signed-off-by: Krzesimir Nowak <[email protected]>
This is a step towards using a different output directory in package handling. This will be needed for the eventual package handling jobs system, where each job has its own output directory. Signed-off-by: Krzesimir Nowak <[email protected]>
This is a step towards using a different output directory in package handling. This will be needed for the eventual package handling jobs system, where each job has its own output directory. Signed-off-by: Krzesimir Nowak <[email protected]>
This is a step towards using a different output directory in package handling. This will be needed for the eventual package handling jobs system, where each job has its own output directory. Signed-off-by: Krzesimir Nowak <[email protected]>
This is a step towards using a different output directory in package handling. This will be needed for the eventual package handling jobs system, where each job has its own output directory. Signed-off-by: Krzesimir Nowak <[email protected]>
This is a continuation of passing the explicit location of an output directory instead of hardcoding `${REPORTS_DIR}`. Signed-off-by: Krzesimir Nowak <[email protected]>
These functions were either inlined in the few (one?) places they were used, or just replaced. Signed-off-by: Krzesimir Nowak <[email protected]>
The purpose of this struct is to collect in one place all the information needed for handling package updates. It is not really used right now, but when the package handling is split off into a separate function, it will come in handy: we can then pass a couple of parameters to the new function instead of many. Also, the struct will grow in the future, when we add ignoring of irrelevant information in summary stubs, or license filtering. Signed-off-by: Krzesimir Nowak <[email protected]>
There is no functional change, other than the fact that the new function now uses a bunch of maps to access some package information. The split-off inches us closer towards running the package handling in multiple jobs. Signed-off-by: Krzesimir Nowak <[email protected]>
This is to fill the silent moment between report generation in SDKs and the beginning of package updates handling. Also adds missing info about handling non-package updates. Signed-off-by: Krzesimir Nowak <[email protected]>
This spawns some jobs, each of which waits for messages from the main process. A message is either a batch (a count followed by the packages to handle) or a command to shut down when there are no more packages left to process. In the other direction, a job can send a message to the main process saying that it is done with the batch and ready for the next one. Any other message from a job is printed on the terminal by the main process. After the packages are processed, the main process collects the job reports and merges them into the main one. Signed-off-by: Krzesimir Nowak <[email protected]>
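The message flow described above might look roughly like this single-worker sketch. For brevity it uses a bash coproc (the PR rolls its own job library instead, since named coprocs were still unfinished as of bash 5.2), and the message formats and names here are placeholders, not the PR's actual protocol.

```shell
#!/bin/bash
# Placeholder protocol: the main process sends either "batch" (a
# count, then that many package names) or "quit"; the job replies
# "done" after each batch and emits anything else as log lines.

job_loop() {
    local cmd count pkg i
    while read -r cmd; do
        case ${cmd} in
            quit)
                return 0
                ;;
            batch)
                read -r count
                for ((i = 0; i < count; ++i)); do
                    read -r pkg
                    echo "log: handled ${pkg}"
                done
                echo done
                ;;
        esac
    done
}

coproc WORKER { job_loop; }
worker_pid=${WORKER_PID}

# Send one batch of two packages.
printf 'batch\n2\napp-misc/foo\napp-misc/bar\n' >&"${WORKER[1]}"

# Forward the job's output to the terminal until it reports "done".
forwarded=()
while read -r line <&"${WORKER[0]}"; do
    [[ ${line} == done ]] && break
    forwarded+=("${line}")
    printf '%s\n' "${line}"
done

# No more packages: tell the job to shut down and reap it.
printf 'quit\n' >&"${WORKER[1]}"
wait "${worker_pid}"
```

With several workers, the main process would additionally multiplex on the per-job read descriptors and hand the next batch to whichever job reports "done" first.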
After the split-off and the addition of jobs, the comment was a bit outdated and out of place, but still useful enough to keep, so reword it and move it to a more relevant place. Signed-off-by: Krzesimir Nowak <[email protected]>
Signed-off-by: Krzesimir Nowak <[email protected]>
Mostly to avoid repeating variable names when declaring them and initializing them. Signed-off-by: Krzesimir Nowak <[email protected]>
Force-pushed from 647f115 to 7fdaa4e
Rebased, mostly for adding DCO to commits.
Build action triggered: https://github.com/flatcar/scripts/actions/runs/17269040020
The automation generates reports using emerge in two separate SDK containers: one with the old packages and one with the new packages. Both jobs create their reports in separate directories, so as long as we take care to print messages to the terminal without producing garbled text, and can discern which job produced which output, nothing prevents running them in parallel.
Parallelizing handling of package updates was a bit more involved:
In order to get the last point, I needed to refactor some of the code to take an output directory path instead of hardcoding it to some subdirectory of ${REPORTS_DIR}, and to split off the code that handles a package update, as this code will run inside a job instead of the main process.
I think it's best to review the PR commit by commit.