skip pruned TUF repos when creating artifact config #9109
@@ -62,15 +62,13 @@ use std::ops::ControlFlow;
 use std::str::FromStr;
 use std::sync::Arc;

-use anyhow::{Context, Result};
+use anyhow::{Context, Result, ensure};
 use chrono::Utc;
 use futures::future::{BoxFuture, FutureExt};
 use futures::stream::{FuturesUnordered, Stream, StreamExt};
 use http::StatusCode;
 use nexus_auth::context::OpContext;
-use nexus_db_queries::db::{
-    DataStore, datastore::SQL_BATCH_SIZE, pagination::Paginator,
-};
+use nexus_db_queries::db::DataStore;
 use nexus_networking::sled_client_from_address;
 use nexus_types::deployment::SledFilter;
 use nexus_types::identity::Asset;
@@ -79,7 +77,7 @@ use nexus_types::internal_api::background::{
     TufArtifactReplicationRequest, TufArtifactReplicationStatus,
 };
 use omicron_common::api::external::Generation;
-use omicron_uuid_kinds::{GenericUuid, SledUuid};
+use omicron_uuid_kinds::SledUuid;
 use rand::seq::{IndexedRandom, SliceRandom};
 use serde_json::json;
 use sled_agent_client::types::ArtifactConfig;
@@ -593,18 +591,26 @@ impl ArtifactReplication {
         opctx: &OpContext,
     ) -> Result<(ArtifactConfig, Inventory)> {
         let generation = self.datastore.tuf_get_generation(opctx).await?;
+        let repos =
+            self.datastore.tuf_list_repos_unpruned_batched(opctx).await?;
+        // `tuf_list_repos_unpruned_batched` performs pagination internally,
+        // so check that the generation hasn't changed during our pagination to
+        // ensure we got a consistent read.
+        {
+            let generation_now =
+                self.datastore.tuf_get_generation(opctx).await?;
+            ensure!(
+                generation == generation_now,
+                "generation changed from {generation} \
+                 to {generation_now}, bailing"
+            );
+        }
+
Comment on lines +599 to +607

Contributor:
Making sure I understand: we have to do this check because if the generation changed, the config we'd build from […]. Alternatively: should […]?

Contributor (author):
Initially I did have this implemented where we read […]. So the second check is my interpretation of making subsequent queries conditional on the generation not having changed. I will add a comment to this effect.

Contributor:
Hah, yeah, one conditional check after does seem better than reasserting the condition as we page through a table. Thanks.
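For readers following this exchange, here is a minimal, self-contained sketch (not code from this PR) of the alternative being weighed: re-asserting the generation before each page rather than once after the whole read. `Store`, `generation`, and `list_page` are hypothetical stand-ins for the datastore calls in the diff.

```rust
use anyhow::{Result, ensure};

// Hypothetical stand-in for the datastore: a generation counter plus rows
// that are read one page at a time.
struct Store {
    generation: u64,
    rows: Vec<String>,
}

impl Store {
    fn generation(&self) -> u64 {
        self.generation
    }

    fn list_page(&self, offset: usize, limit: usize) -> Vec<String> {
        self.rows.iter().skip(offset).take(limit).cloned().collect()
    }
}

// The alternative discussed above: reassert the condition as we page
// through the table, bailing as soon as the generation moves.
fn list_all_checking_each_page(
    store: &Store,
    limit: usize,
) -> Result<Vec<String>> {
    let generation = store.generation();
    let mut out = Vec::new();
    let mut offset = 0;
    loop {
        let generation_now = store.generation();
        ensure!(
            generation == generation_now,
            "generation changed from {generation} to {generation_now}, bailing"
        );
        let page = store.list_page(offset, limit);
        if page.is_empty() {
            break;
        }
        offset += page.len();
        out.extend(page);
    }
    Ok(out)
}
```

The PR takes the other branch of the trade-off: one batched read via tuf_list_repos_unpruned_batched followed by a single conditional check, as shown in the diff.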
         let mut inventory = Inventory::default();
-        let mut paginator = Paginator::new(
-            SQL_BATCH_SIZE,
-            dropshot::PaginationOrder::Ascending,
-        );
-        while let Some(p) = paginator.next() {
-            let batch = self
-                .datastore
-                .tuf_list_repos(opctx, generation, &p.current_pagparams())
-                .await?;
-            paginator = p.found_batch(&batch, &|a| a.id.into_untyped_uuid());
-            for artifact in batch {
+        for repo in repos {
+            for artifact in
+                self.datastore.tuf_list_repo_artifacts(opctx, repo.id()).await?
+            {
                 inventory.0.entry(artifact.sha256.0).or_insert_with(|| {
                     ArtifactPresence { sleds: BTreeMap::new(), local: None }
                 });
@@ -785,6 +791,7 @@ mod tests {
     use std::fmt::Write;

     use expectorate::assert_contents;
+    use omicron_uuid_kinds::GenericUuid;
     use rand::{Rng, SeedableRng, rngs::StdRng};

     use super::*;
Review comment:
I'd do two things to try to avoid people accidentally using this in API endpoints (since we're making multiple queries here):

- add opctx.check_complex_operations_allowed()?;
- add _batched() to the name to convey that (we do this in a few other places in the datastore)

Really, I'd be tempted to apply this to artifacts_for_repo, but that may currently break some callers that might be using it from the API. Those should probably be made paginated, but we can do that when the dust settles on these APIs.
Reply:
After #9106 lands we may be able to refactor things a little and apply this to artifacts_for_repo, since it removes the list of artifacts from the public APIs.
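To make the convention in that review comment concrete, here is a minimal, self-contained sketch of a `_batched` helper guarded by `check_complex_operations_allowed()`. The `OpContext` and `DataStore` below are hypothetical stand-ins, not Nexus's real types, and `thing_list`/`thing_list_batched` are invented names; only the guard-plus-`_batched`-suffix convention comes from the comment.

```rust
use anyhow::{Context, Result, bail};

// Hypothetical stand-in for an operation context that knows whether
// multi-query ("complex") operations are permitted.
struct OpContext {
    allow_complex_operations: bool,
}

impl OpContext {
    fn check_complex_operations_allowed(&self) -> Result<()> {
        if self.allow_complex_operations {
            Ok(())
        } else {
            bail!("complex operations are not allowed in this context")
        }
    }
}

// Hypothetical datastore. `rows` is kept sorted ascending so the keyset
// pagination by `marker` below returns each row exactly once.
struct DataStore {
    rows: Vec<u32>,
}

impl DataStore {
    // One page per call; this is the shape an API endpoint would use.
    fn thing_list(&self, marker: Option<u32>, limit: usize) -> Vec<u32> {
        self.rows
            .iter()
            .copied()
            .filter(|&row| marker.map_or(true, |m| row > m))
            .take(limit)
            .collect()
    }

    // `_batched` variant: pages internally, so it guards against being
    // called from contexts (e.g. API endpoints) where issuing many queries
    // is not allowed.
    fn thing_list_batched(&self, opctx: &OpContext) -> Result<Vec<u32>> {
        opctx
            .check_complex_operations_allowed()
            .context("listing things (batched)")?;
        let mut out = Vec::new();
        let mut marker = None;
        loop {
            let page = self.thing_list(marker, 100);
            if page.is_empty() {
                break;
            }
            marker = page.last().copied();
            out.extend(page);
        }
        Ok(out)
    }
}
```

In a real datastore the single-page query would take proper pagination parameters (as the Paginator/current_pagparams code removed by this PR does), but the guard and the naming are the parts the comment asks for.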