[CI] Add Terraform resources for daily CronJob that processes LLVM commits #495
base: main
Conversation
}

# The container for scraping LLVM commits needs persistent storage
# for a local check-out of llvm/llvm-project
Why does this need to be stored persistently? It's pretty cheap to clone LLVM, and I think a PVC adds unnecessary complexity on top of making things stateful, which complicates them further.
I neglected to mention this, but there's also a persistent file that keeps track of the last commits we've seen. Originally, the script was going to run at a more frequent cadence, so we wanted to track the commits we'd already seen to avoid reprocessing them.
Now that the script only scrapes a day's worth of data at a time, maybe we don't need persistent state to track the commits we've seen, although it might still be valuable for ensuring the quality of the commit data between iterations.
Using a PVC for a persistent file would make more sense.
I still think it's a bit of an antipattern, though. If you want to ensure you're only looking at new commits and it's a cron job, you can just look at the last 24 hours of commits (which it seems like you're already doing?). Making this stateless makes things quite a bit simpler and aligns more with how k8s expects things to work.
Addressed removal of dependency on persistent storage in #501
  depends_on = [kubernetes_namespace.operational_metrics]
}

resource "kubernetes_secret" "operational_metrics_secrets" {
Why does this need a separate GitHub token instead of reusing one of the existing ones?
It's the same GitHub token, just under a separate secrets object to keep the premerge metrics and the operational metrics separate.
That said, I'm not opposed to scrapping this and just reusing the metrics secrets if that's more appropriate.
I don't think this creates any tangible separation if they're the same token. You should reuse the metrics container secret, but probably rename the kubernetes_secret object and maybe the underlying GCP object. You'll need to use a Terraform moved block (https://developer.hashicorp.com/terraform/language/modules/develop/refactoring#moved-block-syntax) so that TF doesn't try to delete and recreate everything.
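For illustration, a minimal sketch of such a moved block, assuming the secret is renamed from operational_metrics_secrets to a hypothetical metrics_secrets (the actual new name would be whatever the renamed resource ends up being called):

# Hypothetical rename: the "to" address below is a placeholder, not the
# actual name chosen in this PR. Terraform 1.1+ is assumed.
moved {
  from = kubernetes_secret.operational_metrics_secrets
  to   = kubernetes_secret.metrics_secrets
}

With this in place, terraform plan should report the resource as moved rather than destroyed and recreated.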
These resources are for a CronJob that executes the container at ghcr.io/llvm/operations-metrics:latest on a daily basis (07:00 UTC), scraping daily metrics about LLVM's commit volume and uploading them for visualization in Grafana.
Changes were made to the already existing Terraform files since many of the same resources are being reused anyway. This way we can keep all relevant changes in the same place instead of having two separate Terraform directories that access and modify shared resources.
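As a rough sketch of how the daily schedule might be expressed with Terraform's Kubernetes provider (the resource layout, names, and namespace reference here are assumptions, not the exact resources in this PR):

# Illustrative sketch only; names and structure are assumptions.
resource "kubernetes_cron_job_v1" "operational_metrics" {
  metadata {
    name      = "operational-metrics"
    namespace = kubernetes_namespace.operational_metrics.metadata[0].name
  }
  spec {
    schedule = "0 7 * * *" # daily at 07:00 UTC
    job_template {
      metadata {}
      spec {
        template {
          metadata {}
          spec {
            container {
              name  = "operational-metrics"
              image = "ghcr.io/llvm/operations-metrics:latest"
            }
            restart_policy = "Never"
          }
        }
      }
    }
  }
}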
Since the container needs access to the BigQuery Google Cloud API, IAM and Kubernetes service accounts were used to grant that access via Workload Identity Federation for GKE. More details at https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
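For reference, a minimal sketch of that Workload Identity wiring. The service account names, namespace, project ID, and BigQuery role below are placeholders, not the values used in this PR:

# Illustrative sketch; account names, PROJECT_ID, and the role are assumptions.
resource "google_service_account" "operational_metrics" {
  account_id   = "operational-metrics"
  display_name = "Operational metrics CronJob"
}

# Allow the Kubernetes service account to impersonate the IAM service account.
resource "google_service_account_iam_member" "operational_metrics_workload_identity" {
  service_account_id = google_service_account.operational_metrics.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:PROJECT_ID.svc.id.goog[operational-metrics/operational-metrics]"
}

# Grant the IAM service account access to BigQuery (role is a placeholder).
resource "google_project_iam_member" "operational_metrics_bigquery" {
  project = "PROJECT_ID"
  role    = "roles/bigquery.dataEditor"
  member  = "serviceAccount:${google_service_account.operational_metrics.email}"
}

# The Kubernetes service account used by the CronJob's pods.
resource "kubernetes_service_account" "operational_metrics" {
  metadata {
    name      = "operational-metrics"
    namespace = kubernetes_namespace.operational_metrics.metadata[0].name
    annotations = {
      "iam.gke.io/gcp-service-account" = google_service_account.operational_metrics.email
    }
  }
}

The iam.gke.io/gcp-service-account annotation on the Kubernetes service account is what lets GKE map pods using it to the IAM service account.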