# Doc: catalog bootstrap steps for helm deployment #1243
`@@ -96,35 +96,90 @@ eval $(minikube -p minikube docker-env)`
The instructions below assume a local Kubernetes cluster is running and Helm is installed.

#### Common setup

Create and populate the target namespace:

```bash
kubectl create namespace polaris
kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/

kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s
```
The `helm/polaris/ci` directory contains a number of values files that can be used to install the chart with different configurations.
You can also run `ct` (chart-testing):

```bash
ct lint --charts helm/polaris
ct install --namespace polaris --debug --charts ./helm/polaris
```
Below are two sample deployment models for installing the chart: one with a non-persistent backend and another with a persistent backend.
#### Non-persistent backend

Install the chart with a non-persistent backend. From the Polaris repo root:

```bash
helm upgrade --install --namespace polaris \
  --debug --values helm/polaris/ci/simple-values.yaml \
  polaris helm/polaris
```
#### Persistent backend

> [!WARNING]
> The Postgres deployment set up in the fixtures directory is intended for testing purposes only and is not suitable for production use. For production deployments, use a managed Postgres service or a properly configured and secured Postgres instance.
Install the chart with a persistent backend. From the Polaris repo root:

```bash
helm upgrade --install --namespace polaris \
  --debug --values helm/polaris/ci/persistence-values.yaml \
  polaris helm/polaris

kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=polaris --timeout=120s
```
After deploying the chart with a persistent backend, the `persistence.xml` file, originally loaded into the Kubernetes pod via a secret, can be accessed locally if needed. This file contains the persistence configuration required for the next steps. Use the following command to retrieve it:

```bash
kubectl exec -it -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') -- cat /deployments/config/persistence.xml > persistence.xml
```
The `persistence.xml` file references the Postgres hostname as `postgres`. Update it to `localhost` to enable local connections:

```bash
# -i.bak (no space before the suffix) works with both GNU and BSD/macOS sed
sed -i.bak 's/postgres:/localhost:/g' persistence.xml
```
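If you want to sanity-check the substitution before touching the real file, it can be exercised against a stand-in fragment. The file name, property name, and database name below are illustrative only, not the actual contents of `persistence.xml`:

```shell
# Stand-in config containing the in-cluster Postgres hostname (illustrative values).
cat > /tmp/persistence-demo.xml <<'EOF'
<property name="jakarta.persistence.jdbc.url"
          value="jdbc:postgresql://postgres:5432/polaris"/>
EOF

# Same substitution as above; the original is preserved in the .bak backup.
sed -i.bak 's/postgres:/localhost:/g' /tmp/persistence-demo.xml

# The host part is rewritten while the "postgresql:" driver prefix is untouched.
grep 'jdbc:postgresql://localhost:5432/polaris' /tmp/persistence-demo.xml
```

Note that the pattern `postgres:` matches only the hostname followed by the port separator, so the `jdbc:postgresql://` scheme prefix is left intact.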
To access Polaris and Postgres locally, set up port forwarding for both services:

```bash
# Run each command in a separate terminal; port-forward blocks the foreground.
kubectl port-forward -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') 8181:8181

kubectl port-forward -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=postgres -o jsonpath='{.items[0].metadata.name}') 5432:5432
```
> **Comment on lines +160 to +164**
>
> **Contributor:** Not familiar with Helm charts; wondering if this could be automated. Not a blocker though.
>
> **Author:** By default, nothing running in k8s is exposed to the host computer. To expose a service from k8s locally, there are a couple of options, such as a node port, an ingress, an API gateway, or simple port forwarding. Our current Helm chart does support ingress (via the proper config, provided an ingress controller is installed; ingress controller installation is not part of the Polaris Helm deployment). If that is the preferred way to avoid port forwarding, I can draft a PR for it as well. It also means that when end users run a proper Postgres setup, this won't be an issue at all: port forwarding will no longer be needed, and there will be no need to manually change the hostname from `postgres` to `localhost`, since the config will contain a resolvable FQDN that is routable both within the k8s cluster and from users' computers.
>
> **Contributor:** Thanks for the explanation. I'm not sure whether ingress fits or not; I will defer to you and others to make that decision. It isn't a blocker for this PR anyway.
Run the catalog bootstrap using the Polaris admin tool. This step initializes the catalog with the required configuration:

```bash
java -Dpolaris.persistence.eclipselink.configuration-file=./persistence.xml \
  -Dpolaris.persistence.eclipselink.persistence-unit=polaris \
  -jar quarkus/admin/build/polaris-quarkus-admin-*-runner.jar \
  bootstrap -c POLARIS,root,pass -r POLARIS
```
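The `-c` argument packs the realm credentials as comma-separated fields. Based on the admin tool's `bootstrap --credential` convention, the three fields appear to be realm, client id, and client secret; this mapping is an assumption, so verify it against the admin tool's `--help` output. A minimal sketch of how the triple splits:

```shell
# Split the credential triple exactly as written in "-c POLARIS,root,pass".
# Assumed field meaning: realm, client id, client secret (verify with --help).
cred="POLARIS,root,pass"
IFS=',' read -r realm client_id client_secret <<EOF
$cred
EOF
echo "realm=$realm client_id=$client_id client_secret=$client_secret"
# prints: realm=POLARIS client_id=root client_secret=pass
```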
### Uninstalling

```bash
helm uninstall --namespace polaris polaris

kubectl delete --namespace polaris -f helm/polaris/ci/fixtures/

kubectl delete namespace polaris
```
## Values
> **Comment:** Curious if we could change the value in persistence.yaml to `localhost`?
>
> **Author:** We can't. When this file gets loaded into the pod, it is via a k8s secret, and the file uses the internal DNS record of Postgres's Service to do the routing. Within the k8s pod, `postgres` therefore resolves, which is what allows traffic to be routed from the Polaris pod to the Postgres pod. The alternative would be a separate file set to `localhost` (a bit duplicative). The whole reason for `localhost` is the port-forwarding piece, where we port forward the k8s Postgres pod to local.
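The hostname distinction described above can be illustrated with the two JDBC URLs involved. The property name and database name are illustrative; the actual contents of `persistence.xml` may differ:

```xml
<!-- Inside the cluster: the Postgres Service name resolves via cluster DNS. -->
<property name="jakarta.persistence.jdbc.url"
          value="jdbc:postgresql://postgres:5432/polaris"/>

<!-- From the developer machine, after port forwarding 5432:5432. -->
<property name="jakarta.persistence.jdbc.url"
          value="jdbc:postgresql://localhost:5432/polaris"/>
```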