69 changes: 62 additions & 7 deletions helm/polaris/README.md
@@ -96,35 +96,90 @@ eval $(minikube -p minikube docker-env)

The instructions below assume a local Kubernetes cluster is running and Helm is installed.

#### Common setup

Create and populate the target namespace:

```bash
kubectl create namespace polaris
kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/

kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s
```
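
Optionally, confirm that the fixture resources (for example the Postgres pod and its service) came up as expected:

```bash
kubectl get pods,services --namespace polaris
```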

The `helm/polaris/ci` directory contains a number of values files that can be used to install the chart with different configurations.
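
For example, to see which values files are available:

```bash
ls helm/polaris/ci/
```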

You can also run `ct` (chart-testing):

```bash
ct lint --charts helm/polaris
ct install --namespace polaris --debug --charts ./helm/polaris
```

Below are two sample deployment models for installing the chart: one with a non-persistent backend and another with a persistent backend.

#### Non-persistent backend

Install the chart with a non-persistent backend. From the Polaris repo root:

```bash
helm upgrade --install --namespace polaris \
  --debug --values helm/polaris/ci/simple-values.yaml \
  polaris helm/polaris
```
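
Optionally, verify the release and wait for the Polaris pod to become ready (the label selector is the same one used elsewhere in this guide):

```bash
helm list --namespace polaris

kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=polaris --timeout=120s
```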

#### Persistent backend

> [!WARNING]
> The Postgres deployment set up in the fixtures directory is intended for testing purposes only and is not suitable for production use. For production deployments, use a managed Postgres service or a properly configured and secured Postgres instance.

Install the chart with a persistent backend. From the Polaris repo root:

```bash
helm upgrade --install --namespace polaris \
  --debug --values helm/polaris/ci/persistence-values.yaml \
  polaris helm/polaris

kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=polaris --timeout=120s
```

After deploying the chart with a persistent backend, the `persistence.xml` file, originally loaded into the Kubernetes pod via a secret, can be accessed locally if needed. This file contains the persistence configuration required for the next steps. Use the following command to retrieve it:

```bash
kubectl exec -it -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') -- cat /deployments/config/persistence.xml > persistence.xml
```
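
To see what the next step will rewrite, you can inspect the JDBC connection string in the retrieved file. This assumes a standard `jdbc:postgresql://...` URL; the exact property layout may differ in your configuration:

```bash
grep -o 'jdbc:postgresql://[^"]*' persistence.xml
```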

The `persistence.xml` file references the Postgres hostname as `postgres`. Update it to `localhost` to enable local connections:

```bash
sed -i .bak 's/postgres:/localhost:/g' persistence.xml
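# Note: "-i .bak" is the BSD/macOS sed form. With GNU sed (most Linux distributions),
# attach the suffix to the flag instead: sed -i.bak 's/postgres:/localhost:/g' persistence.xml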
```

> **Comment on lines +155 to +156**
>
> **Contributor:** Curious if we could change the value in `persistence.xml` to `localhost`?
>
> **Contributor (author):** We can't. The file is loaded into the pod via a Kubernetes secret, and it references the internal DNS name of the Postgres Service. Inside the cluster, `postgres` resolves, which is what routes traffic from the Polaris pod to the Postgres pod. The alternative would be a second, nearly identical file that points at `localhost`, which is a bit duplicative. `localhost` only matters here because we port-forward the Postgres pod to the local machine.

To access Polaris and Postgres locally, set up port forwarding for both services:
```bash
kubectl port-forward -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') 8181:8181

kubectl port-forward -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=postgres -o jsonpath='{.items[0].metadata.name}') 5432:5432
```
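
Optionally, sanity-check both forwards. Any HTTP response from Polaris confirms the first forward works; `pg_isready` (part of the Postgres client tools, if you have them installed) confirms the database is reachable:

```bash
curl -i http://localhost:8181

pg_isready -h localhost -p 5432
```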
> **Comment on lines +160 to +164**
>
> **Contributor:** Not familiar with Helm charts; wondering if this could be automated. Not a blocker though.
>
> **Contributor (author):** By default, nothing running in Kubernetes is exposed to the host machine. There are a few ways to expose a service locally, such as a NodePort service, an ingress, an API gateway, or simple port forwarding. The current Helm chart does support ingress (with the proper configuration and an installed ingress controller; installing the controller is not part of the Polaris Helm deployment). If that is the preferred way to avoid port forwarding, I can draft a PR for it as well.
>
> This also means that when end users run a proper Postgres setup, none of this is an issue: port forwarding is no longer needed, and there is no need to manually change the hostname from `postgres` to `localhost`, because the configuration will contain a resolvable FQDN that is routable both inside Kubernetes and from users' machines.
>
> **Contributor:** Thanks for the explanation. I'm not sure whether ingress fits or not; I'll defer to you and others to make that decision. It isn't a blocker for this PR anyway.


Run the catalog bootstrap using the Polaris admin tool. This step initializes the catalog with the required configuration:

```bash
java -Dpolaris.persistence.eclipselink.configuration-file=./persistence.xml \
  -Dpolaris.persistence.eclipselink.persistence-unit=polaris \
  -jar quarkus/admin/build/polaris-quarkus-admin-*-runner.jar \
  bootstrap -c POLARIS,root,pass -r POLARIS
```
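
If Polaris is still port-forwarded on 8181, a quick way to confirm the bootstrapped credentials work is to request a token. This is only a sketch: the token endpoint path, the realm header name, and the scope below are assumptions based on a typical Polaris / Iceberg REST setup and may need adjusting for your deployment.

```bash
# Hypothetical smoke test; the endpoint path, realm header, and scope are assumptions.
curl -i -X POST http://localhost:8181/api/catalog/v1/oauth/tokens \
  -H 'Polaris-Realm: POLARIS' \
  -d 'grant_type=client_credentials' \
  -d 'client_id=root' \
  -d 'client_secret=pass' \
  -d 'scope=PRINCIPAL_ROLE:ALL'
```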

### Uninstalling

```bash
helm uninstall --namespace polaris polaris

kubectl delete --namespace polaris -f helm/polaris/ci/fixtures/

kubectl delete namespace polaris
```

## Values
71 changes: 63 additions & 8 deletions helm/polaris/README.md.gotmpl
@@ -97,35 +97,90 @@ eval $(minikube -p minikube docker-env)

The instructions below assume a local Kubernetes cluster is running and Helm is installed.

#### Common setup

Create and populate the target namespace:

```bash
kubectl create namespace polaris
kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/

kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s
```

The `helm/polaris/ci` directory contains a number of values files that can be used to install the chart with different configurations.

You can also run `ct` (chart-testing):

```bash
ct lint --charts helm/polaris
ct install --namespace polaris --debug --charts ./helm/polaris
```

Below are two sample deployment models for installing the chart: one with a non-persistent backend and another with a persistent backend.

#### Non-persistent backend

Install the chart with a non-persistent backend. From the Polaris repo root:

```bash
helm upgrade --install --namespace polaris \
  --debug --values helm/polaris/ci/simple-values.yaml \
  polaris helm/polaris
```

#### Persistent backend

> [!WARNING]
> The Postgres deployment set up in the fixtures directory is intended for testing purposes only and is not suitable for production use. For production deployments, use a managed Postgres service or a properly configured and secured Postgres instance.

Install the chart with a persistent backend. From the Polaris repo root:

```bash
helm upgrade --install --namespace polaris \
  --debug --values helm/polaris/ci/persistence-values.yaml \
  polaris helm/polaris

kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=polaris --timeout=120s
```

After deploying the chart with a persistent backend, the `persistence.xml` file, originally loaded into the Kubernetes pod via a secret, can be accessed locally if needed. This file contains the persistence configuration required for the next steps. Use the following command to retrieve it:

```bash
kubectl exec -it -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') -- cat /deployments/config/persistence.xml > persistence.xml
```

The `persistence.xml` file references the Postgres hostname as `postgres`. Update it to `localhost` to enable local connections:

```bash
sed -i .bak 's/postgres:/localhost:/g' persistence.xml
```

To access Polaris and Postgres locally, set up port forwarding for both services:
```bash
kubectl port-forward -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') 8181:8181

kubectl port-forward -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=postgres -o jsonpath='{.items[0].metadata.name}') 5432:5432
```

Run the catalog bootstrap using the Polaris admin tool. This step initializes the catalog with the required configuration:

```bash
java -Dpolaris.persistence.eclipselink.configuration-file=./persistence.xml \
  -Dpolaris.persistence.eclipselink.persistence-unit=polaris \
  -jar quarkus/admin/build/polaris-quarkus-admin-*-runner.jar \
  bootstrap -c POLARIS,root,pass -r POLARIS
```

### Uninstalling

```bash
helm uninstall --namespace polaris polaris

kubectl delete --namespace polaris -f helm/polaris/ci/fixtures/

kubectl delete namespace polaris
```

{{ template "chart.valuesSection" . }}
12 changes: 6 additions & 6 deletions helm/polaris/ci/fixtures/postgres.yaml
```diff
@@ -25,11 +25,11 @@ spec:
   replicas: 1
   selector:
     matchLabels:
-      app: postgres
+      app.kubernetes.io/name: postgres
   template:
     metadata:
       labels:
-        app: postgres
+        app.kubernetes.io/name: postgres
     spec:
       containers:
         - name: postgres
@@ -65,8 +65,8 @@ metadata:
   name: postgres
 spec:
   selector:
-    app: postgres
+    app.kubernetes.io/name: postgres
   ports:
-  - protocol: TCP
-    port: 5432
-    targetPort: 5432
+    - protocol: TCP
+      port: 5432
+      targetPort: 5432
```