This pattern showcases Zero Trust capabilities across Red Hat's product portfolio in a reproducible manner.
The following components are included in the Layered Zero Trust Pattern:
- OpenShift cluster hardening
- Red Hat Build of Keycloak
  - Identities to access pattern components
  - OIDC client to authenticate users to a web application
- Red Hat Zero Trust Workload Identity Manager
  - Provides identities to workloads running within OpenShift
- HashiCorp Vault
  - Secure storage of sensitive assets
- External Secrets Operator (ESO)
  - Synchronizes secrets stored in HashiCorp Vault with OpenShift
- Red Hat Advanced Cluster Management (ACM)
  - Provides a management control plane in multi-cluster scenarios
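Once the pattern is deployed, a rough way to confirm these components are present is to list the operator subscriptions across the cluster; the exact subscription names and namespaces depend on the pattern's chart configuration:

```shell
# Expect subscriptions for RHBK, Zero Trust Workload Identity Manager,
# External Secrets, and ACM after installation
oc get subscriptions.operators.coreos.com -A
```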
Use the following steps to prepare your machine and complete the prerequisites.
- An OpenShift Container Platform 4.19+ cluster with:
  - Publicly signed certificates for Ingress
  - A default `StorageClass` which provides dynamic `PersistentVolume` storage
- To customize the provided default configuration, a GitHub account and a token with repository permissions (to read from and write to your fork) are required.
- Access to Podman (or Docker) for running the container images used by the `pattern.sh` script for provisioning.
- Validated Patterns Tooling
- Depending on the characteristics of your cluster, you may need additional hardware resources for the Advanced Cluster Management (ACM) component. For a single-node cluster, you can start with 4 vCPUs, 16 GB of memory, and 120 GB of storage. For more detailed information about ACM sizing, refer to the official documentation.
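As a quick pre-flight check, the following standard commands verify the main prerequisites, assuming you are already logged in to the cluster with `oc`:

```shell
# Cluster version should report 4.19 or later
oc version

# One StorageClass should be marked "(default)" for dynamic PersistentVolume storage
oc get storageclass

# Podman (or Docker) must be available to run the pattern's container images
podman --version
```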
Warning
The default deployment of this pattern assumes that none of the components associated with the pattern have been deployed previously. Ensure that your OpenShift environment does not include any of the preceding components.
- While not required, it is recommended that you fork the Validated Pattern repository. From the layered-zero-trust repository on GitHub, click the Fork button.
- Clone the forked copy of this repository by running the following command:

  ```shell
  git clone [email protected]:<your-username>/layered-zero-trust.git
  ```
- Navigate to your repository. Ensure you are in the root directory of your Git repository:

  ```shell
  cd /path/to/your/repository
  ```
- Run the following command to set the upstream repository:

  ```shell
  git remote add -f upstream https://github.com/validatedpatterns/layered-zero-trust.git
  ```
- Verify the setup of your remote repositories by running the following command:

  ```shell
  git remote -v
  ```
  Example output:

  ```
  origin    [email protected]:<your-username>/layered-zero-trust.git (fetch)
  origin    [email protected]:<your-username>/layered-zero-trust.git (push)
  upstream  https://github.com/validatedpatterns/layered-zero-trust.git (fetch)
  upstream  https://github.com/validatedpatterns/layered-zero-trust.git (push)
  ```
- Create a local copy of the secret values file that can safely include credentials. Run the following command:

  ```shell
  cp values-secret.yaml.template ~/values-secret-layered-zero-trust.yaml
  ```

  > [!NOTE]
  > Putting the values secret file in your home directory ensures that it does not get pushed to your Git repository. It is based on the `values-secret.yaml.template` file provided by the pattern in the top-level directory. When you create your own patterns, you will add your secrets to this file and save it. At the moment the focus is on getting started with and becoming familiar with this pattern.
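  For orientation, a values secret file generally follows the structure sketched below. The entries here are hypothetical; the actual secret names and fields this pattern expects are defined in `values-secret.yaml.template`:

  ```yaml
  # Illustrative structure only; copy the real entries from
  # values-secret.yaml.template in the repository root.
  version: "2.0"
  secrets:
    - name: example-secret        # hypothetical secret name
      fields:
        - name: value
          value: changeme         # replace with a real credential
  ```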
- Create a new feature branch, for example `my-branch`, from the `main` branch for your content:

  ```shell
  git checkout -b my-branch main
  ```
- Perform any desired changes to the Helm values files to customize the execution of the pattern (optional). Commit the changes:

  ```shell
  git add <file(s)>
  git commit -m "Pattern customization"
  ```
- Push the changes in the branch to your forked repository:

  ```shell
  git push origin my-branch
  ```
The `pattern.sh` script is used to deploy the Layered Zero Trust Validated Pattern.
- Log in to your OpenShift cluster:

  a. Obtain an API token by visiting `https://oauth-openshift.apps.<your-cluster>.<domain>/oauth/token/request`.

  b. Log in with the retrieved token by running the following command:

  ```shell
  oc login --token=<retrieved-token> --server=https://api.<your-cluster>.<domain>:6443
  ```
- Alternatively, log in by referencing an existing kubeconfig file:

  ```shell
  export KUBECONFIG=~/<path_to_kubeconfig>
  ```
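  Either way, you can confirm the session before deploying:

  ```shell
  # Verify the logged-in user and the API server the session points at
  oc whoami
  oc whoami --show-server
  ```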
- Deploy the pattern.

  Default deployment (without Quay, using other registry options):

  ```shell
  ./pattern.sh make install
  ```
  Deployment with optional Layer 1 components (Quay Registry, RHTAS):

  To include optional components like Quay Registry + NooBaa MCG storage or RHTAS:

  - Edit `values-hub.yaml`
  - Uncomment the relevant sections for the components you want
  - Run:

    ```shell
    ./pattern.sh make install
    ```
  Note: Optional Layer 1 components are commented out by default. Quay Registry + NooBaa MCG adds ~6 CPU cores, ~12Gi memory, and ~10Gi storage when enabled.
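  Before enabling these optional components, you can check whether the cluster has headroom; `oc adm top nodes` is standard but requires cluster metrics to be available:

  ```shell
  # Current CPU and memory utilization per node
  oc adm top nodes
  ```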
Once the pattern has been successfully deployed, you can review the deployed components. Each is deployed and managed using OpenShift GitOps.
Two instances of OpenShift GitOps have been deployed to your Hub Cluster. Each can be seen within the OpenShift Console by selecting the Application Selector in the top navigation bar (the box with nine smaller squares).
- Cluster Argo CD - Deploys an Argo CD App of Apps Application called `layered-zero-trust-hub`, which deploys the components associated with the pattern. These Applications are managed by the Hub Argo CD instance (see below).
- Hub Argo CD - Manages the individual components associated with the pattern on the Hub OpenShift instance.
If all of the Argo CD applications are reporting healthy, the pattern has been deployed successfully.
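You can also verify this from the CLI; the following query lists every Argo CD Application along with the standard Argo CD health and sync status fields:

```shell
# All Applications should eventually report Healthy and Synced
oc get applications.argoproj.io -A \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,HEALTH:.status.health.status,SYNC:.status.sync.status'
```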
One of the most common application design patterns is a frontend application that leverages a database for persistent storage. Instead of traditional methods for accessing the database from the frontend application using static values stored within the application, this pattern makes use of "just in time" methods for obtaining these database values from a credential store. A more detailed overview is located below:
- qtodo - Quarkus-based frontend application. Access is protected via OIDC-based authentication with users defined within an external identity store (Red Hat Build of Keycloak by default)
- PostgreSQL - Relational database for use by the qtodo application. Credentials are generated dynamically and stored within Vault.
- External identity store - Users have been defined to enable access to the qtodo frontend. OIDC clients have also been created and configured within the qtodo application
- HashiCorp Vault - Several features are leveraged within this use case:
  - Secrets store - Storage of sensitive values for components including PostgreSQL and RHBK
  - JWT-based authentication - Enables access using ZTWIM-based identities
- Zero Trust Workload Identity - Enables an identity to be assigned to the qtodo application to communicate with HashiCorp Vault and obtain the credentials to access the PostgreSQL database
- spiffe-helper - Supplemental component provided for the qtodo application to dynamically fetch JWT-based identities from the SPIFFE Workload API.
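To make the Vault integration concrete, the exchange looks roughly like the following. This is a minimal sketch: the SVID file path, auth mount, and role name are assumptions for illustration, and the pattern wires up the real values automatically via spiffe-helper and its Helm charts.

```shell
# spiffe-helper writes a JWT-SVID to a file; the workload (or a sidecar)
# can exchange it for a Vault token using Vault's JWT auth method.
JWT=$(cat /run/spiffe/jwt_svid.token)            # file path assumed for illustration
vault write auth/jwt/login role=qtodo jwt="$JWT" # mount and role are hypothetical
```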
With an understanding of the goals and architecture of qtodo, launch the OpenShift Console of the Hub cluster and navigate to the qtodo project:
- Select Home -> Projects from the left-hand navigation bar
- Locate and select the qtodo project
The qtodo Quarkus application and qtodo-db PostgreSQL database are found within this namespace.
View the running pods associated with these components
- Select Workloads -> Pods from the left-hand navigation bar
- Explore both the qtodo and qtodo-db pods.
Take note that the qtodo Pod makes use of a series of init containers and sidecar containers that supply the application with the credentials it needs.
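You can also list these containers from the CLI; the pod name carries a generated suffix, so look it up first:

```shell
# Find the qtodo pod, then inspect its init and sidecar containers
oc get pods -n qtodo
oc describe pod <qtodo-pod-name> -n qtodo
```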
Access the qtodo application in a browser. The URL can be found on the OpenShift Route in the qtodo project:
- Select Networking -> Routes from the left-hand navigation bar
- Click the arrow next to the URL underneath the Location column to open the qtodo application in a new browser tab
You will be presented with a login page to access the application. When using the default External Identity Provider (RHBK), two users (qtodo-admin and qtodo-user) were provisioned automatically. Their initial credentials are stored in a Secret in the keycloak-system namespace called keycloak-users. You can reveal the credentials by switching to the tab containing the OpenShift Console using the following steps:
- Select Home -> Projects from the left-hand navigation bar
- Locate and select the keycloak-system project
- Select Workloads -> Secrets from the left-hand navigation bar
- Select the keycloak-users Secret
- Click the Reveal values link to uncover the underlying values for the users
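The same values can also be read from the CLI; `oc extract` decodes each key of a Secret to stdout:

```shell
# Print the decoded user credentials from the keycloak-users Secret
oc extract secret/keycloak-users -n keycloak-system --to=-
```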
Switch back to the qtodo application and enter the username and password on the login page for one of the users using the values discovered previously.
Once you have authenticated to RHBK, you will be prompted to change the temporary password and set a permanent one. Once complete, you will be redirected to the qtodo application, verifying that the OIDC-based authentication functions properly.
Feel free to add new items to the list of todos. If you can add and remove items from the page, the integration between the Quarkus application and the backend PostgreSQL database, using credentials sourced from HashiCorp Vault, is working successfully.
Warning
Since the ACM chart's provisioning functionality uses ClusterPools, and this technology is limited to cloud environments, we do not recommend using those configuration settings.
Instead, we have enabled the option to import your existing standalone clusters using the acm-managed-clusters chart.
The pattern supports importing pre-existing OpenShift clusters into the Hub cluster, converting them into Managed Clusters.
- Copy the `kubeconfig` file of the cluster you want to import to your local system.
- In the `values-secret.yaml` file, add a new secret with the contents of the `kubeconfig` file:

  ```yaml
  - name: kubeconfig-spoke
    vaultPrefixes:
      - hub
    fields:
      - name: content
        path: ~/.kube/kubeconfig-ztvp-spoke
  ```
- In the `values-hub.yaml` file, add a new entry in the `clusterGroup.managedClusterGroups` key:

  ```yaml
  managedClusterGroups:
    exampleRegion:
      name: group-one
      acmlabels:
        - name: clusterGroup
          value: group-one
      helmOverrides:
        - name: clusterGroup.isHubCluster
          value: false
  ```
- Also in the `values-hub.yaml` file, add your cluster definition in the `acmManagedClusters.clusters` key:

  ```yaml
  acmManagedClusters:
    clusters:
      - name: ztvp-spoke-1
        clusterGroup: group-one
        labels:
          cloud: auto-detect
          vendor: auto-detect
        kubeconfigVaultPath: secret/data/hub/kubeconfig-spoke
  ```
- Deploy the pattern.
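After the deployment reconciles, you can verify the import from the Hub cluster; ManagedCluster is the ACM resource that represents an imported cluster:

```shell
# The spoke cluster should appear with HUB ACCEPTED and AVAILABLE reporting True
oc get managedclusters
```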