feat: Allow suppress diff line output by regex #475
Conversation
Signed-off-by: Jan-Otto Kröpke <[email protected]>
@yxxhero @databus23 Do you have the time to look into it?
@mumoshu @yxxhero @databus23 Please let me know if I can assist here.
@mumoshu @yxxhero @databus23 I would appreciate a review here.
Co-authored-by: Yusuke Kuoka <[email protected]>
Signed-off-by: Jan-Otto Kröpke <[email protected]>
@mumoshu @yxxhero @databus23 I would appreciate an additional review here.
Hi @mumoshu @yxxhero @databus23, I would appreciate an additional review here.
LGTM. Thanks a lot for your patience and contribution @jkroepke!!
Thanks a lot! @mumoshu, do you plan a release which includes this change?
@jkroepke Indeed! For transparency, the last thing we need before cutting the next release is to somehow modify another feature we merged recently (#458) to make dry-run=server an optional feature. That's to address @dudicoco's great insight shared in #449 (comment)
When will this feature be released, approximately? Desperately waiting for it.
I too will sing your praises when this is released |
Hi @jkroepke. Can you please document the feature within the readme?
Hi @dudicoco, in our setup we are using this to omit single lines. For multiple lines you have to be a regex pro; maybe this can work for you: https://regex101.com/r/OHEFVb/1
Thanks @jkroepke. The example you provided did not work. I even tried a simpler example that just captures the first two lines of the ports block, and it also didn't work. So is the issue with the regex, or does the new feature simply not support multi-line regexes?
You are right, multi-line regexes are not supported. The reason: the underlying library generates a line-by-line diff, and the regex is matched against each line individually. Technically a multi-line regex is accepted, but since the diff is produced line by line, a regex spanning multiple lines will never match. The chances of supporting this are very low, because helm-diff would first have to merge the individual lines, which would make things far too complex.
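The line-by-line limitation can be illustrated with a small sketch (Python for illustration only; helm-diff itself is written in Go, and the line content below is shortened from the example output in this thread):

```python
import re

# A diff report as helm-diff produces it: delivered one line at a time,
# so no element ever contains a newline character.
diff_lines = [
    '-     app.kubernetes.io/version: "1.5.0"',
    '+     app.kubernetes.io/version: "1.6.0"',
]

# A single-line pattern matches, because it is tested against each line.
single_line = re.compile(r'app\.kubernetes\.io/version')
print(any(single_line.search(line) for line in diff_lines))   # True

# A pattern spanning two lines compiles fine, but can never match:
# the '\n' it requires never occurs inside any individual line.
multi_line = re.compile(r'-.*version.*\n\+.*version')
print(any(multi_line.search(line) for line in diff_lines))    # False
```

This is why the regex101 example above works on the pasted manifest as a whole, but not against a report that is filtered line by line.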
Thanks for the info @jkroepke
Does this affect only the visible output, or status codes/API results as well? I'm asking because Helmfile uses helm-diff under the hood to determine if a release is outdated, and some charts will always be re-deployed due to suboptimal design choices (e.g. random DB passwords with no way to configure them via values).
Only the visible output.
but maybe
This PR allows suppressing lines of the diff output via regular expressions.
The option is designed for power users and gives them full control over the output.
There is a new diff option `--suppress-output-line-regex`, which can be applied multiple times. If a line of the report matches the regex, the line is removed from the report. If a diff entry is then left with no deltas, the whole diff entry (file) is removed, except for its headline, e.g. `default, nginx, Deployment (apps) has changed:`.
Since `--suppress-output-line-regex` applies only to the output of a diff, the behavior of `--detailed-exit-code` is not touched: if there are differences that are suppressed, the exit code still remains 2.
Feature in action:
```shell
helm diff upgrade prometheus-node-exporter prometheus-community/prometheus-node-exporter --version 4.18.0
```

```diff
default, prometheus-node-exporter, DaemonSet (apps) has changed:
  # Source: prometheus-node-exporter/templates/daemonset.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: prometheus-node-exporter
    namespace: default
    labels:
-     helm.sh/chart: prometheus-node-exporter-4.17.0
+     helm.sh/chart: prometheus-node-exporter-4.18.0
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: metrics
      app.kubernetes.io/part-of: prometheus-node-exporter
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
-     app.kubernetes.io/version: "1.5.0"
+     app.kubernetes.io/version: "1.6.0"
  spec:
    selector:
      matchLabels:
        app.kubernetes.io/name: prometheus-node-exporter
        app.kubernetes.io/instance: prometheus-node-exporter
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate
    template:
      metadata:
        annotations:
          cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
        labels:
-         helm.sh/chart: prometheus-node-exporter-4.17.0
+         helm.sh/chart: prometheus-node-exporter-4.18.0
          app.kubernetes.io/managed-by: Helm
          app.kubernetes.io/component: metrics
          app.kubernetes.io/part-of: prometheus-node-exporter
          app.kubernetes.io/name: prometheus-node-exporter
          app.kubernetes.io/instance: prometheus-node-exporter
-         app.kubernetes.io/version: "1.5.0"
+         app.kubernetes.io/version: "1.6.0"
      spec:
        automountServiceAccountToken: false
        securityContext:
          fsGroup: 65534
          runAsGroup: 65534
          runAsNonRoot: true
          runAsUser: 65534
        serviceAccountName: prometheus-node-exporter
        containers:
          - name: node-exporter
-           image: quay.io/prometheus/node-exporter:v1.5.0
+           image: quay.io/prometheus/node-exporter:v1.6.0
            imagePullPolicy: IfNotPresent
            args:
              - --path.procfs=/host/proc
              - --path.sysfs=/host/sys
              - --path.rootfs=/host/root
              - --path.udev.data=/host/root/run/udev/data
              - --web.listen-address=[$(HOST_IP)]:9100
            securityContext:
              readOnlyRootFilesystem: true
            env:
              - name: HOST_IP
                value: 0.0.0.0
            ports:
              - name: metrics
                containerPort: 9100
                protocol: TCP
            livenessProbe:
              failureThreshold: 3
              httpGet:
                httpHeaders:
                path: /
                port: 9100
                scheme: HTTP
              initialDelaySeconds: 0
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            readinessProbe:
              failureThreshold: 3
              httpGet:
                httpHeaders:
                path: /
                port: 9100
                scheme: HTTP
              initialDelaySeconds: 0
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            volumeMounts:
              - name: proc
                mountPath: /host/proc
                readOnly: true
              - name: sys
                mountPath: /host/sys
                readOnly: true
              - name: root
                mountPath: /host/root
                mountPropagation: HostToContainer
                readOnly: true
        hostNetwork: true
        hostPID: true
+       nodeSelector:
+         kubernetes.io/os: linux
        tolerations:
          - effect: NoSchedule
            operator: Exists
        volumes:
          - name: proc
            hostPath:
              path: /proc
          - name: sys
            hostPath:
              path: /sys
          - name: root
            hostPath:
              path: /
default, prometheus-node-exporter, Service (v1) has changed:
  # Source: prometheus-node-exporter/templates/service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: prometheus-node-exporter
    namespace: default
    labels:
-     helm.sh/chart: prometheus-node-exporter-4.17.0
+     helm.sh/chart: prometheus-node-exporter-4.18.0
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: metrics
      app.kubernetes.io/part-of: prometheus-node-exporter
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
-     app.kubernetes.io/version: "1.5.0"
+     app.kubernetes.io/version: "1.6.0"
    annotations:
+     prometheus.io/scrape: "true"
      prometheus.io/scrape: "true"
  spec:
    type: ClusterIP
    ports:
      - port: 9100
        targetPort: 9100
        protocol: TCP
        name: metrics
    selector:
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
default, prometheus-node-exporter, ServiceAccount (v1) has changed:
  # Source: prometheus-node-exporter/templates/serviceaccount.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: prometheus-node-exporter
    namespace: default
    labels:
-     helm.sh/chart: prometheus-node-exporter-4.17.0
+     helm.sh/chart: prometheus-node-exporter-4.18.0
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: metrics
      app.kubernetes.io/part-of: prometheus-node-exporter
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
-     app.kubernetes.io/version: "1.5.0"
+     app.kubernetes.io/version: "1.6.0"
```

```shell
helm diff upgrade prometheus-node-exporter prometheus-community/prometheus-node-exporter --version 4.18.0 --suppress-output-line-regex "helm.sh/chart" --suppress-output-line-regex "version"
```

```diff
default, prometheus-node-exporter, DaemonSet (apps) has changed:
  # Source: prometheus-node-exporter/templates/daemonset.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: prometheus-node-exporter
    namespace: default
    labels:
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: metrics
      app.kubernetes.io/part-of: prometheus-node-exporter
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
  spec:
    selector:
      matchLabels:
        app.kubernetes.io/name: prometheus-node-exporter
        app.kubernetes.io/instance: prometheus-node-exporter
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate
    template:
      metadata:
        annotations:
          cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
        labels:
          app.kubernetes.io/managed-by: Helm
          app.kubernetes.io/component: metrics
          app.kubernetes.io/part-of: prometheus-node-exporter
          app.kubernetes.io/name: prometheus-node-exporter
          app.kubernetes.io/instance: prometheus-node-exporter
      spec:
        automountServiceAccountToken: false
        securityContext:
          fsGroup: 65534
          runAsGroup: 65534
          runAsNonRoot: true
          runAsUser: 65534
        serviceAccountName: prometheus-node-exporter
        containers:
          - name: node-exporter
-           image: quay.io/prometheus/node-exporter:v1.5.0
+           image: quay.io/prometheus/node-exporter:v1.6.0
            imagePullPolicy: IfNotPresent
            args:
              - --path.procfs=/host/proc
              - --path.sysfs=/host/sys
              - --path.rootfs=/host/root
              - --path.udev.data=/host/root/run/udev/data
              - --web.listen-address=[$(HOST_IP)]:9100
            securityContext:
              readOnlyRootFilesystem: true
            env:
              - name: HOST_IP
                value: 0.0.0.0
            ports:
              - name: metrics
                containerPort: 9100
                protocol: TCP
            livenessProbe:
              failureThreshold: 3
              httpGet:
                httpHeaders:
                path: /
                port: 9100
                scheme: HTTP
              initialDelaySeconds: 0
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            readinessProbe:
              failureThreshold: 3
              httpGet:
                httpHeaders:
                path: /
                port: 9100
                scheme: HTTP
              initialDelaySeconds: 0
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            volumeMounts:
              - name: proc
                mountPath: /host/proc
                readOnly: true
              - name: sys
                mountPath: /host/sys
                readOnly: true
              - name: root
                mountPath: /host/root
                mountPropagation: HostToContainer
                readOnly: true
        hostNetwork: true
        hostPID: true
+       nodeSelector:
+         kubernetes.io/os: linux
        tolerations:
          - effect: NoSchedule
            operator: Exists
        volumes:
          - name: proc
            hostPath:
              path: /proc
          - name: sys
            hostPath:
              path: /sys
          - name: root
            hostPath:
              path: /
default, prometheus-node-exporter, Service (v1) has changed:
  # Source: prometheus-node-exporter/templates/service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: prometheus-node-exporter
    namespace: default
    labels:
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: metrics
      app.kubernetes.io/part-of: prometheus-node-exporter
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
    annotations:
+     prometheus.io/scrape: "true"
      prometheus.io/scrape: "true"
  spec:
    type: ClusterIP
    ports:
      - port: 9100
        targetPort: 9100
        protocol: TCP
        name: metrics
    selector:
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
default, prometheus-node-exporter, ServiceAccount (v1) has changed:
```
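The behavior demonstrated above can be sketched in a few lines (a minimal Python sketch of the described semantics, not the plugin's actual Go implementation; `suppress` is a hypothetical helper):

```python
import re

def suppress(report_lines, patterns):
    """Mimic --suppress-output-line-regex: drop matching lines; if an
    entry keeps no +/- deltas, keep only its headline."""
    regexes = [re.compile(p) for p in patterns]
    # Split the report into entries at the "... has changed:" headlines.
    entries, current = [], None
    for line in report_lines:
        if line.endswith("has changed:"):
            current = [line]
            entries.append(current)
        elif current is not None:
            current.append(line)
    out = []
    for entry in entries:
        headline, body = entry[0], entry[1:]
        kept = [l for l in body if not any(r.search(l) for r in regexes)]
        has_delta = any(l.startswith(("+", "-")) for l in kept)
        out.append(headline)       # the headline always survives
        if has_delta:
            out.extend(kept)
    return out

report = [
    "default, prometheus-node-exporter, ServiceAccount (v1) has changed:",
    '-     helm.sh/chart: prometheus-node-exporter-4.17.0',
    '+     helm.sh/chart: prometheus-node-exporter-4.18.0',
]
# Both deltas match, so only the headline remains. The real plugin's
# --detailed-exit-code would still return 2, since suppression only
# affects the printed output.
print(suppress(report, ["helm.sh/chart"]))
```

This matches the ServiceAccount entry in the second run above, which collapses to its headline once both of its changed lines are suppressed.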