Component(s)
exporter/prometheusremotewrite
What happened?
Description
Writing metrics with the prometheusremotewrite exporter fails because the Prometheus instance rejects the compression type as unsupported. Explicitly setting compression: gzip still produces the error, and setting compression: none fails as well.
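For reference, a minimal sketch of the exporter settings that were tried (reconstructed from the full configuration below; both compression values produce the same failure):

exporters:
  prometheusremotewrite:
    endpoint: "http://192.168.255.155:9090/api/v1/otlp/v1/metrics"
    compression: gzip   # also tried: compression: none
    tls:
      insecure: true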
Steps to Reproduce
Expected Result
Metrics are written to the Prometheus instance with the configured compression type; per the error message, gzip (or no compression) should be accepted.
Actual Result
The collector is unable to remote-write to the Prometheus instance; every export attempt fails with HTTP 400 Bad Request and the batch is dropped.
Collector version
v0.114.0
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: test-ack-public
spec:
  mode: daemonset
  hostNetwork: true
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
      prometheus:
        config:
          scrape_configs:
            - job_name: 'otelcol'
              scrape_interval: 10s
              static_configs:
                - targets: ['0.0.0.0:8888']
    exporters:
      otlp:
        endpoint: "http://192.168.255.155:4317"
        tls:
          insecure: true
        compression: gzip
      prometheusremotewrite:
        endpoint: "http://192.168.255.155:9090/api/v1/otlp/v1/metrics"
        resource_to_telemetry_conversion:
          enabled: true
        compression: gzip
        tls:
          insecure: true
    processors:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15
      batch:
        send_batch_size: 10000
        timeout: 10s
      # k8sattributes processor to get the metadata from K8s
      k8sattributes:
        auth_type: "serviceAccount"
        passthrough: false
        extract:
          metadata:
            - k8s.pod.name
            - k8s.pod.uid
            - k8s.deployment.name
            - k8s.namespace.name
            - k8s.node.name
            - k8s.pod.start_time
            - k8s.cluster.uid
          # Pod labels which can be fetched via K8sattributeprocessor
          labels:
            - tag_name: key1
              key: label1
              from: pod
            - tag_name: key2
              key: label2
              from: pod
        # Pod association using resource attributes and connection
        pod_association:
          - sources:
              - from: resource_attribute
                name: k8s.pod.uid
              - from: resource_attribute
                name: k8s.pod.ip
              - from: connection
    service:
      telemetry:
        logs:
          level: "debug"
        metrics:
          address: "0.0.0.0:8888"
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch, k8sattributes]
          exporters: [otlp]
        metrics:
          receivers: [prometheus]
          processors: [memory_limiter, batch, k8sattributes]
          exporters: [prometheusremotewrite]
  resources:
    limits:
      cpu: 100m
      memory: 200M
Log output
2025-01-15T03:30:59.129Z debug [email protected]/processor.go:141 evaluating pod identifier {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics", "value": [{"Source":{"From":"","Name":""},"Value":""},{"Source":{"From":"","Name":""},"Value":""},{"Source":{"From":"","Name":""},"Value":""},{"Source":{"From":"","Name":""},"Value":""}]}
2025-01-15T03:30:59.223Z error internal/queue_sender.go:92 Exporting failed. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite", "error": "Permanent error: Permanent error: Permanent error: remote write returned HTTP status 400 Bad Request; err = %!w(<nil>): unsupported compression: snappy. Only \"gzip\" or no compression supported\n", "dropped_items": 29}
go.opentelemetry.io/collector/exporter/exporterhelper/internal.NewQueueSender.func1
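One observation, offered as an assumption based on the error text: the rejected request was snappy-compressed regardless of the compression setting (remote-write payloads are snappy-compressed protobuf per the Remote Write spec), and the "Only \"gzip\" or no compression supported" wording looks like a response from Prometheus's OTLP ingestion handler rather than its remote-write receiver. If so, the configured path /api/v1/otlp/v1/metrics is the OTLP endpoint, not the remote-write endpoint. A sketch of the endpoint remote write would normally target, assuming Prometheus is started with --web.enable-remote-write-receiver:

exporters:
  prometheusremotewrite:
    # Assumption: standard remote-write receiver path, not the OTLP path
    endpoint: "http://192.168.255.155:9090/api/v1/write"

Alternatively, if the OTLP ingestion path is the intended target, the otlphttp exporter (which does honor gzip compression) would be the matching exporter for that endpoint.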
Additional context
No response