Component(s)
processor/deltatocumulative
What happened?
Description
v0.118.0 caps the number of exponential histogram buckets produced by deltatocumulative at 160 by downscaling when the number of buckets grows too large (#33277).
However, the new scale is calculated independently for the positive and negative buckets, and the greater of the two results is used instead of the lesser, so only one of the two bucket ranges is actually capped while the other can still exceed the limit.
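For context, each one-step reduction in scale merges pairs of adjacent buckets, roughly halving the index span, so capping the bucket count amounts to picking a sufficiently small scale. A minimal sketch of how a capped scale can be derived for a single bucket range, using a hypothetical helper name (not the processor's actual code):

package sketch

import "math"

// limitScale is a hypothetical helper, not the processor's actual code. It
// returns the largest scale at which a range spanning `buckets` indices fits
// under `maxBuckets`: each one-step downscale merges adjacent buckets, so the
// span shrinks by roughly a factor of two per step.
func limitScale(scale int32, buckets, maxBuckets int) int32 {
	if buckets <= maxBuckets {
		return scale // already within the limit, nothing to do
	}
	change := int32(math.Ceil(math.Log2(float64(buckets) / float64(maxBuckets))))
	return scale - change
}

The bug is that a value like this is computed separately for the positive and negative ranges, and then the larger of the two is kept.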
Steps to Reproduce
Send two exponential histogram data points with the same scale but with positive-bucket offsets more than 160 apart. Do not include any negative buckets.
For example (some fields omitted for brevity),
"dataPoints": [
{
"count": 1,
"scale": 20,
"positive": {
"offset": 0,
"bucket_counts": [1]
}
},
{
"count": 1,
"scale": 20,
"positive": {
"offset": 1000,
"bucket_counts": [1]
}
}
]
Expected Result
The exponential histogram is downscaled to scale 17 to keep the number of positive buckets under the 160-bucket limit.
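For reference, offsets 0 and 1000 at scale 20 span 1001 bucket indices, and each one-step downscale roughly halves that span, so three steps are needed to get under the cap. A quick check of that arithmetic (my own sketch, not the processor's code):

package main

import (
	"fmt"
	"math"
)

func main() {
	// Offsets 0 and 1000 at scale 20 span 1001 bucket indices; the cap is 160.
	span, limit := 1001.0, 160.0
	change := int(math.Ceil(math.Log2(span / limit))) // ceil(log2(~6.26)) = 3
	fmt.Println(20 - change)                          // 17; ~126 buckets remain after downscaling
}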
Actual Result
The exponential histogram is not scaled and contains 1001 positive buckets.
Collector version
v0.118.0
Environment information
Environment
OS: macOS Sequoia 15.2
OpenTelemetry Collector configuration
receivers:
  otlp:
    protocols:
      http:
processors:
  deltatocumulative:
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [deltatocumulative]
      exporters: [debug]
Log output
2025-01-22T16:28:58.968+0200 info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 1, "data points": 2}
2025-01-22T16:28:58.970+0200 info ResourceMetrics #0
Resource SchemaURL:
ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope
Metric #0
Descriptor:
-> Name: proof-of-concept
-> Description:
-> Unit: 1
-> DataType: ExponentialHistogram
-> AggregationTemporality: Cumulative
ExponentialHistogramDataPoints #0
StartTimestamp: 2025-01-22 14:28:57 +0000 UTC
Timestamp: 2025-01-22 14:28:57 +0000 UTC
Count: 1
Bucket (1.000000, 1.000001], Count: 1
ExponentialHistogramDataPoints #1
StartTimestamp: 2025-01-22 14:28:57 +0000 UTC
Timestamp: 2025-01-22 14:28:58 +0000 UTC
Count: 2
Bucket (1.000000, 1.000001], Count: 1
Bucket (1.000001, 1.000001], Count: 0
... [997 lines removed with Count: 0]
Bucket (1.000661, 1.000661], Count: 0
Bucket (1.000661, 1.000662], Count: 1
Additional context
I believe we should be taking the lesser of the limited scales for the positive and negative buckets; see #36874 (comment).
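A minimal sketch of what I mean, with hypothetical names that do not match the processor's actual code: once the positive and negative ranges have each been limited, the data point should use the smaller of the two resulting scales, because both ranges have to fit under the cap at whatever scale is finally chosen.

package sketch

// combinedScale takes the scales obtained by limiting the positive and
// negative bucket ranges independently and returns the one the data point
// should use. Picking the smaller (coarser) scale caps both ranges; picking
// the greater, as the current behavior does, leaves one range uncapped.
func combinedScale(posLimited, negLimited int32) int32 {
	if posLimited < negLimited {
		return posLimited
	}
	return negLimited
}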