
Deprecate the unused configuration DimensionsCacheSize #39646

Closed
@Frapschen

Description


Component(s)

connector/spanmetrics

Describe the issue you're reporting

DimensionsCacheSize sets the size limit of the metricKeyToDimensions cache:

metricKeyToDimensionsCache, err := cache.NewCache[metrics.Key, pcommon.Map](cfg.DimensionsCacheSize)

It is used to keep the attribute maps built from spans, with the aim of improving the collector's memory usage.
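To make the role of this cache concrete, here is a minimal sketch of the idea behind metricKeyToDimensionsCache: a size-bounded cache mapping a metric key to the attribute map built from a span. All names, types, and the eviction policy here are illustrative, not the connector's actual implementation.

```go
package main

import "fmt"

// dimensionsCache is a hypothetical size-bounded cache of built attribute
// maps, keyed by metric key. Oldest entries are evicted first.
type dimensionsCache struct {
	limit int
	data  map[string]map[string]string
	order []string // insertion order, evicted oldest-first
}

func newDimensionsCache(limit int) *dimensionsCache {
	return &dimensionsCache{limit: limit, data: map[string]map[string]string{}}
}

// getOrBuild returns the cached attributes for key, or builds and caches
// them, evicting the oldest entry when the size limit is reached.
func (c *dimensionsCache) getOrBuild(key string, build func() map[string]string) map[string]string {
	if attrs, ok := c.data[key]; ok {
		return attrs // reuse previously built attributes
	}
	if len(c.data) >= c.limit {
		oldest := c.order[0]
		c.order = c.order[1:]
		delete(c.data, oldest)
	}
	attrs := build()
	c.data[key] = attrs
	c.order = append(c.order, key)
	return attrs
}

func main() {
	cache := newDimensionsCache(2)
	cache.getOrBuild("a", func() map[string]string { return map[string]string{"svc": "a"} })
	cache.getOrBuild("b", func() map[string]string { return map[string]string{"svc": "b"} })
	cache.getOrBuild("c", func() map[string]string { return map[string]string{"svc": "c"} }) // evicts "a"
	fmt.Println(len(cache.data))
}
```

The intended benefit is that repeated spans with the same dimensions reuse one attribute map instead of rebuilding it each time.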

As the spanmetrics connector code has evolved, it now uses a resourceMetrics struct to maintain the intermediate state of metrics:

type resourceMetrics struct {
	histograms metrics.HistogramMetrics
	sums       metrics.SumMetrics
	events     metrics.SumMetrics
	attributes pcommon.Map
	// lastSeen captures when the last data points for this resource were recorded.
	lastSeen time.Time
}

Below is an example of the SumMetrics struct. Please note the signature of the GetOrCreate function.

GetOrCreate(key Key, attributes pcommon.Map, startTimestamp pcommon.Timestamp) *Sum

type Sum struct {
	attributes       pcommon.Map
	count            uint64
	exemplars        pmetric.ExemplarSlice
	maxExemplarCount *int
	startTimestamp   pcommon.Timestamp
	// isFirst is used to track if this datapoint is new to the Sum. This
	// is used to ensure that new Sum metrics begin with 0, and then are incremented
	// to the desired value. This avoids Prometheus throwing away the first
	// value in the series, due to the transition from null -> x.
	isFirst bool
}

func (s *Sum) Add(value uint64) {
	s.count += value
}

func NewSumMetrics(maxExemplarCount *int, cardinalityLimit int) SumMetrics {
	return SumMetrics{
		metrics:          make(map[Key]*Sum),
		maxExemplarCount: maxExemplarCount,
		cardinalityLimit: cardinalityLimit,
	}
}

type SumMetrics struct {
	metrics          map[Key]*Sum
	maxExemplarCount *int
	cardinalityLimit int
}

func (m *SumMetrics) IsCardinalityLimitReached() bool {
	return m.cardinalityLimit > 0 && len(m.metrics) >= m.cardinalityLimit
}

func (m *SumMetrics) GetOrCreate(key Key, attributes pcommon.Map, startTimestamp pcommon.Timestamp) *Sum {
	s, ok := m.metrics[key]
	if !ok {
		s = &Sum{
			attributes:       attributes,
			exemplars:        pmetric.NewExemplarSlice(),
			maxExemplarCount: m.maxExemplarCount,
			startTimestamp:   startTimestamp,
			isFirst:          true,
		}
		m.metrics[key] = s
	}
	return s
}
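The interplay between GetOrCreate and the cardinality limit can be shown with a simplified, self-contained rendering of the types above. The pcommon and pmetric types are replaced by plain Go types, and only the limit-gating behavior is reproduced; this is a sketch, not the connector's code.

```go
package main

import "fmt"

// sum and sumMetrics are simplified stand-ins for the connector's Sum and
// SumMetrics types, with pcommon.Map replaced by a plain map.
type sum struct {
	attributes map[string]string
	count      uint64
}

type sumMetrics struct {
	metrics          map[string]*sum
	cardinalityLimit int
}

func (m *sumMetrics) isCardinalityLimitReached() bool {
	return m.cardinalityLimit > 0 && len(m.metrics) >= m.cardinalityLimit
}

func (m *sumMetrics) getOrCreate(key string, attributes map[string]string) *sum {
	s, ok := m.metrics[key]
	if !ok {
		s = &sum{attributes: attributes}
		m.metrics[key] = s
	}
	return s
}

func main() {
	m := &sumMetrics{metrics: map[string]*sum{}, cardinalityLimit: 2}
	for _, key := range []string{"a", "b", "c"} {
		// The limit check gates creation of new series, bounding memory.
		if m.isCardinalityLimitReached() {
			fmt.Println("limit reached, dropping series", key)
			continue
		}
		m.getOrCreate(key, map[string]string{"series": key}).count++
	}
	fmt.Println(len(m.metrics))
}
```

Here the limit bounds the number of distinct series held in memory directly, which is the same resource the dimensions cache was meant to protect.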

I found that https://pkg.go.dev/go.opentelemetry.io/collector/pdata/pcommon#Map is also a struct, so the attributes argument is passed by value when GetOrCreate is invoked.

This means that the attributes field in the Sum struct has no relationship with the attributes value held in metricKeyToDimensionsCache.
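The value-semantics point can be demonstrated with a plain struct as a stand-in (pcommon.Map's internals are more involved, so this only illustrates the general rule): a struct argument is copied at the call site, so the stored value is distinct from the caller's variable.

```go
package main

import "fmt"

// attrs is a hypothetical stand-in for an attribute container; it is not
// pcommon.Map, just a plain struct used to show Go's copy-on-call behavior.
type attrs struct {
	service string
}

// store receives a copy of a, and storing it into the map copies it again;
// the caller's variable is never referenced after the call returns.
func store(m map[string]attrs, key string, a attrs) {
	m[key] = a
}

func main() {
	cached := attrs{service: "frontend"}
	stored := map[string]attrs{}
	store(stored, "k", cached)

	cached.service = "backend"        // mutating the original...
	fmt.Println(stored["k"].service)  // ...leaves the stored copy at "frontend"
}
```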

Based on the analysis above, I believe metricKeyToDimensionsCache no longer helps improve the collector's memory usage. We can use aggregation_cardinality_limit instead.
