All metrics can have labels, allowing grouping of related time series.
Labels are an extremely powerful feature, but one that must be used with care.
Refer to the best practices on [naming](https://prometheus.io/docs/practices/naming/) and
[labels](https://prometheus.io/docs/practices/instrumentation/#use-labels).
Most importantly, avoid labels that can have a large number of possible values (high
cardinality). For example, an HTTP Status Code is a good label. A User ID is **not**.

Labels are specified optionally when updating metrics, as a hash of `label_name => value`.
Refer to [the Prometheus documentation](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels)
as to what's a valid `label_name`.
In order for a metric to accept labels, their names must be specified when first initializing
the metric. Then, when the metric is updated, all the specified labels must be present.
Example:
You can also "pre-set" some of these label values, if they'll always be the same, so you don't
need to specify them every time:
```ruby
http_requests_total = Counter.new(:http_requests_total,
                                  docstring: '...',
                                  labels: [:service, :status_code],
                                  preset_labels: { service: "my_service" })
```
You can also use `with_labels` to get a version of an existing metric
with a subset (or full set) of labels set, so that you can increment / observe the metric
without having to specify the labels for every call.

Moreover, if all the labels the metric can take have been pre-set, validation of the labels
is done on the call to `with_labels`, and then skipped for each observation, which can
lead to performance improvements. If you are incrementing a counter in a fast loop, you
definitely want to be doing this.
Examples:

```ruby
# in the metric definition:
records_processed_total = registry.counter(:records_processed_total,
                                           docstring: '...',
                                           labels: [:service, :component],
                                           preset_labels: { service: "my_service" })

# in the code that uses the metric:
class MyComponent
  def metric
    @metric ||= records_processed_total.with_labels(component: "my_component")
  end

  def process
    records.each do |record|
      # process the record
      metric.increment
    end
  end
end
```
Note that some label names are reserved by the client; using them in a
metric definition will result in an error:

- `:job`
- `:instance`
- `:pid` (unless you define a new `ProcessIdentity`)
## Data Stores
The data for all the metrics (the internal counters associated with each labelset)
is stored in a global Data Store object, rather than in the metric objects themselves.
(This "storage" is ephemeral, generally in-memory; it's not "long-term storage".)
Some applications (most notably, pre-fork servers) require a shared store between all the
processes, to be able to report coherent total numbers. At the same time, other applications
may not have this requirement but be very sensitive to performance, and would prefer instead
a simpler, faster store.

By having a standardized and simple interface that metrics use to access this store,
we abstract away the details of storing the data from the specific needs of each metric.
This allows us to then simply swap around the stores based on the needs of different
applications, with no changes to the rest of the client.

The client provides 3 built-in stores, but if none of these is ideal for your
requirements, you can easily make your own store and use that instead. More on this below.
### Configuring which store to use.
NOTE: You **must** make sure to set the `data_store` before initializing any metrics.
If using Rails, you probably want to set up your Data Store on `config/application.rb`,
or `config/environments/*`, both of which run before `config/initializers/*`.

Also note that `config.data_store` is set to an *instance* of a `DataStore`, not to the
class. This is so that the stores can receive parameters. Most of the built-in stores
don't require any, but `DirectFileStore` does, for example.
Check the documentation of each store for more details.

There are 3 built-in stores, with different trade-offs:
- **Synchronized**: Default store. Thread safe, but not suitable for multi-process
  scenarios (e.g. pre-fork servers, like Unicorn). Stores data in Hashes, with all accesses
  protected by Mutexes.

- **SingleThreaded**: Fastest store, but only suitable for single-threaded scenarios.
  This store does not make any effort to synchronize access to its internal hashes, so
  it's absolutely not thread safe.

- **DirectFileStore**: Stores data in binary files, one file per process and per metric.
  This is generally the recommended store to use with pre-fork servers and other
  "multi-process" scenarios. There are some important caveats to using this store, so
  please read the section below.
```ruby
# process_identifier and generate_identity are optional
DirectFileStore.new(dir: '/tmp/dfs',
                    process_identifier: :process_name,
                    generate_identity: -> { $0 })
```
### `DirectFileStore` caveats and things to keep in mind
Each metric gets a file for each process, and manages its contents by storing keys and
binary floats next to them, and updating the offsets of those Floats directly. When
exporting metrics, it will find all the files that apply to each metric, read them,
and aggregate them.
**Aggregation of metrics**: Since there will be several files per metric (one per process),
these need to be aggregated to present a coherent view to Prometheus. Depending on your
use case, you may need to control how this works. When using this store,
each Metric allows you to specify an `:aggregation` setting, defining how
to aggregate the multiple possible values we can get for each labelset. By default,
Counters, Histograms and Summaries are `SUM`med, and Gauges report all their values (one
for each process), tagged with a label identifying each process (`pid` by default). You can
also select `SUM`, `MAX`, `MIN`, or `MOST_RECENT` for your gauges, depending on your use case.
Please note that the `MOST_RECENT` aggregation only works for gauges, and it does not
allow the use of `increment` / `decrement`; you can only use `set`.
**Process Identity**: When defining the `DirectFileStore`, you may change how processes are
identified. When the `process_identifier` and `generate_identity` arguments are specified,
the default `pid` will no longer be applied. This can be done to capture the process
name (`$0`), the Puma worker's index, or other identifying attributes. `generate_identity`
is expected to respond to `call`.
**Memory Usage**: When scraped by Prometheus, this store will read all these files, get all
the values and aggregate them. We have noticed this can have a noticeable effect on memory
usage for your app. We recommend you test this in a realistic usage scenario to make sure
you won't hit any memory limits your app may have.
**Resetting your metrics on each run**: You should also make sure that the directory where
you store your metric files (specified when initializing the `DirectFileStore`) is emptied
when your app starts. Otherwise, each app run will continue exporting the metrics from the
previous run.
If you have this issue, one way to address it is to run code similar to the following as part
of your initialization:
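A sketch of such cleanup (the directory path is illustrative; it must match the `dir:` you passed to `DirectFileStore`):

```ruby
# Delete metric files left over from a previous run, before the server forks
Dir['/tmp/prometheus/*.bin'].each do |file_path|
  File.unlink(file_path)
end
```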
If you are running in pre-fork servers (such as Unicorn or Puma with multiple processes),
make sure you do this **before** the server forks. Otherwise, each child process may delete
files created by other processes on *this* run, instead of deleting old files.
**Large numbers of files**: Because there is an individual file per metric and per process
(which is done to optimize for observation performance), you may end up with a large number
of files. We don't currently have a solution for this problem, but we're working on it.
**Performance**: Even though this store saves data on disk, it's still much faster than
would probably be expected, because the files are never actually `fsync`ed, so the store
never blocks while waiting for disk. The kernel's page cache is incredibly efficient in
this regard. If in doubt, check the benchmark scripts described in the documentation for
creating your own stores and run them in your particular runtime environment to make sure
this provides adequate performance.
If none of these stores is suitable for your requirements, you can easily make your own.

The interface and requirements of Stores are specified in detail in the `README.md`
in the `client/data_stores` directory. This thoroughly documents how to make your own
store.

There are also links there to non-built-in stores created by others that may be useful.
If you are in a multi-process environment (such as pre-fork servers like Unicorn), each
process will probably keep its own counters, which need to be aggregated when receiving
a Prometheus scrape, to report coherent total numbers.

For Counters, Histograms and quantile-less Summaries this is simply a matter of
summing the values of each process.

For Gauges, however, this may not be the right thing to do, depending on what they're
measuring. You might want to take the maximum or minimum value observed in any process,
rather than the sum of all of them. By default, we export each process's individual
value, with a `pid` label identifying each one.

If these defaults don't work for your use case, you should use the `store_settings`
parameter when registering the metric, to specify an `:aggregation` setting.
```ruby
free_disk_space = registry.gauge(:free_disk_space_bytes,
                                 docstring: '...',
                                 store_settings: { aggregation: :max })
```

NOTE: This will only work if the store you're using supports the `:aggregation` setting.
Of the built-in stores, only `DirectFileStore` does.
Also note that the `:aggregation` setting works for all metric types, not just for gauges.
It would be unusual to use it for anything other than gauges, but if your use case
requires it, the store will respect your aggregation wishes.
## Tests