
Added missing statsd mappings, updated mappings dimensions and metrics docs to reflect the emitted Coordinator stats #19392

Open
pirvtech wants to merge 2 commits into apache:master from RivianVW-tech:statsd-metrics-backlog

Conversation


@pirvtech commented Apr 30, 2026

Description

Metrics published by the various Druid services are handled by the configured emitter, which sends them to external observability systems. The StatsD emitter sends metrics to a StatsD system. It uses a default mapping file (packaged within Druid) to map Druid metric types to StatsD metric types. If no mapping is present for a metric, that metric is not emitted.
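For illustration, entries in the mapping file pair each Druid metric name with the dimensions to forward and the StatsD metric type. The metric names below are real Druid metrics, but this snippet is a sketch of the file format (following the entries quoted in the review below), not an excerpt from the actual packaged file:

```json
{
  "query/time" : { "dimensions" : ["dataSource", "type"], "type" : "timer" },
  "segment/assigned/count" : { "dimensions" : ["tier"], "type" : "count" },
  "jvm/mem/used" : { "dimensions" : ["memKind"], "type" : "gauge" }
}
```

Only the dimensions listed here are attached to the emitted StatsD metric; any metric absent from the file is silently dropped, which is the failure mode this PR addresses.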

A number of metrics are missing mappings, or have incomplete or incorrect dimensions specified in their mappings. This is likely because the mappings have not been kept up to date with additions and changes to the metrics published in the code.

This change updates these mappings for Coordinator metrics and fixes documentation where it deviates from what is in the code.

Release note


Fixes Coordinator stats not being published to the StatsD system due to missing emitter mappings

This PR has:

  • [x] been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • a release note entry in the PR description.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added or updated version, license, or notice information in licenses.yaml
  • added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.

Member

@FrankChen021 left a comment


| Severity | Findings |
| --- | --- |
| P0 | 0 |
| P1 | 0 |
| P2 | 3 |
| P3 | 0 |
| Total | 3 |

This is an automated review by Codex GPT-5

"segment/availableDeepStorageOnly/count" : { "dimensions" : ["dataSource"], "type" : "gauge" },
"segment/unneededEternityTombstone/count" : { "dimensions" : ["dataSource"], "type" : "gauge" },
"segment/clone/assigned/count" : { "dimensions" : ["dataSource"], "type" : "count" },
"segment/clone/dropped/count" : { "dimensions" : ["dataSource"], "type" : "count" },
Member


[P2] Clone-drop metric drops the emitted server dimension

segment/clone/dropped/count is emitted from CloneHistoricals.dropSegmentFromTargetServer with only the server dimension, but this new StatsD mapping asks for dataSource. DimensionConverter only forwards dimensions present on the event, so StatsD/DogStatsD will emit this newly mapped metric without either dimension and aggregate clone drops across all target servers. Map server, or emit dataSource as well if datasource-level reporting is intended.
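Per the reviewer's first suggestion, a corrected entry would carry the server dimension actually present on the event. This is a sketch of that option, not necessarily the committed fix:

```json
"segment/clone/dropped/count" : { "dimensions" : ["server"], "type" : "count" },
```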

Author


Fixed

"compact/createJobs/time" : { "dimensions" : [], "type" : "timer" },
"compact/createJobs/count" : { "dimensions" : [], "type" : "count" },
"compact/runScheduler/time" : { "dimensions" : [], "type" : "timer" },
"compactTask/cancelled/count" : { "dimensions" : [], "type" : "count" },
Member


[P2] Cancelled compaction tasks lose datasource labels

CompactSegments records compactTask/cancelled/count with RowKey.of(Dimension.DATASOURCE, dataSource), but the added StatsD mapping has an empty dimension list. This silently strips datasource from the emitted StatsD/DogStatsD metric, so cancellations for different datasources are collapsed into one series. Include dataSource in the mapping.
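A corrected entry, as the comment suggests, would retain the datasource label (a sketch, not necessarily the committed fix):

```json
"compactTask/cancelled/count" : { "dimensions" : ["dataSource"], "type" : "count" },
```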

Author


Fixed


"segment/balancer/compute/error" : { "dimensions" : ["tier", "dataSource", "description"], "type" : "count" },

"config/brokerSync/time" : { "dimensions" : [], "type" : "timer" },
Member


[P2] Broker sync metrics collapse per-broker failures

BaseDynamicConfigSyncer emits config/brokerSync/time and config/brokerSync/error with configType and server, and config/brokerSync/total/time with configType, but the new mappings for these metrics use no dimensions. StatsD will therefore aggregate sync latency and errors across config types and brokers, hiding which broker/config is failing. Add the emitted dimensions to the mappings.
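Based on the dimensions the comment says are emitted, corrected mappings might look like the following sketch (not necessarily the final committed entries):

```json
"config/brokerSync/time" : { "dimensions" : ["configType", "server"], "type" : "timer" },
"config/brokerSync/error" : { "dimensions" : ["configType", "server"], "type" : "count" },
"config/brokerSync/total/time" : { "dimensions" : ["configType"], "type" : "timer" },
```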

Author


Fixed

Member

@FrankChen021 left a comment


I reviewed the follow-up changes. All three prior statsd mapping threads appear resolved in the current code: clone-drop keeps the emitted server dimension, cancelled compaction tasks keep dataSource, and broker sync metrics include their emitted configType and server dimensions.


This is an automated review by Codex GPT-5

