
feat: add DruidNode deploymentGroup field to support R/B deployments#19413

Open
jtuglu1 wants to merge 2 commits into apache:master from jtuglu1:support-red-black-style-deployment-groups-in-druid

Conversation

@jtuglu1
Contributor

@jtuglu1 jtuglu1 commented May 5, 2026

Description

Druid Deployment Overview

Currently, deployment in Druid is geared toward "rolling" deployments, which, while potentially cheaper and faster, are not the safest deployment mechanism due to the lack of isolation during new-cluster bring-up.

A red/black (a.k.a. blue/green) deployment is better suited for cases where you want to bring another Druid cluster up in isolation from the existing one (but in the same ZK/K8s discoverability namespace) in order to perform safety/performance checks before cutting over to the new deployment. The Overlord already supports a similar concept of worker "versioning": it will only schedule peons on MMs/Indexers that are running the version it itself is configured with (or higher), allowing the cluster to eventually drain the ASGs running older Druid versions.

However, this functionality only gets us part of the way toward what is effectively a zero-downtime (both query and ingest) deployment. To achieve a fully isolated deployment environment (with the exception of the master nodes: {coordinator, overlord}) where we can mirror queries, observe state, etc., we also need to support version-based routing of queries.

Requirements

To do this, we need two things:

  1. Data Isolation

Users cannot be impacted by data loading/unavailability issues when a node is rolled. This is especially pertinent in cases where there is no replica for a segment in a given tier, meaning rolling an instance requires either cloning the historical first or taking downtime. This applies to both historical and realtime segments: we should avoid any and all data availability/freshness problems during deployment.

681cbde provided support for tier aliases (so duplicate historical tier deployments can be brought up transparently to the user/operator).
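As a rough sketch of what such a coordinator dynamic config could look like (the actual field name introduced in 681cbde may differ; `tierAliases` here is purely illustrative), mapping a new cluster's tier onto an existing tier name so both load the same segments:

```json
{
  "tierAliases": {
    "hot_black": "hot"
  }
}
```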

  2. Query Isolation

Since Druid queries are generally read-only, isolating the new cluster from user traffic until it is deemed "healthy" is critical to ensure no regressions are deployed. This is also particularly helpful for performance/load testing. This PR provides the query routing support to route user queries strictly to old nodes (router/broker/historical/peon), and to route any traffic sent to new nodes to the new router/broker/historical/peon.

NB: we allow queries from both versions to hit old and new peons to allow for convenient roll-back and easier data availability properties when forcing task-group hand-off. This behavior can be toggled via druid.broker.segment.strictRealtimeDeploymentGroupFilter: when set to true, realtime servers with non-matching deployment groups are excluded from query planning.
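As a configuration sketch, assuming a per-node deployment-group property whose exact name may differ from this illustration (only the strict-filter flag below appears in this PR's description):

```properties
# Hypothetical: tag every node in the new cluster with its deployment group
# (the property name shown here is illustrative)
druid.deploymentGroup=black

# From this PR: when true, the broker excludes realtime servers whose
# deployment group does not match from query planning
druid.broker.segment.strictRealtimeDeploymentGroupFilter=true
```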

Deployment Steps

The combination of these two changes supports the following deployment process:

  1. Deploy new Druid ASGs: router, broker, historical, coordinator, overlord, MM, etc.
  2. Configure the Coordinator dynamic config to set up tier aliases for new/existing Druid historical tiers (so the same set of segments is loaded in parallel onto equivalent tiers across the two versions).
  3. Wait for segments to load on the new Druid ASGs.
  4. Switch the coordinator leader to the new-version coordinator.
  5. Optionally mirror traffic to the new ASGs (a new router/broker will be able to query only historicals of its same version; peons are by default queryable by all versions).
  6. Switch the leader overlord to the newer version (using generous timeouts/retries to avoid ingest task RPC failures).
  7. Finally, cut over to the new routers/brokers/historicals.

This deployment method combines the traditional red/black deployment with Druid's rolling deployment, providing zero ingest downtime and zero query downtime for users (in terms of both availability and data freshness). It also provides ample time to experiment with and canary changes without impacting user traffic.

Release note


This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • a release note entry in the PR description.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added or updated version, license, or notice information in licenses.yaml
  • added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.

@jtuglu1 jtuglu1 requested a review from clintropolis May 5, 2026 22:28
@jtuglu1 jtuglu1 force-pushed the support-red-black-style-deployment-groups-in-druid branch from 95da3b9 to b016aa3 Compare May 5, 2026 22:44
@jtuglu1 jtuglu1 force-pushed the support-red-black-style-deployment-groups-in-druid branch from b016aa3 to 7d9c627 Compare May 5, 2026 23:51
@jtuglu1 jtuglu1 requested review from abhishekrb19 and gianm May 6, 2026 00:31
@jtuglu1 jtuglu1 marked this pull request as ready for review May 6, 2026 18:44
Member

@FrankChen021 FrankChen021 left a comment


| Severity | Findings |
| --- | --- |
| P0 | 0 |
| P1 | 0 |
| P2 | 1 |
| P3 | 0 |
| Total | 1 |

This is an automated review by Codex GPT-5

```java
}

return new Pair<>(brokerServiceName, nodesHolder.pick());
Server picked = nodesHolder.pick();
```

[P2] Deployment-group filtering can be bypassed by router backup routing

When the filter removes all brokers for a service, this returns a Pair with a null server, but QueryHostFinder.findServerInner then falls back to serverBackup for the same/default service. After an acceptable broker has been cached and later re-announces with a non-matching deploymentGroup, or is removed while the same host remains reachable, queries can still be routed to that cached broker despite the configured acceptableDeploymentGroups. This breaks the intended fail-closed red/black isolation; the filtered/no-match case should avoid backup fallback or validate/clear backup entries against the deployment-group filter.
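To illustrate the fail-closed behavior this finding calls for, here is a minimal, self-contained sketch (class, record, and method names are hypothetical, not actual Druid code): when the deployment-group filter leaves no eligible broker, selection returns null instead of falling back to any cached backup server.

```java
import java.util.List;
import java.util.Set;

public class DeploymentGroupFilterSketch
{
  // Hypothetical stand-in for a discovered broker; not a real Druid class.
  record Server(String host, String deploymentGroup) {}

  // Fail-closed pick: only brokers whose deploymentGroup is acceptable are
  // candidates. When none match, return null rather than consulting any
  // backup/cached server list, so isolation cannot be silently bypassed.
  static Server pick(List<Server> servers, Set<String> acceptableGroups)
  {
    return servers.stream()
                  .filter(s -> acceptableGroups.contains(s.deploymentGroup()))
                  .findFirst()
                  .orElse(null);
  }

  public static void main(String[] args)
  {
    List<Server> servers = List.of(
        new Server("broker-red-1", "red"),
        new Server("broker-black-1", "black")
    );
    // Matching group: routes to the black broker.
    System.out.println(pick(servers, Set.of("black")).host());
    // No matching group: fail closed (null), never a stale backup.
    System.out.println(pick(servers, Set.of("green")));
  }
}
```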

Comment on lines +256 to +266
```java
return new DataSegment(
    TestDataSource.WIKI,
    Intervals.of("2024-01-01/2024-01-02"),
    "1",
    Collections.emptyMap(),
    Collections.emptyList(),
    Collections.emptyList(),
    NoneShardSpec.instance(),
    0,
    100
);
```


3 participants