
Skip lifecycle index creation when disk space is insufficient #2729

Open
delthas wants to merge 1 commit into development/9.4 from
improvement/BB-753/skip-index-creation-disk-full

Conversation

@delthas
Contributor

@delthas delthas commented Apr 1, 2026

Summary

  • Before attempting to create lifecycle v2 indexes, check available disk space and estimate the index creation cost.
  • If fsFreeSize < 3 * _id_ index size, skip index creation and fall back to v1 listing.
  • On error checking disk space, also fall back to v1 (consistent with existing error handling in _indexesGetOrCreate).
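
The guard described above can be sketched as follows (hypothetical helper name; in the PR the logic is inline in _indexesGetOrCreate, with the sizes coming from dbStats and collStats):

```javascript
// Sketch of the disk space guard (hypothetical helper name; the real
// logic is inline in _indexesGetOrCreate). Sizes are in bytes.
function shouldSkipIndexCreation(fsFreeSize, idIndexSize) {
    // Unknown sizes: be conservative and fall back to v1 listing,
    // consistent with the other error paths in the method.
    if (!fsFreeSize || !idIndexSize) {
        return true;
    }
    // Skip index creation when free space is below 3x the _id_ index size.
    return fsFreeSize < 3 * idIndexSize;
}
```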

Context

When a large bucket fills the MongoDB PV, operators need lifecycle to empty it. The conductor tries to create v2 indexes for the bucket, which either fails (ENOSPC) or — worse — crashes the MongoDB shard. We confirmed this on a test cluster: attempting index creation with 50MB free on a 10GB PV caused "Connection closed by peer" and the shard went into a crash loop (560+ restarts). Removing the filler files was required to recover.

This change adds a proactive disk space check using two MongoDB metadata calls run sequentially (~11ms total, no locks):

  • getDiskUsage() (dbStats) — filesystem free space
  • getCollectionStats() (collStats) — per-collection index sizes
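
A minimal sketch of how the two calls combine, with the MongoDB calls stubbed using the test-cluster numbers above (the actual code goes through Arsenal's getDiskUsage and getCollectionStats via async.series):

```javascript
// Stubs standing in for Arsenal's getDiskUsage (dbStats) and
// getCollectionStats (collStats); the values are illustrative, in bytes.
function getDiskUsage(cb) {
    cb(null, { fsFreeSize: 50 * 1024 * 1024 });
}
function getCollectionStats(cb) {
    cb(null, { indexSizes: { _id_: 101 * 1024 * 1024 } });
}

// Run the two calls back-to-back (what async.series does here), so no
// extra MongoDB connection/session is opened, then apply the threshold.
function checkDiskSpaceForIndexes(cb) {
    getDiskUsage((err, disk) => {
        if (err) {
            return cb(err);
        }
        return getCollectionStats((err2, stats) => {
            if (err2) {
                return cb(err2);
            }
            const idIndexSize = stats.indexSizes._id_;
            return cb(null, { skip: disk.fsFreeSize < 3 * idIndexSize });
        });
    });
}
```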

Why 3 * _id_ index size?

After running compact on a test collection with 1.37M objects, all three indexes (_id_, both lifecycle) are the same size (~60MB each). However, the _id_ index is typically bloated from incremental inserts (101MB before compact vs 58MB after). Since each lifecycle index is freshly built at its compact size, 3 * _id_ provides a safe margin:

  • When _id_ is bloated (101MB): threshold = 303MB, actual cost ~120MB — conservative, safe
  • When _id_ is compact (58MB): threshold = 174MB, actual cost ~120MB — still safe
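
The margin arithmetic above, made explicit (numbers in MB, taken from the test-collection measurements):

```javascript
// Threshold vs. actual creation cost, in MB, from the measurements above.
const threshold = idIndexSizeMB => 3 * idIndexSizeMB;
const actualCostMB = 2 * 60; // two freshly built lifecycle indexes, ~60MB each

const bloated = threshold(101); // _id_ bloated by incremental inserts
const compact = threshold(58);  // _id_ after compact
```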

Design decisions

  • Calls run sequentially (async.series) to avoid creating extra MongoDB connections/sessions. Latency is not critical here.
  • Check happens per-bucket in _indexesGetOrCreate, only for buckets that actually need index creation (indexes don't exist, auto-create enabled, not at concurrent limit). Most buckets already have indexes and return early.
  • Falls back to v1 on any error — consistent with every other error path in this method.
  • Depends on Arsenal PR scality/Arsenal#2605 ("Fix getDiskUsage and add getCollectionStats") for the fixed getDiskUsage and new getCollectionStats methods.

@bert-e
Contributor

bert-e commented Apr 1, 2026

Hello delthas,

My role is to assist you with the merge of this
pull request. Please type @bert-e help to get information
on this process, or consult the user documentation.

Available options
name description privileged authored
/after_pull_request Wait for the given pull request id to be merged before continuing with the current one.
/bypass_author_approval Bypass the pull request author's approval
/bypass_build_status Bypass the build and test status
/bypass_commit_size Bypass the check on the size of the changeset TBA
/bypass_incompatible_branch Bypass the check on the source branch prefix
/bypass_jira_check Bypass the Jira issue check
/bypass_peer_approval Bypass the pull request peers' approval
/bypass_leader_approval Bypass the pull request leaders' approval
/approve Instruct Bert-E that the author has approved the pull request. ✍️
/create_pull_requests Allow the creation of integration pull requests.
/create_integration_branches Allow the creation of integration branches.
/no_octopus Prevent Wall-E from doing any octopus merge and use multiple consecutive merge instead
/unanimity Change review acceptance criteria from one reviewer at least to all reviewers
/wait Instruct Bert-E not to run until further notice.
Available commands
name description privileged
/help Print Bert-E's manual in the pull request.
/status Print Bert-E's current status in the pull request TBA
/clear Remove all comments from Bert-E from the history TBA
/retry Re-start a fresh build TBA
/build Re-start a fresh build TBA
/force_reset Delete integration branches & pull requests, and restart merge process from the beginning.
/reset Try to remove integration branches unless there are commits on them which do not appear on the source branch.

Status report is not available.

@delthas delthas changed the base branch from development/9.1 to development/9.3 April 1, 2026 22:35
@scality scality deleted a comment from bert-e Apr 1, 2026
@codecov

codecov bot commented Apr 1, 2026

Codecov Report

❌ Patch coverage is 94.44444% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 74.74%. Comparing base (79e1ace) to head (d6e8a29).
⚠️ Report is 1 commit behind head on development/9.4.

Files with missing lines Patch % Lines
...tensions/lifecycle/conductor/LifecycleConductor.js 94.44% 1 Missing ⚠️
Additional details and impacted files

Impacted file tree graph

Files with missing lines Coverage Δ
...tensions/lifecycle/conductor/LifecycleConductor.js 84.16% <94.44%> (+0.45%) ⬆️

... and 5 files with indirect coverage changes

Components Coverage Δ
Bucket Notification 80.37% <ø> (ø)
Core Library 81.21% <ø> (+0.66%) ⬆️
Ingestion 70.53% <ø> (-0.62%) ⬇️
Lifecycle 79.10% <94.44%> (+0.09%) ⬆️
Oplog Populator 85.83% <ø> (ø)
Replication 59.61% <ø> (ø)
Bucket Scanner 85.76% <ø> (ø)
@@                 Coverage Diff                 @@
##           development/9.4    #2729      +/-   ##
===================================================
+ Coverage            74.50%   74.74%   +0.24%     
===================================================
  Files                  200      200              
  Lines                13610    13623      +13     
===================================================
+ Hits                 10140    10183      +43     
+ Misses                3460     3430      -30     
  Partials                10       10              
Flag Coverage Δ
api:retry 9.13% <0.00%> (-0.01%) ⬇️
api:routes 8.95% <0.00%> (-0.01%) ⬇️
bucket-scanner 85.76% <ø> (ø)
ft_test:queuepopulator 10.90% <0.00%> (+1.85%) ⬆️
ingestion 12.48% <0.00%> (-0.07%) ⬇️
lib 7.61% <0.00%> (+<0.01%) ⬆️
lifecycle 18.72% <0.00%> (-0.13%) ⬇️
notification 1.02% <0.00%> (-0.01%) ⬇️
oplogPopulator 0.14% <0.00%> (-0.01%) ⬇️
replication 18.46% <0.00%> (-0.02%) ⬇️
unit 51.16% <94.44%> (+0.04%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.


@scality scality deleted a comment from bert-e Apr 1, 2026
@bert-e
Contributor

bert-e commented Apr 1, 2026

Waiting for approval

The following approvals are needed before I can proceed with the merge:

  • the author

  • 2 peers

@claude

claude bot commented Apr 1, 2026

  • Missing test coverage for the two new code paths (disk usage error fallback and insufficient disk space fallback). The existing data-driven test structure makes adding these cases straightforward.

    Review by Claude Code

@claude

claude bot commented Apr 2, 2026

  • Metric label getDiskUsageFailed (line 298) is misleading: async.parallel aggregates errors from both getDiskUsage and getCollectionStats, so the metric could fire for either failure. Consider renaming to diskSpaceCheckFailed.

Review by Claude Code

@delthas delthas force-pushed the improvement/BB-753/skip-index-creation-disk-full branch from 8f0cf5a to e3b661b on April 2, 2026 08:02
@delthas delthas changed the title BB-753: Skip lifecycle index creation when disk space is insufficient Skip lifecycle index creation when disk space is insufficient Apr 2, 2026
@claude

claude bot commented Apr 2, 2026

LGTM

The disk space check is well-integrated into the existing guard chain in _indexesGetOrCreate. Error handling is consistent with surrounding code (falls back to v1). The async.parallel usage for the two MongoDB metadata calls is appropriate. Tests cover both the error path and the insufficient-space path.

Review by Claude Code

@bert-e
Contributor

bert-e commented Apr 7, 2026

Jira issue not found

The Jira issue BB-753 was not found.

@delthas delthas force-pushed the improvement/BB-753/skip-index-creation-disk-full branch from e3b661b to a4deb09 on April 7, 2026 13:05
@delthas delthas changed the base branch from development/9.3 to development/9.4 April 7, 2026 13:05
@claude

claude bot commented Apr 7, 2026

LGTM

Review by Claude Code

@delthas delthas requested review from a team, SylvainSenechal and benzekrimaha April 7, 2026 13:12
@scality scality deleted a comment from bert-e Apr 7, 2026
@bert-e
Contributor

bert-e commented Apr 7, 2026

Waiting for approval

The following approvals are needed before I can proceed with the merge:

  • the author

  • 2 peers

@francoisferrand
Contributor

should scality/Arsenal#2605 be bumped here first?
(will need to land before this actually works... and until then, it would probably crash...)

Contributor

@SylvainSenechal SylvainSenechal left a comment


Did this error ever happen in production? What's the context that led to this work? I don't see the linked ticket in the PR.

I'm a bit surprised, as you say:
"The id index size is used as an estimate for the lifecycle index cost. On a test cluster with 1M objects: id = 76MB, each lifecycle index ≈ 60MB — so 2 * id is a conservative estimate for both indexes combined."

It means a bucket with 100 million objects would only take ~7GB, which is... fine? Or maybe I misunderstood something and we are talking about RAM instead of disk storage. Maybe the bigger problem is not the storage, but simply that adding indexes on millions of objects hammers the DB for tens of minutes, which also affects other API calls.

@delthas delthas force-pushed the improvement/BB-753/skip-index-creation-disk-full branch from a4deb09 to 6b69c2c on April 8, 2026 15:38
@claude

claude bot commented Apr 8, 2026

  • When indexSizes is missing or _id_ is absent, idIndexSize defaults to 0, making fsFreeSize < 3 * 0 always false — the disk space guard is silently bypassed. Consider falling back to v1 when the index size is unknown, consistent with the other error paths in this method.
    - PR description says the threshold is 2 * _id_ but the code uses 3 * _id_. Update the description to match the code.

    Review by Claude Code
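
The fallback this review suggests could look like the sketch below (hypothetical helper; the point is to distinguish "size unknown" from 0, so the fsFreeSize < 3 * 0 comparison never silently passes):

```javascript
// Hypothetical helper: return null when the _id_ index size cannot be
// determined from collStats, so the caller can fall back to v1 listing
// instead of comparing free space against 3 * 0.
function idIndexSizeOrNull(collStats) {
    if (!collStats || !collStats.indexSizes
        || typeof collStats.indexSizes._id_ !== 'number') {
        return null;
    }
    return collStats.indexSizes._id_;
}
```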

@delthas delthas force-pushed the improvement/BB-753/skip-index-creation-disk-full branch from 6b69c2c to da882c7 on April 8, 2026 15:42
@claude

claude bot commented Apr 8, 2026

  • async.series should be async.parallel — the two MongoDB calls (getDiskUsage, getCollectionStats) are independent and the PR description states they should run in parallel
    - Change async.series to async.parallel on line 288
    - PR description says threshold is 2 * _id_ but code uses 3 * _id_ — description appears stale
    - Update the PR description to match the code

    Review by Claude Code
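
For context on the series-vs-parallel question raised here, a hand-rolled stand-in for async.series shows the behavior the PR intentionally keeps (the real code uses the async library): the second call only starts after the first completes, so a single MongoDB connection/session suffices.

```javascript
// Minimal hand-rolled equivalent of async.series for callback-style
// tasks: runs them strictly one after another, collecting results.
function series(tasks, done) {
    const results = [];
    let i = 0;
    function next(err, res) {
        if (err) {
            return done(err);
        }
        if (i > 0) {
            results.push(res); // result of the task that just finished
        }
        if (i === tasks.length) {
            return done(null, results);
        }
        return tasks[i++](next);
    }
    next(null);
}
```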

Before attempting to create lifecycle v2 indexes, check available
disk space via dbStats and estimate the index creation cost from
the collection's _id_ index size. If fsFreeSize < 2 * idIndexSize,
skip index creation and fall back to v1 listing.

This prevents the conductor from repeatedly attempting index
creation on a full disk, and avoids further pressuring MongoDB
storage when the volume is already constrained.

Issue: BB-753
@delthas delthas force-pushed the improvement/BB-753/skip-index-creation-disk-full branch from da882c7 to d6e8a29 on April 8, 2026 15:45
@claude

claude bot commented Apr 8, 2026

LGTM

Review by Claude Code

@delthas
Contributor Author

delthas commented Apr 8, 2026

Did this error ever happen in production? What's the context that led to this work? I don't see the linked ticket in the PR.

I'm a bit surprised, as you say: "The id index size is used as an estimate for the lifecycle index cost. On a test cluster with 1M objects: id = 76MB, each lifecycle index ≈ 60MB — so 2 * id is a conservative estimate for both indexes combined."

It means a bucket with 100 million objects would only take ~7GB, which is... fine? Or maybe I misunderstood something and we are talking about RAM instead of disk storage. Maybe the bigger problem is not the storage, but simply that adding indexes on millions of objects hammers the DB for tens of minutes, which also affects other API calls.

The use case is for when your disk is (almost) full and you want to create a workflow to empty the bucket, but you can't because there isn't enough space to create the workflow index.
