feat(core): Add SpanBuffer implementation #19204
Merged
Lms24 merged 3 commits into lms/feat-span-first on Feb 9, 2026
Conversation
Contributor
Codecov Results 📊 Generated by Codecov Action
Contributor
size-limit report 📦
Contributor
node-overhead report 🧳 Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.
This was referenced Feb 6, 2026
Force-pushed from e96091e to a1dc134 (Compare)
Member
Author
cc @isaacs tagged you for a review since you worked on the telemetry buffer and might see something I missed in this simplified implementation
Base automatically changed from lms/feat-core-captureSpan to lms/feat-span-first on February 6, 2026 14:08
Force-pushed from a1dc134 to 471048d (Compare)
Cursor Bugbot has reviewed your changes and found 3 potential issues.
Lms24 added a commit that referenced this pull request on Feb 13, 2026
Lms24 added a commit that referenced this pull request on Feb 16, 2026
This PR adds a simple span buffer implementation to be used for buffering streamed spans.
Behaviour:
- buckets incoming spans by `traceId`, as we must not mix up spans of different traces in one envelope
- flushes the entire buffer every 5s by default
- flushes the specific trace bucket if the max span limit (1000) is reached. Relay accepts at most 1000 spans per envelope
- computes the DSC when flushing the first span of a trace. This is the latest point we can do it, as once we have flushed we have to freeze the DSC for Dynamic Sampling consistency
- debounces the flush interval whenever we flush
- flushes the entire buffer if `Sentry.flush()` is called
- shuts down the interval-based flushing when `Sentry.close()` is called
- [implicit] Client report generation for dropped envelopes is handled in the transport
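To make the behaviour above a bit more concrete, here is a rough sketch of how such a buffer could be structured. This is not the code added in this PR; the names (`BufferedSpan`, `SendSpans`, `SpanBufferSketch`) are made up for illustration only:

```ts
// Sketch only: illustrates bucketing by traceId, interval-based flushing,
// the per-trace max-span limit, and debouncing. Names are hypothetical.
type BufferedSpan = { traceId: string; [key: string]: unknown };
type SendSpans = (traceId: string, spans: BufferedSpan[]) => void;

class SpanBufferSketch {
  private _buckets = new Map<string, BufferedSpan[]>();
  private _timer: ReturnType<typeof setTimeout> | undefined;

  constructor(
    private _send: SendSpans,
    private _maxSpanLimit = 1000, // Relay accepts at most 1000 spans per envelope
    private _flushInterval = 5000, // flush the whole buffer every 5s by default
  ) {
    this._scheduleFlush();
  }

  /** Enqueues a span into its trace bucket; flushes the bucket once it hits the limit. */
  add(span: BufferedSpan): void {
    const bucket = this._buckets.get(span.traceId) ?? [];
    bucket.push(span);
    this._buckets.set(span.traceId, bucket);
    if (bucket.length >= this._maxSpanLimit) {
      this.flush(span.traceId);
    }
  }

  /** Flushes a single trace bucket (e.g. when a segment span ends in the browser). */
  flush(traceId: string): void {
    const bucket = this._buckets.get(traceId);
    if (bucket?.length) {
      this._buckets.delete(traceId);
      // In the real buffer, the DSC is computed and frozen when the first
      // span of a trace is flushed.
      this._send(traceId, bucket);
    }
    this._scheduleFlush(); // debounce: restart the interval whenever we flush
  }

  /** Flushes the entire buffer, one trace bucket at a time. */
  drain(): void {
    for (const traceId of Array.from(this._buckets.keys())) {
      this.flush(traceId);
    }
  }

  /** Stops interval-based flushing, e.g. when `Sentry.close()` is called. */
  close(): void {
    if (this._timer) {
      clearTimeout(this._timer);
    }
  }

  private _scheduleFlush(): void {
    if (this._timer) {
      clearTimeout(this._timer);
    }
    this._timer = setTimeout(() => this.drain(), this._flushInterval);
  }
}
```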
Methods:

- `add` accepts a new span to be enqueued into the buffer
- `drain` flushes the entire buffer
- `flush(traceId)` flushes a specific traceId bucket. This can be used by e.g. the browser span streaming implementation to flush out the trace of a segment span directly once it ends.
Options:

- `maxSpanLimit` - allows configuring a custom span limit with 0 < maxSpanLimit < 1000. Useful for testing, but we could also expose this to users if we see a need
- `flushInterval` - allows configuring a flush interval > 0
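A hypothetical usage example of the sketch above, just to illustrate the method and option surface described here; in the real SDK the buffer is wired into the client and transport rather than a plain callback:

```ts
// Hypothetical usage of the SpanBufferSketch from above; values are examples.
const buffer = new SpanBufferSketch(
  (traceId, spans) => {
    // In the SDK this would build a span envelope for the trace and hand it to
    // the transport (which also handles client reports for dropped envelopes).
    console.log(`sending ${spans.length} span(s) for trace ${traceId}`);
  },
  100, // maxSpanLimit: 0 < limit < 1000, e.g. lowered for tests
  2000, // flushInterval: any interval > 0, in ms
);

buffer.add({ traceId: 'abc', name: 'GET /users' });
buffer.add({ traceId: 'abc', name: 'db.query' });

buffer.flush('abc'); // flush just this trace, e.g. once its segment span ends
buffer.drain(); // flush everything, e.g. on Sentry.flush()
buffer.close(); // stop interval-based flushing, e.g. on Sentry.close()
```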
Limitations/edge cases:

- No maximum limit of concurrently buffered traces. I'd tend to accept this for now and see where this leads us in terms of memory pressure, but at the end of the day, the interval-based flushing, in combination with our promise buffer, _should_ avoid an ever-growing map of trace buckets. Happy to change this if reviewers have strong opinions or I'm missing something important!
- There's no priority-based scheduling relative to other telemetry items, just like with our other log and metric buffers.
- Since `Map` is [insertion order preserving](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map#description), we apply a FIFO strategy when `drain`ing the trace buckets (see the small illustration after this list). This is in line with our [develop spec](https://develop.sentry.dev/sdk/telemetry/telemetry-processor/backend-telemetry-processor/#:~:text=The%20span%20buffer,in%20the%20buffer.) for the telemetry processor, but might lead to cases where new traces are dropped by the promise buffer if a lot of concurrently running traces are flushed. I think that's a fine trade-off.

ref #19119
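For context on the FIFO point: because `Map` iterates entries in insertion order, draining the trace buckets in iteration order naturally yields the oldest trace first. A tiny standalone illustration (not code from this PR):

```ts
// Map preserves insertion order, so draining buckets in iteration order is FIFO per trace.
const buckets = new Map<string, string[]>();
buckets.set('trace-a', ['span-1']);
buckets.set('trace-b', ['span-2', 'span-3']);

for (const [traceId, spans] of buckets) {
  // 'trace-a' is visited before 'trace-b' because it was inserted first
  console.log(traceId, spans.length);
}
```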