
feat: add recv buffer size option with improved recv buffer capacity recovery#904

Open
NedAnd1 wants to merge 3 commits into hyperium:master from NedAnd1:bigger-reads

Conversation

NedAnd1 commented May 6, 2026

Summary

This PR addresses the receiver-side perf issue described in #902
by allowing the frame decoder's read buffer size to be configurable
and by mitigating its capacity degradation with a simple buffer manager.

Problem

The frame decoder's recv buffer is small, and even if its size were increased,
the buffer's effective capacity would degrade over time and never recover, causing small read() calls.

Solution

Adds a recv_buffer_size option to both client::Builder and server::Builder,
allowing users to control the initial capacity of the frame decoder's read buffer.

Introduces a BufferManager that proactively recovers buffer capacity for decoders.
When the underlying decoder returns Ok(None) (no complete frame available)
and the buffer's capacity has fallen below half the initial capacity,
the manager replaces the primary buffer with a secondary buffer
that has a higher likelihood of being able to reclaim its original buffer space.

This mitigates the BytesMut capacity degradation problem without depending on upstream changes.
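
A minimal sketch of the swap idea (illustrative names, thresholds, and details; not the PR's actual code):

```rust
use bytes::BytesMut;

struct BufferManager {
    initial_capacity: usize,
    primary: BytesMut,
    secondary: BytesMut,
}

impl BufferManager {
    fn new(initial_capacity: usize) -> Self {
        BufferManager {
            initial_capacity,
            primary: BytesMut::with_capacity(initial_capacity),
            secondary: BytesMut::with_capacity(initial_capacity),
        }
    }

    /// Called after the decoder returns Ok(None). If the primary buffer's
    /// capacity has degraded below half of the initial capacity, switch to
    /// the secondary buffer, which is less likely to still be referenced by
    /// yielded frames and can therefore recover its original allocation.
    fn maybe_swap(&mut self) {
        if self.primary.capacity() < self.initial_capacity / 2 {
            // Reserving on the idle secondary buffer usually reuses its
            // original allocation instead of allocating fresh memory.
            self.secondary.reserve(self.initial_capacity);
            // Carry over any partially received frame bytes.
            self.secondary.extend_from_slice(&self.primary);
            self.primary.clear();
            std::mem::swap(&mut self.primary, &mut self.secondary);
        }
    }
}
```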

Validation

  • h2 cargo tests
  • external scenario tests
  • external perf tests

Perf results for a branch including the commits in this PR:

   OS: linux (kernel 6.6), Arch: arm64, TCP_NODELAY: True

   Data Frame Size │ master (throughput) │ batch-data-frames (throughput) │ Improvement
      4 kiB        │ 0.97 GB/s           │ 1.14 GB/s                      │ 1.18x
     16 kiB        │ 2.68 GB/s           │ 3.47 GB/s                      │ 1.29x
     64 kiB        │ 3.54 GB/s           │ 6.19 GB/s                      │ 1.75x
    256 kiB        │ 4.64 GB/s           │ 7.50 GB/s                      │ 1.62x
   1024 kiB        │ 4.28 GB/s           │ 7.81 GB/s                      │ 1.82x

Sender-side PR: #903

@NedAnd1 NedAnd1 changed the title feat: add recv buffer size option and improve buffer capacity recovery feat: add recv buffer size option with improved recv buffer capacity recovery May 6, 2026
seanmonstar (Member)

Improving the recv buffer performance is a great idea! Though, I wonder why you swap between two? With hyper's http1 path, we keep track of a target minimum read amount (though doesn't have to be dynamic), and before each new message, a call to reserve(min_read_size) makes sure that the BytesMut has enough space. Is that not sufficient?

NedAnd1 (Author) commented May 7, 2026

> Improving the recv buffer performance is a great idea! Though, I wonder why you swap between two? With hyper's http1 path, we keep track of a target minimum read amount (though doesn't have to be dynamic), and before each new message, a call to reserve(min_read_size) makes sure that the BytesMut has enough space. Is that not sufficient?

Thanks, that approach does lead to higher memory allocation churn.
Calling reserve(...) on the current buffer has a very high likelihood of needing a new allocation if it's below the threshold,
since the yielded data frames are still in use by higher-level components.
Calling reserve(...) on the secondary buffer is much more likely to reclaim that buffer's original space,
since it has a much smaller chance of still being referenced by higher-level components.

This observation initially came from explicitly calling BytesMut::try_reclaim(capacity - len + 1)
at different locations with a single buffer: it consistently failed,
especially at the location where the buffer is most likely to be empty.
By contrast, BytesMut::try_reclaim on the secondary buffer consistently succeeded
at the location where the primary buffer is most likely to be empty.
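
For illustration, here is a small standalone snippet (not from this PR) showing why try_reclaim tends to fail while a yielded frame still references the allocation and succeed once it doesn't; the exact outcome depends on the bytes crate's internal representation:

```rust
use bytes::{Bytes, BytesMut};

fn main() {
    let mut buf = BytesMut::with_capacity(16 * 1024);
    buf.extend_from_slice(&[0u8; 8 * 1024]);

    // A yielded frame keeps part of the allocation alive, like a DATA frame
    // handed to higher-level components.
    let frame: Bytes = buf.split_to(8 * 1024).freeze();

    // While `frame` is alive, the buffer cannot cheaply regain its space.
    println!("reclaim while frame alive: {}", buf.try_reclaim(16 * 1024));

    drop(frame);

    // Once nothing else references the allocation, the original space can
    // typically be reclaimed without a new allocation.
    println!("reclaim after frame dropped: {}", buf.try_reclaim(16 * 1024));
}
```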

Any additional thoughts or questions?
