
Conversation


Copilot AI commented Dec 10, 2025

The WebSocketOutputStream.flush() method was blocking for 100ms per iteration while waiting for the queue to drain, creating a throughput bottleneck for streaming operations like Exec.exec().

Changes

  • Reduced sleep interval from 100ms to 1ms in flush loop
  • Added constants FLUSH_WAIT_MILLIS (1ms) and MAX_FLUSH_ITERATIONS (10000) to replace magic numbers
  • Maintained 10-second total timeout (10000 iterations × 1ms)

Before:

Thread.sleep(100);  // Wait 100ms per iteration
if (i++ > 100) {    // 100 iterations = 10s timeout

After:

Thread.sleep(FLUSH_WAIT_MILLIS);      // Wait 1ms per iteration
if (i++ > MAX_FLUSH_ITERATIONS) {     // 10000 iterations = 10s timeout

This provides up to 100× faster flush behavior while preserving the same maximum wait time.
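The before/after fragments above can be expanded into a minimal, self-contained sketch of the polling pattern. Note this is an illustration, not the library's actual code: the class name `FlushSketch`, the simulated `queueSize()` backlog, and the exception messages are all hypothetical stand-ins for the real `WebSocketStreamHandler` internals.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the revised flush loop. FLUSH_WAIT_MILLIS and
// MAX_FLUSH_ITERATIONS mirror the constants named in the PR description;
// queueSize() stands in for the WebSocket client's outbound send queue.
public class FlushSketch {
    static final long FLUSH_WAIT_MILLIS = 1;       // 1 ms per iteration
    static final int MAX_FLUSH_ITERATIONS = 10000; // 10000 x 1 ms = 10 s cap

    private final AtomicInteger queued = new AtomicInteger(5); // simulated backlog

    int queueSize() {
        // Simulate the send queue draining one message per poll.
        return queued.getAndUpdate(n -> Math.max(0, n - 1));
    }

    /** Poll until the queue drains or the 10-second budget is exhausted. */
    public int flush() throws IOException {
        int i = 0;
        while (queueSize() > 0) {
            try {
                Thread.sleep(FLUSH_WAIT_MILLIS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while flushing", e);
            }
            if (i++ > MAX_FLUSH_ITERATIONS) {
                throw new IOException("Flush timed out after 10 seconds");
            }
        }
        return i; // number of 1 ms polls spent waiting
    }

    public static void main(String[] args) throws IOException {
        FlushSketch s = new FlushSketch();
        System.out.println("drained after " + s.flush() + " polls");
    }
}
```

With the old 100 ms interval, each of those polls would have cost 100 ms, so even a queue that drains in a handful of iterations stalled the caller for hundreds of milliseconds; at 1 ms the same loop resolves almost immediately while the worst-case timeout stays at 10 seconds.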

Original prompt

This section details the original issue you should resolve

<issue_title>Sending messages via WebSocketOutputStream is slow</issue_title>
<issue_description>Describe the bug
Sending messages through the WebSocketStreamHandler.WebSocketOutputStream is quite slow. This behavior is mainly caused by the flush() method. We often observed that flush() only returned after 100 ms, which reduced throughput significantly.

The root cause is the long wait time (100 ms!) in the following code snippet:

      while (WebSocketStreamHandler.this.socket.queueSize() > 0) {
        try {
          Thread.sleep(100);
          ...

Client Version
19.0.1

Kubernetes Version
1.29

Java Version
Java 17

To Reproduce
  1. Invoke Exec.exec(...) to create an ExecProcess.
  2. Use the output stream returned by Process.getOutputStream().
  3. Send data using write(...) calls and flush the stream afterwards.

Expected behavior
A wait time of 1 ms should be sufficient.

Server (please complete the following information):

  • OS: Linux and Windows
  • Environment: container and IDE
</issue_description>

<agent_instructions>Please evaluate if this is still an issue and shorten the wait time to 1ms</agent_instructions>

Comments on the Issue (you are @copilot in this section)

@brendandburns Happy to take PRs to improve this (for example making the wait configurable) or alternate mechanisms for ensuring the queue is flushed.


@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Dec 10, 2025
Co-authored-by: brendandburns <5751682+brendandburns@users.noreply.github.com>
@k8s-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Copilot
Once this PR has been reviewed and has the lgtm label, please ask for approval from brendandburns. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Co-authored-by: brendandburns <5751682+brendandburns@users.noreply.github.com>
Copilot AI changed the title [WIP] Fix slow message sending via WebSocketOutputStream Reduce WebSocket flush wait from 100ms to 1ms Dec 10, 2025
Copilot AI requested a review from brendandburns December 10, 2025 19:13
