
Conversation

@arturobernalg
Member

Implements Option 1 from the discussion: add a configurable cap on the async execution pipeline to prevent the recursive callback loop that led to StackOverflowError.
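
In rough terms, the cap hands out a fixed number of execution slots before an exchange is started; when no slot is free the request is rejected immediately instead of adding yet another callback to the chain. A minimal sketch of that idea, with illustrative names (PipelineGuard, maxQueuedRequests) rather than the actual classes in this change-set:

import java.util.concurrent.Semaphore;

// Illustrative sketch only: a fixed pool of execution slots; callers that
// cannot obtain a slot are rejected immediately rather than queued.
final class PipelineGuard {

    private final Semaphore slots;

    PipelineGuard(final int maxQueuedRequests) {
        this.slots = new Semaphore(maxQueuedRequests);
    }

    // Returns true if the exchange may start. The caller must invoke
    // release() exactly once when the exchange completes, fails, or is
    // cancelled, otherwise capacity leaks.
    boolean tryStart() {
        return slots.tryAcquire();
    }

    void release() {
        slots.release();
    }
}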

@ok2c
Member

ok2c commented Nov 29, 2025

@arturobernalg Could you please rebase this change-set?

Member

@ok2c ok2c left a comment

@arturobernalg This is a good start. Please also do not forget about InternalH2AsyncExecRuntime.

@arturobernalg arturobernalg requested a review from ok2c December 1, 2025 10:45
@arturobernalg arturobernalg force-pushed the HTTPCLIENT-2398-1 branch 2 times, most recently from 0996102 to 9b9e1d4 on December 1, 2025 13:53
@ok2c
Member

ok2c commented Dec 3, 2025

@arturobernalg Looks good. Can we add the same functionality to H2AsyncClientBuilder as well?

@arturobernalg
Member Author

Looks good. Can we add the same functionality to H2AsyncClientBuilder as well?

@ok2c please do another pass

final RequestConfig requestConfig = context.getRequestConfigOrDefault();
@SuppressWarnings("deprecation")
final Timeout connectTimeout = requestConfig.getConnectTimeout();
@SuppressWarnings("deprecation") final Timeout connectTimeout = requestConfig.getConnectTimeout();
Member

@arturobernalg There is again plenty of white space / formatting noise. Please try to avoid it.

}
}

private AsyncClientExchangeHandler guard(final AsyncClientExchangeHandler handler) {
Member

@arturobernalg I am not super crazy about code re-use at all cost but in this case there is just too much duplicate code. Please consider extracting into a separate proxy class similar to RequestEntityProxy / ResponseEntityProxy
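
One way to read that suggestion, sketched below with a small stand-in interface rather than the full AsyncClientExchangeHandler contract: a single delegating proxy owns the slot-release logic, so every call site only wraps the handler once instead of repeating the guard code.

import java.util.concurrent.atomic.AtomicBoolean;

// ExchangeHandler is a stand-in for the real AsyncClientExchangeHandler,
// trimmed to the two callbacks the sketch needs.
interface ExchangeHandler {
    void failed(Exception cause);
    void cancel();
}

// Analogous to RequestEntityProxy / ResponseEntityProxy: delegates every
// call and releases the execution slot exactly once on failure or cancel.
final class GuardedExchangeHandlerProxy implements ExchangeHandler {

    private final ExchangeHandler delegate;
    private final Runnable releaseSlot;
    private final AtomicBoolean released = new AtomicBoolean();

    GuardedExchangeHandlerProxy(final ExchangeHandler delegate, final Runnable releaseSlot) {
        this.delegate = delegate;
        this.releaseSlot = releaseSlot;
    }

    private void releaseOnce() {
        // Guards against a double release if both failed() and cancel() fire.
        if (released.compareAndSet(false, true)) {
            releaseSlot.run();
        }
    }

    @Override
    public void failed(final Exception cause) {
        try {
            delegate.failed(cause);
        } finally {
            releaseOnce();
        }
    }

    @Override
    public void cancel() {
        try {
            delegate.cancel();
        } finally {
            releaseOnce();
        }
    }
}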


}

private static final ConcurrentMap<AsyncClientConnectionManager, AtomicInteger> QUEUE_COUNTERS =
Member

@arturobernalg This is bad. We need to find a way to maintain the counter without using a static hash map. The limit should ideally be per connection. If this is too difficult, then, per client instance.
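
For comparison, a per-client counter does not need any shared map: it can live as a plain instance field of a component created once per client, so nothing outlives the client and nothing can leak. A sketch with illustrative names:

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical per-client limiter; no static state, no map keyed by the
// connection manager.
final class PerClientRequestCounter {

    private final int maxConcurrent;
    private final AtomicInteger active = new AtomicInteger();

    PerClientRequestCounter(final int maxConcurrent) {
        this.maxConcurrent = maxConcurrent;
    }

    // Attempts to reserve a slot; returns false once the cap is reached.
    boolean tryAcquire() {
        for (;;) {
            final int current = active.get();
            if (current >= maxConcurrent) {
                return false;
            }
            if (active.compareAndSet(current, current + 1)) {
                return true;
            }
        }
    }

    // Must be called on completion, failure, or cancellation.
    void release() {
        active.decrementAndGet();
    }
}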

Member Author

@ok2c please do another pass

Member

@arturobernalg Do not see how this is any better. The static map is still there and what you have is essentially a major resource leak right there.

@arturobernalg arturobernalg force-pushed the HTTPCLIENT-2398-1 branch 2 times, most recently from 6291a62 to 707f4cb on December 7, 2025 06:27
@arturobernalg arturobernalg requested a review from ok2c December 7, 2025 06:28


@arturobernalg arturobernalg force-pushed the HTTPCLIENT-2398-1 branch 2 times, most recently from 07a3647 to 1272d81 on December 7, 2025 10:20
@arturobernalg
Member Author

@ok2c got it. No more static var.

@arturobernalg arturobernalg force-pushed the HTTPCLIENT-2398-1 branch 2 times, most recently from ad57ea3 to 319e3d8 on December 7, 2025 12:03
@arturobernalg arturobernalg requested a review from ok2c December 7, 2025 12:04
@ok2c
Member

ok2c commented Dec 12, 2025

@arturobernalg This is better. Please mark new setters in the builder classes as experimental.

Okay, this is what we have now

Users could now limit the number of concurrent active requests on a per-client basis. How useful is that, though? Say one sets the max concurrent request number to 1000, then opens 100 active connections and distributes requests equally over them: 10 concurrent requests per connection is very little. However, having just one connection and trying to execute 1000 requests concurrently over it would likely be wrong.

What would be more useful is to cap the total number of requests on a per connection basis.

What could also be useful is enqueuing requests instead of failing them and executing them as soon as the number of concurrent requests drops below the maximum. Moreover, the same queuing mechanism could be used to re-queue and retry requests that did not get executed due to the opposite endpoint closing the connection.

This change-set is a start in the right direction. Let's keep it, though, clearly marked as experimental for now.
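
As a rough illustration of the queuing idea above (explicitly not part of this change-set): exchanges over the cap are parked in a queue and started as soon as an active one finishes. The names and the synchronous Runnable shape are placeholders for the real asynchronous machinery.

import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical limiter that queues instead of failing fast. The same queue
// could later be reused to re-queue exchanges that never started because
// the peer closed the connection.
final class QueueingLimiter {

    private final int maxActive;
    private final Queue<Runnable> pending = new ArrayDeque<>();
    private int active;

    QueueingLimiter(final int maxActive) {
        this.maxActive = maxActive;
    }

    synchronized void submit(final Runnable exchange) {
        if (active < maxActive) {
            active++;
            exchange.run();
        } else {
            pending.add(exchange);
        }
    }

    // Invoked when an exchange completes, fails, or is cancelled.
    synchronized void onComplete() {
        final Runnable next = pending.poll();
        if (next != null) {
            next.run();       // start the next queued exchange in the freed slot
        } else {
            active--;
        }
    }
}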

…ry. Add configurable maxQueuedRequests (default unlimited). Release slot on fail/cancel/close to avoid leaks
@arturobernalg
Member Author

What could also be useful is enqueuing requests instead of failing them and executing them as soon as the number of concurrent requests drops below the maximum. Moreover, the same queuing mechanism could be used to re-queue and retry requests that did not get executed due to the opposite endpoint closing the connection.

@ok2c Marked the new builder setters as @Experimental. Agreed on the limitations of a per-client cap; I'll keep this change-set scoped to fail-fast limiting for now. I'll follow up with separate PRs for (1) per-connection caps and (2) optional bounded queuing / re-queuing of not-yet-started requests.
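
For reference, usage could end up looking roughly like the snippet below. The setter name setMaxQueuedRequests is only an assumption taken from the commit message above, so check the merged builder API before relying on it.

import org.apache.hc.client5.http.impl.async.CloseableHttpAsyncClient;
import org.apache.hc.client5.http.impl.async.HttpAsyncClients;

// Hypothetical usage; setMaxQueuedRequests is assumed, not confirmed API.
final CloseableHttpAsyncClient client = HttpAsyncClients.custom()
        .setMaxQueuedRequests(256)   // reject exchanges beyond 256 in flight
        .build();
client.start();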

@ok2c
Member

ok2c commented Dec 25, 2025

I’ll follow up with separate PRs for (1) per-connection caps

@arturobernalg Please note this will likely have to be done in core.

@arturobernalg
Member Author

I’ll follow up with separate PRs for (1) per-connection caps

@arturobernalg Please note this will likely have to be done in core.

@ok2c I'll take a look at the option of moving it to core.
