Improve StreamIngest throughput by prefetching in bulk and chunking of fetched messages
#797
Conversation
… shared}.StreamIngest'.
c6e6640 to ebe8644
@seigert I have released v2.9.0 with your changes.

@ahjohannessen Thanks!
@seigert Seems like something broke with streaming. In our Play Framework app that streams gRPC over the wire from a Doobie-backed service app, e.g.: now hangs and times out with
That's very strange, because most of my tests used exactly the same interop: unary client request and streaming server reply, and not once did I get

Can you maybe patch your local version of

I'll try to create a new MR with the old acquire logic; I just need to decide on the best way to pass the 'already requested' number of messages to
Yes, I was thinking of doing that. The things I noticed were

Perhaps just request(1) to get things "moving" is all that is necessary.
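A minimal sketch of that "request(1) to get things moving" idea, with hypothetical names (`start` and `request` stand in for grpc-java's `ClientCall.start`/`ClientCall.request`, wrapped in `F`), not the actual fs2-grpc code:

```scala
import cats.effect.Sync
import cats.syntax.all._

object KickoffSketch {
  // Right after the call is started, ask for exactly one message so the server
  // begins emitting even before the consumer has pulled from the internal queue
  // and triggered any bulk-prefetch logic.
  def startIngest[F[_]: Sync](
      start: F[Unit],          // e.g. Sync[F].delay(call.start(listener, headers))
      request: Int => F[Unit]  // e.g. n => Sync[F].delay(call.request(n))
  ): F[Unit] =
    start *> request(1)
}
```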
Hello there!
While benchmarking our internal fs2-grpc-based gRPC server against both ghz and our fs2-grpc-based client, I've found that the throughput of the fs2-grpc client is 5-10 times lower than that of ghz for streaming calls.

After some research I've found that fs2.grpc.client.StreamIngest requests messages from the channel one by one, even for prefetchN > 1; moreover, prefetchN is always equal to 1 and it's not possible to set another value from the server options.

Most likely all of this is related to #386 and #503 -- but all activity there ended 2-3 years ago. :(
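A rough illustration of that one-message-at-a-time pattern, using hypothetical names rather than the actual StreamIngest internals (`request` stands in for grpc-java's `call.request(n)`):

```scala
import cats.effect.Sync
import cats.effect.std.Queue
import cats.syntax.all._

object OneByOneSketch {
  // Each delivered message is buffered and immediately followed by a single
  // request(1), so at most one message is ever in flight regardless of prefetchN.
  def onMessage[F[_]: Sync, T](
      queue: Queue[F, T],        // internal buffer consumed by the fs2 stream
      request: Int => F[Unit]    // asks the underlying channel for n more messages
  )(msg: T): F[Unit] =
    queue.offer(msg) *> request(1)
}
```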
So I've decided to reimplement a little of the internal logic of StreamIngest; it now requests max(0, (limit - (queued + already_requested))) messages every time the internal message queue is either empty or blocked.

According to my benchmarks, this improves both client- and server-side streaming throughput 2-3 times, both in individual messages per second and in megabytes per second.
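A rough sketch of the bulk-prefetch idea, with hypothetical names rather than the actual StreamIngest code (`request` stands in for grpc-java's `call.request(n)`, `limit` for prefetchN):

```scala
import cats.effect._
import cats.effect.std.Queue
import cats.syntax.all._
import fs2.Stream

final class BulkIngestSketch[F[_]: Concurrent, T](
    queue: Queue[F, T],       // messages received but not yet consumed
    requested: Ref[F, Int],   // messages requested but not yet delivered
    request: Int => F[Unit],  // asks the underlying channel for n more messages
    limit: Int                // prefetch window (prefetchN)
) {

  // Request max(0, limit - (queued + already_requested)) more messages.
  private def ensureMessages: F[Unit] =
    for {
      queued  <- queue.size
      pending <- requested.get
      demand   = math.max(0, limit - (queued + pending))
      _       <- (requested.update(_ + demand) *> request(demand)).whenA(demand > 0)
    } yield ()

  // Called by the grpc-java listener when a message arrives.
  def onMessage(msg: T): F[Unit] =
    requested.update(_ - 1) *> queue.offer(msg)

  // Consumer side: refill the window whenever the queue runs empty, then take.
  def messages: Stream[F, T] =
    Stream.repeatEval(
      queue.size.flatMap(n => ensureMessages.whenA(n == 0)) *> queue.take
    )
}
```

The point of topping the window up to `limit` in one call, instead of issuing request(1) per message, is to cut the per-message round trips between the listener and the channel.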
This implementation is still about 3 times slower than ghz for requests of the same message payload size, but I think further improvement would mostly require work in grpc-java internals, around backpressure and message decoding.