
Conversation


@vinniefalco vinniefalco commented Jan 21, 2026

Summary by CodeRabbit

  • New Features

    • Coroutine-based HTTP response streaming with automatic content header handling.
    • Router option to serve static files from a configured path.
  • Refactor

    • Public APIs migrated from legacy namespace/types to new HTTP equivalents.
    • Multipart/form-data, file-upload and stdin-body request types removed; request bodies simplified.
  • Tests

    • Unit tests for removed components deleted.
  • Chores

    • Repository attributes added; build preset configuration simplified.



coderabbitai bot commented Jan 21, 2026

📝 Walkthrough

Walkthrough

Removed body_source and serve_static components (headers, implementations, tests), migrated many capy:: types/usages to http:: equivalents (application, polystore, compression services), and added coroutine-based streaming support in corosio_route_params with a serializer-backed write path.

Changes

Cohort / File(s) Summary
body_source Removal
`include/boost/beast2/server/body_source.hpp`, `src/server/body_source.cpp`, `test/unit/server/body_source.cpp`
Deleted the type-erased body_source abstraction: header, implementation (dtor/move-assignment), and unit tests removed; public API surface eliminated.
serve_static Removal
`include/boost/beast2/server/serve_static.hpp`, `src/server/serve_static.cpp`, `test/unit/server/serve_static.cpp`
Removed serve_static public interface and implementation (options/dotfiles, MIME/path helpers, file I/O and streaming); corresponding tests deleted.
Coroutine Streaming API
`include/boost/beast2/server/route_handler_corosio.hpp`
Added serializer stream member and overrides: end() and protected write_impl(capy::const_buffer_param) to initialize streaming, feed serializer, and write serialized buffers to the socket in a coroutine-driven loop.
http:: Migration (application / polystore / services)
`src/http_server.cpp`, `include/boost/beast2/server/http_stream.hpp`, `src/log_service.cpp`, `include/boost/beast2/log_service.hpp`, `example/**` (multiple files)
Replaced capy:: with http:: types and includes (application, polystore, brotli/zlib installers); updated function signatures and feature macros across server and example/client code.
Example & Build Updates
`example/server/main.cpp`, `example/*/CMakeLists.txt`, `.gitattributes`, `CMakePresets.json`
Example wiring changed to http::serve_static(argv[3]) (old handler disabled via #if 0), CMake files updated to link Boost::http_* targets instead of Boost::capy_*, added .gitattributes, and simplified CMake presets.
Client message & multipart reductions
`example/client/burl/message.hpp`, `example/client/burl/message.cpp`, `example/client/burl/multipart_form.`, `example/client/burl/options.cpp`, `example/client/burl/` (connect/main etc.)
Removed multipart_form, file_body, stdin_body, and related streaming sources; message body variant reduced to string_body; multipart/form handling and file uploads now produce runtime errors or are unavailable; adjusted message serialization logic and content-type handling.

Sequence Diagram(s)

mermaid
sequenceDiagram
participant Handler as corosio_route_params
participant Serializer as http::serializer::stream
participant Socket as Corosio Socket
participant Client as TCP Client

Handler->>Serializer: initialize serializer (headers + first body chunk)
loop while serializer produces buffers
Serializer->>Handler: provide buffer(s)
Handler->>Socket: write_impl(buffers)
Socket->>Client: socket.write(data)
end
alt chunked terminator
Handler->>Socket: write_impl("0\r\n\r\n")
Socket->>Client: socket.write(chunk terminator)
end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I nibble bytes and stitch the stream,
Gone are old sources, fled the dream,
Coroutines hum and serializers sing,
Static paths point where browsers bring,
🥕 Hop, hop — the rabbit ships the thing!

🚥 Pre-merge checks | ✅ 1 | ❌ 1
❌ Failed checks (1 inconclusive)

  • Title check: ❓ Inconclusive
    Explanation: The PR title 'buffer-param' is vague and does not clearly communicate the main purpose or scope of the changes. The changeset involves extensive refactoring from capy to http namespaces, removal of several components (body_source, serve_static), and multiple API updates across numerous files.
    Resolution: Consider using a more descriptive title that captures the primary objective, such as 'Migrate from capy to http namespace' or 'Refactor HTTP components and remove deprecated functionality', to better reflect the substantial changes in this pull request.

✅ Passed checks (1 passed)

  • Description Check: ✅ Passed
    Explanation: Check skipped - CodeRabbit's high-level summary is enabled.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@include/boost/beast2/server/route_handler_corosio.hpp`:
- Around line 115-131: write_impl currently truncates if buffers.copy_to(bufs,
max_bufs) exceeds max_bufs and ignores partial writes from stream.write_some;
fix by processing the full buffer_sequence in chunks: repeatedly copy up to
max_bufs via buffers.copy_to(...) (using an offset/iterator or repeatedly
removing copied prefix) and for each capy::mutable_buffer attempt to write until
its bytes are consumed by looping on stream.write_some and advancing the buffer
by the returned written count on partial writes, retrying on transient errors
and returning ec on fatal errors; update references in write_impl to use
buffers.copy_to, capy::mutable_buffer bufs[], and stream.write_some when
implementing this chunked+retry logic.
- Around line 99-105: The Transfer-Encoding check currently uses exact equality
on res.value_or(http::field::transfer_encoding, "") which fails for
comma-separated or mixed-case tokens; update the condition to parse the header
string returned by res.value_or(...), split on commas, trim whitespace, and
perform a case-insensitive comparison of each token to "chunked" (so strings
like "gzip, chunked" or "Chunked" are detected). Replace the simple equality
test near the write_some call (the block using res and terminator) with this
tokenized/case-insensitive check and only send the "0\r\n\r\n" terminator when a
"chunked" token is present.
- Around line 64-87: The loop that writes serializer buffers uses
stream.write_some inside route_handler_corosio.hpp but ignores partial writes:
it sums buf.size() and calls serializer.consume(written) instead of the actual
bytes sent, which can drop data; update the write loop to handle partial writes
per buffer by retrying until each buffer's bytes are fully written (adjusting
the buffer pointer/offset passed to capy::const_buffer based on bytes written),
accumulate the actual bytes written into a variable (using the returned n from
stream.write_some) and call serializer.consume(actual_written) after sending all
bytes or on error; reference the serializer object, the for-loop over bufs,
stream.write_some, capy::const_buffer, and serializer.consume when making the
change.

Comment on lines 64 to 90
while(! serializer.is_done())
{
auto rv = serializer.prepare();
if(! rv)
co_return rv.error();

auto bufs = *rv;

// Write all buffers
for(auto const& buf : bufs)
{
auto [ec, n] = co_await stream.write_some(
capy::const_buffer(buf.data(), buf.size()));
if(ec)
co_return ec;
}

// Calculate total size written
std::size_t written = 0;
for(auto const& buf : bufs)
written += buf.size();

serializer.consume(written);
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's find and examine the specific file
find . -name "route_handler_corosio.hpp" -type f

Repository: cppalliance/beast2

Length of output: 180


🏁 Script executed:

# Check the actual code at the specified lines
if [ -f "include/boost/beast2/server/route_handler_corosio.hpp" ]; then
  wc -l "include/boost/beast2/server/route_handler_corosio.hpp"
fi

Repository: cppalliance/beast2

Length of output: 119


🏁 Script executed:

# Get the context around lines 64-87
if [ -f "include/boost/beast2/server/route_handler_corosio.hpp" ]; then
  sed -n '50,100p' "include/boost/beast2/server/route_handler_corosio.hpp" | cat -n
fi

Repository: cppalliance/beast2

Length of output: 1896


🏁 Script executed:

# Look for write_some function signature and usage in the codebase
rg "write_some" --type cpp -B 2 -A 2 | head -80

Repository: cppalliance/beast2

Length of output: 5321


🏁 Script executed:

# Check what capy::const_buffer is and stream types
rg "class.*stream|write_some|const_buffer" --type hpp --type cpp | grep -E "(stream|write_some)" | head -40

Repository: cppalliance/beast2

Length of output: 89


🏁 Script executed:

# Verify write_some semantics by checking http_stream.hpp for pattern comparison
sed -n '1,50p' "include/boost/beast2/server/http_stream.hpp" | cat -n

Repository: cppalliance/beast2

Length of output: 2025


🏁 Script executed:

# Get more context from http_stream.hpp to see if it has the same issue
rg "write_some|consume" "include/boost/beast2/server/http_stream.hpp" -B 3 -A 3

Repository: cppalliance/beast2

Length of output: 410


🏁 Script executed:

# Let's get the exact code around lines 64-87 to confirm the bug
sed -n '60,90p' "include/boost/beast2/server/route_handler_corosio.hpp" | cat -n

Repository: cppalliance/beast2

Length of output: 1110


🏁 Script executed:

# Compare with http_stream.hpp's pattern - get full context there
rg "write_some" "include/boost/beast2/server/http_stream.hpp" -B 10 -A 10 | head -60

Repository: cppalliance/beast2

Length of output: 620


🏁 Script executed:

# Check if there are other instances in route_handler_corosio.hpp with the same pattern
rg "write_some" "include/boost/beast2/server/route_handler_corosio.hpp" -B 5 -A 8

Repository: cppalliance/beast2

Length of output: 1307


Handle partial writes before consuming serializer buffers.

write_some may write fewer bytes than requested. The current code captures the return value n (bytes actually written) but ignores it and instead sums buffer sizes, then calls serializer.consume(written) with the full buffer size regardless of actual bytes sent. This causes data loss because the serializer skips more bytes than were actually transmitted, corrupting the response stream.

Compare with include/boost/beast2/server/http_stream.hpp, which correctly accumulates actual bytes written (written += n) before consuming. The fix requires retrying partial writes per buffer until all bytes are sent or an error occurs.

🐛 Proposed fix (write all bytes, consume actual count)
-            auto bufs = *rv;
-
-            // Write all buffers
-            for(auto const& buf : bufs)
-            {
-                auto [ec, n] = co_await stream.write_some(
-                    capy::const_buffer(buf.data(), buf.size()));
-                if(ec)
-                    co_return ec;
-            }
-
-            // Calculate total size written
-            std::size_t written = 0;
-            for(auto const& buf : bufs)
-                written += buf.size();
+            auto bufs = *rv;
+            std::size_t written = 0;
+            for(auto const& buf : bufs)
+            {
+                std::size_t offset = 0;
+                auto const* data =
+                    static_cast<char const*>(buf.data());
+                while(offset < buf.size())
+                {
+                    auto [ec, n] = co_await stream.write_some(
+                        capy::const_buffer(
+                            data + offset,
+                            buf.size() - offset));
+                    if(ec)
+                        co_return ec;
+                    offset += n;
+                    written += n;
+                }
+            }
🤖 Prompt for AI Agents
In `@include/boost/beast2/server/route_handler_corosio.hpp` around lines 64 - 87,
The loop that writes serializer buffers uses stream.write_some inside
route_handler_corosio.hpp but ignores partial writes: it sums buf.size() and
calls serializer.consume(written) instead of the actual bytes sent, which can
drop data; update the write loop to handle partial writes per buffer by retrying
until each buffer's bytes are fully written (adjusting the buffer pointer/offset
passed to capy::const_buffer based on bytes written), accumulate the actual
bytes written into a variable (using the returned n from stream.write_some) and
call serializer.consume(actual_written) after sending all bytes or on error;
reference the serializer object, the for-loop over bufs, stream.write_some,
capy::const_buffer, and serializer.consume when making the change.

Comment on lines 99 to 105
// If using chunked encoding, send "0\r\n\r\n"
if(res.value_or(
http::field::transfer_encoding, "") == "chunked")
{
static constexpr char terminator[] = "0\r\n\r\n";
auto [ec, n] = co_await stream.write_some(
capy::const_buffer(terminator, 5));

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "route_handler_corosio.hpp" -type f

Repository: cppalliance/beast2

Length of output: 117


🏁 Script executed:

fd -t f "route_handler_corosio.hpp"

Repository: cppalliance/beast2

Length of output: 115


🏁 Script executed:

rg -l "Transfer-Encoding" --type cpp

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

cat -n include/boost/beast2/server/route_handler_corosio.hpp | sed -n '90,115p'

Repository: cppalliance/beast2

Length of output: 974


🏁 Script executed:

rg -i "transfer.encoding" --type-list

Repository: cppalliance/beast2

Length of output: 451


🏁 Script executed:

rg -i "chunked" include/boost/beast2/server/ -A 3 -B 3

Repository: cppalliance/beast2

Length of output: 1262


🏁 Script executed:

rg -i "transfer.encoding" . --type hpp --type cpp

Repository: cppalliance/beast2

Length of output: 89


🌐 Web query:

RFC 7230 Transfer-Encoding header format multiple codings

💡 Result:

Short answer: Transfer-Encoding is a comma-separated (or repeated-header) ordered list of transfer-coding tokens; the codings are listed in the order they were applied (first applied first). The ABNF is Transfer-Encoding = 1#transfer-coding. The chunked coding, if used, MUST appear only once and if present as the final coding it determines framing; chunked MUST NOT be applied more than once. Also, when Transfer-Encoding is present it overrides Content-Length. [1][2]

References:

  • RFC 7230 section 3.3.1 and section 4 (transfer-codings). [1]
  • HTTPbis online rendering of RFC 7230 (same material). [2]

🌐 Web query:

RFC 7230 transfer-coding token case-sensitive

💡 Result:

Yes — RFC 7230 states that all transfer-coding names are case-insensitive (Section 4). [1][2]

Sources:
[1] RFC 7230 (rfc-editor), Section 4.
[2] RFC 7230 (IETF datatracker), Section 4.


🏁 Script executed:

find . -name "*.hpp" -o -name "*.h" | head -20

Repository: cppalliance/beast2

Length of output: 838


🏁 Script executed:

rg "transfer.coding|transfer_coding" . --max-count 10

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

rg "transfer_encoding" include/ -A 5 -B 5

Repository: cppalliance/beast2

Length of output: 1119


🏁 Script executed:

rg "field::" include/boost/beast2/ -l | head -10

Repository: cppalliance/beast2

Length of output: 115


🏁 Script executed:

find include/boost/beast2 -name "*.hpp" -type f | xargs grep -l "Transfer-Encoding\|transfer.encoding" 2>/dev/null

Repository: cppalliance/beast2

Length of output: 115


Transfer-Encoding "chunked" check must handle multiple codings and case-insensitivity.

RFC 7230 specifies that Transfer-Encoding is a comma-separated list of transfer-coding tokens (e.g., "gzip, chunked") and all token names are case-insensitive. The current exact-equality check will fail to send the terminating chunk ("0\r\n\r\n") in these cases, leaving clients hanging and violating the chunked encoding protocol. Parse the Transfer-Encoding header to check for the chunked token according to RFC 7230 rules.
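
💡 A minimal sketch of such a tokenized, case-insensitive check (a standalone helper written only for illustration; the helper name has_chunked_token is not part of the library, and the existing res.value_or(...) call would supply its input):

#include <algorithm>
#include <cctype>
#include <string_view>

// Returns true if any comma-separated token of a Transfer-Encoding
// value equals "chunked", ignoring case and surrounding whitespace.
bool has_chunked_token(std::string_view value)
{
    while(! value.empty())
    {
        auto const pos = value.find(',');
        auto token = value.substr(0, pos);
        // trim leading and trailing whitespace
        while(! token.empty() && std::isspace(
            static_cast<unsigned char>(token.front())))
            token.remove_prefix(1);
        while(! token.empty() && std::isspace(
            static_cast<unsigned char>(token.back())))
            token.remove_suffix(1);
        // case-insensitive comparison against "chunked"
        if(token.size() == 7 && std::equal(
            token.begin(), token.end(), "chunked",
            [](char a, char b)
            {
                return std::tolower(
                    static_cast<unsigned char>(a)) == b;
            }))
            return true;
        if(pos == std::string_view::npos)
            break;
        value.remove_prefix(pos + 1);
    }
    return false;
}

The terminator block would then send "0\r\n\r\n" only when has_chunked_token(res.value_or(http::field::transfer_encoding, "")) is true.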

🤖 Prompt for AI Agents
In `@include/boost/beast2/server/route_handler_corosio.hpp` around lines 99 - 105,
The Transfer-Encoding check currently uses exact equality on
res.value_or(http::field::transfer_encoding, "") which fails for comma-separated
or mixed-case tokens; update the condition to parse the header string returned
by res.value_or(...), split on commas, trim whitespace, and perform a
case-insensitive comparison of each token to "chunked" (so strings like "gzip,
chunked" or "Chunked" are detected). Replace the simple equality test near the
write_some call (the block using res and terminator) with this
tokenized/case-insensitive check and only send the "0\r\n\r\n" terminator when a
"chunked" token is present.

Comment on lines 115 to 131
write_impl(capy::buffer_param buffers) override
{
// Extract buffers from type-erased wrapper
constexpr std::size_t max_bufs = 16;
capy::mutable_buffer bufs[max_bufs];
auto const n = buffers.copy_to(bufs, max_bufs);

// Write all buffers
for(std::size_t i = 0; i < n; ++i)
{
auto [ec, written] = co_await stream.write_some(
capy::const_buffer(
bufs[i].data(),
bufs[i].size()));
if(ec)
co_return ec;
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's understand the file structure and locate relevant definitions
find . -type f -name "*.hpp" -o -name "*.h" | head -20

Repository: cppalliance/beast2

Length of output: 838


🏁 Script executed:

# Search for buffer_param definition
rg "class buffer_param|struct buffer_param" --type hpp

Repository: cppalliance/beast2

Length of output: 89


🏁 Script executed:

# Search for write_some implementation and usage patterns
rg "write_some" --type hpp -B 2 -A 2 | head -50

Repository: cppalliance/beast2

Length of output: 89


🏁 Script executed:

# Look at the broader context of the write_impl function
cat include/boost/beast2/server/route_handler_corosio.hpp | head -150

Repository: cppalliance/beast2

Length of output: 3899


🏁 Script executed:

# Search for capy namespace definitions
rg "namespace capy" --type hpp -A 5

Repository: cppalliance/beast2

Length of output: 89


🏁 Script executed:

# Look for copy_to implementation
rg "copy_to\(" --type hpp -B 2 -A 5

Repository: cppalliance/beast2

Length of output: 89


🏁 Script executed:

# Search for buffer_param definition without file type
rg "class buffer_param|struct buffer_param" -A 10

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

# Search for capy includes and namespace
rg "namespace capy" -B 2 -A 5

Repository: cppalliance/beast2

Length of output: 2890


🏁 Script executed:

# Look at the send() method which seems to handle buffers properly for comparison
sed -n '50,105p' include/boost/beast2/server/route_handler_corosio.hpp

Repository: cppalliance/beast2

Length of output: 1784


🏁 Script executed:

# Search for write_some in the codebase to understand its behavior
rg "write_some" -B 3 -A 3

Repository: cppalliance/beast2

Length of output: 6914


🏁 Script executed:

# Look for buffer_param.hpp file
find . -name "*buffer_param*" -type f

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

# Look at http_stream.hpp which also uses write_some to compare patterns
cat include/boost/beast2/server/http_stream.hpp

Repository: cppalliance/beast2

Length of output: 10046


🏁 Script executed:

# Search for corosio socket or write_some documentation
rg "corosio" -A 5 -B 2 | head -100

Repository: cppalliance/beast2

Length of output: 4328


🏁 Script executed:

# Look for tests that might show expected behavior
find . -name "*test*" -type f | grep -E "(corosio|route_handler)" | head -10

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

# Check if buffer_param is documented in any includes
rg "buffer_param" -B 2 -A 5 | head -60

Repository: cppalliance/beast2

Length of output: 1410


write_impl silently truncates buffer sequences and does not handle partial writes.

The copy_to(bufs, max_bufs=16) call will drop any buffers beyond 16 without indication, causing data loss. Additionally, write_some may return fewer bytes written than requested, but the code ignores the written value and proceeds to the next buffer without retrying, causing further data loss on partial writes.

🐛 Proposed fix
-    http::route_task
-    write_impl(capy::buffer_param buffers) override
-    {
-        // Extract buffers from type-erased wrapper
-        constexpr std::size_t max_bufs = 16;
-        capy::mutable_buffer bufs[max_bufs];
-        auto const n = buffers.copy_to(bufs, max_bufs);
-
-        // Write all buffers
-        for(std::size_t i = 0; i < n; ++i)
-        {
-            auto [ec, written] = co_await stream.write_some(
-                capy::const_buffer(
-                    bufs[i].data(),
-                    bufs[i].size()));
-            if(ec)
-                co_return ec;
-        }
-
-        co_return {};
-    }
+    http::route_task
+    write_impl(capy::buffer_param buffers) override
+    {
+        // Extract buffers from type-erased wrapper
+        std::size_t max_bufs = 16;
+        capy::mutable_buffer bufs[16];
+        auto n = buffers.copy_to(bufs, max_bufs);
+
+        std::size_t buf_index = 0;
+        while(buf_index < n)
+        {
+            // If we hit the 16-buffer limit, extract more
+            if(buf_index >= max_bufs && buffers.buffer_count() > n)
+            {
+                n += buffers.copy_to(bufs, max_bufs, n);
+            }
+
+            std::size_t offset = 0;
+            auto const* data = static_cast<char const*>(bufs[buf_index].data());
+            while(offset < bufs[buf_index].size())
+            {
+                auto [ec, written] = co_await stream.write_some(
+                    capy::const_buffer(
+                        data + offset,
+                        bufs[buf_index].size() - offset));
+                if(ec)
+                    co_return ec;
+                offset += written;
+            }
+            ++buf_index;
+        }
+
+        co_return {};
+    }
🤖 Prompt for AI Agents
In `@include/boost/beast2/server/route_handler_corosio.hpp` around lines 115 -
131, write_impl currently truncates if buffers.copy_to(bufs, max_bufs) exceeds
max_bufs and ignores partial writes from stream.write_some; fix by processing
the full buffer_sequence in chunks: repeatedly copy up to max_bufs via
buffers.copy_to(...) (using an offset/iterator or repeatedly removing copied
prefix) and for each capy::mutable_buffer attempt to write until its bytes are
consumed by looping on stream.write_some and advancing the buffer by the
returned written count on partial writes, retrying on transient errors and
returning ec on fatal errors; update references in write_impl to use
buffers.copy_to, capy::mutable_buffer bufs[], and stream.write_some when
implementing this chunked+retry logic.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@CMakePresets.json`:
- Around line 2-7: CMakePresets.json currently sets cmakeMinimumRequired to
3.25.0 which conflicts with the project's CMakeLists.txt that declares
cmake_minimum_required(VERSION 3.8...3.20); fix by making these consistent:
either lower CMakePresets.json cmakeMinimumRequired to 3.8.0 to match the
existing CMakeLists.txt, or update the CMakeLists.txt cmake_minimum_required
declaration to 3.25.0 (and adjust any code requiring newer CMake features) so
both CMakePresets.json and the cmake_minimum_required(VERSION ...) in
CMakeLists.txt reference the same minimum version.

Comment on lines +2 to +7
"version": 6,
"cmakeMinimumRequired": {
"major": 3,
"minor": 25,
"patch": 0
},

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Check for cmake_minimum_required declarations
rg -n 'cmake_minimum_required' --glob 'CMakeLists.txt'

# Check CI/CD configurations
rg -n 'cmake' --glob '*.yml' --glob '*.yaml' -i

# Check README documentation
rg -n 'cmake' --glob '*README*' -i

# Also check if there are any other CMake configuration files
fd -e 'CMakeLists.txt' -o -e 'CMakePresets.json'

Repository: cppalliance/beast2

Length of output: 450


CMakePresets.json requires CMake 3.25.0 but CMakeLists.txt only supports up to 3.20.

The schema version 6 and cmakeMinimumRequired 3.25.0 in CMakePresets.json create an incompatibility: the main CMakeLists.txt declares cmake_minimum_required(VERSION 3.8...3.20), which allows CMake up to 3.20. Users with CMake 3.21–3.24 will be unable to use presets while the main build remains compatible.

Either update CMakeLists.txt to require CMake 3.25.0, or lower CMakePresets.json's cmakeMinimumRequired to match the actual project minimum (3.8).

🤖 Prompt for AI Agents
In `@CMakePresets.json` around lines 2 - 7, CMakePresets.json currently sets
cmakeMinimumRequired to 3.25.0 which conflicts with the project's CMakeLists.txt
that declares cmake_minimum_required(VERSION 3.8...3.20); fix by making these
consistent: either lower CMakePresets.json cmakeMinimumRequired to 3.8.0 to
match the existing CMakeLists.txt, or update the CMakeLists.txt
cmake_minimum_required declaration to 3.25.0 (and adjust any code requiring
newer CMake features) so both CMakePresets.json and the
cmake_minimum_required(VERSION ...) in CMakeLists.txt reference the same minimum
version.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
example/client/get/main.cpp (1)

412-418: Use buffers::const_buffer instead of capy::const_buffer in stdout_sink::on_write.

The http::sink::on_write interface requires buffers::const_buffer as the parameter type, not capy::const_buffer. Update the method signature to match the interface contract.

include/boost/beast2/log_service.hpp (1)

22-24: Public class needs a docstring.

log_service is a public header class without a documentation comment.

✍️ Suggested addition
-class BOOST_SYMBOL_VISIBLE
-    log_service
-{
+/** Logging service interface for named sections and thresholds. */
+class BOOST_SYMBOL_VISIBLE
+    log_service
+{
As per coding guidelines, please add the docstring.
🤖 Fix all issues with AI agents
In `@include/boost/beast2/server/http_stream.hpp`:
- Around line 63-66: The call to use_log_service at line 118 is passing an
http::application& but use_log_service expects an http::polystore&, causing a
type mismatch; update the http_stream code to pass the correct polystore
reference (e.g., obtain the polystore from the provided application or change
the constructor to accept http::polystore& instead of http::application&), then
call use_log_service(polystore) (or cast only if a guaranteed, safe conversion
exists); locate the http_stream constructor and the call to use_log_service and
update the parameter/argument to use http::polystore& consistent with how
serve_log_admin.cpp passes it.

In `@include/boost/beast2/server/route_handler_corosio.hpp`:
- Around line 95-126: Add a public docstring to the end() coroutine describing
its purpose (drain remaining serializer output and close the serializer stream)
and its return semantics since it is a public virtual override; then verify and
fix the consume logic: ensure that capy::write in end() either is
documented/assumed to perform full writes (so keep using
capy::buffer_size(*cbs)) or, if capy::write can return partial writes like
write_some(), change the drain loop to consume the actual bytes written (use the
returned n from auto [ec, n] = co_await capy::write(stream, *cbs) and call
serializer.consume(n), and loop on the same buffer until serializer indicates it
is fully consumed) — reference symbols: end(), serializer,
serializer.consume(...), capy::write(...), send() for consistency with
partial-write handling.
♻️ Duplicate comments (1)
include/boost/beast2/server/route_handler_corosio.hpp (1)

44-93: Partial writes are not handled correctly in send().

The loop at lines 76-82 calls stream.write_some() but ignores the returned byte count n. Lines 85-87 then calculate written by summing buffer sizes rather than actual bytes written. If write_some() performs a partial write, serializer.consume(written) will advance past data that wasn't actually sent, causing data corruption.

This is a duplicate of a prior review comment that remains unaddressed.

🐛 Proposed fix to handle partial writes
-            // Write all buffers
-            for(auto const& buf : bufs)
-            {
-                auto [ec, n] = co_await stream.write_some(
-                    capy::const_buffer(buf.data(), buf.size()));
-                if(ec)
-                    co_return ec;
-            }
-
-            // Calculate total size written
-            std::size_t written = 0;
-            for(auto const& buf : bufs)
-                written += buf.size();
+            // Write all buffers, handling partial writes
+            std::size_t written = 0;
+            for(auto const& buf : bufs)
+            {
+                auto const* data = static_cast<char const*>(buf.data());
+                std::size_t remaining = buf.size();
+                while(remaining > 0)
+                {
+                    auto [ec, n] = co_await stream.write_some(
+                        capy::const_buffer(data, remaining));
+                    if(ec)
+                        co_return ec;
+                    data += n;
+                    remaining -= n;
+                    written += n;
+                }
+            }
🧹 Nitpick comments (14)
example/server/serve_log_admin.hpp (1)

20-22: Missing docstring for public function.

Per coding guidelines, docstrings are required for all classes and functions in public headers in non-detail namespaces. The serve_log_admin function lacks documentation explaining its purpose, parameters, and return value.

📝 Suggested documentation
+/** Returns a router that serves log administration endpoints.
+ *
+ * @param ps The polystore providing required services
+ * @return A configured router for log administration
+ */
 router
 serve_log_admin(
     http::polystore& ps);
src/http_server.cpp (2)

24-44: Missing high-level implementation overview.

Per coding guidelines, there should be a /* */ section after the includes providing a high-level overview of how the implementation works. This file implements the HTTP server session handling and would benefit from documentation explaining the worker pattern, session lifecycle, and request/response flow.
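
✍️ One possible shape for such an overview (wording is a sketch based on the description above, not the actual design notes):

/*
    Overview: HTTP server session handling. Workers handle accepted
    connections; each session reads a request, dispatches it through the
    router, and writes the response. (Placeholder wording; replace with
    the real worker pattern, session lifecycle, and request/response
    flow.)
*/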


206-208: Suppressing unused variable for disabled code block.

The (void)rv; cast suppresses the unused variable warning since the code that uses rv is disabled with #if 0. This is acceptable as a temporary measure, but consider adding a TODO comment indicating the intended completion of the response handling logic.

💡 Suggested improvement
         auto rv = co_await impl_->router.dispatch(
             w.rp.req.method(), w.rp.url, w.rp);
-        (void)rv;
+        (void)rv; // TODO: Use rv when do_respond is implemented
 #if 0
example/client/get/main.cpp (2)

32-60: Missing high-level implementation overview.

Per coding guidelines, there should be a /* */ section after the includes providing a high-level overview of how the implementation works. The comment on lines 32-34 is helpful but a more detailed overview of the session class architecture (redirect handling, SSL support, compression) would be beneficial.
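
✍️ One possible shape for such an overview (a sketch only; fill in from the actual session class):

/*
    Overview: example HTTP GET client. A session connects to the target,
    optionally over SSL, sends the request, follows redirects, and
    decompresses the response body. (Placeholder wording; expand with
    the real session class architecture.)
*/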


445-445: Consider renaming capy_ctx to reflect the type change.

The variable is now of type http::polystore, but the name capy_ctx no longer reflects this. Consider renaming to http_ctx or simply ctx for consistency with the migration.

💡 Suggested rename
-    http::polystore capy_ctx;
+    http::polystore http_ctx;

This would require updating all references to capy_ctx throughout the file (lines 454, 459, 461, 465, 479).

example/client/visit/main.cpp (1)

19-20: Add a brief file-level overview after the includes.

This file jumps straight into code without the requested /* */ overview block. A short summary would satisfy the documentation guideline.

✍️ Suggested addition
 #include <boost/url/url_view.hpp>
 
+/*
+    Overview: resolves a URL, connects a TCP socket, and drives a simple
+    request/response visit loop.
+*/
+
 //------------------------------------------------
As per coding guidelines, please add the overview block.
include/boost/beast2/log_service.hpp (1)

13-16: Add a brief file-level overview after the includes.

✍️ Suggested addition
 #include <boost/core/detail/string_view.hpp>
 #include <vector>
 
+/*
+    Overview: declares the log_service interface and the polystore-backed
+    accessor used to create or retrieve it.
+*/
+
 namespace boost {
As per coding guidelines, please add the overview block.
src/log_service.cpp (1)

10-12: Add a brief file-level overview after the includes.

✍️ Suggested addition
 #include <boost/beast2/log_service.hpp>
 #include <boost/beast2/logger.hpp>
 
+/*
+    Overview: defines the log_service implementation and attaches it to
+    a polystore via use_log_service().
+*/
+
 namespace boost {
As per coding guidelines, please add the overview block.
include/boost/beast2/server/http_stream.hpp (1)

15-24: Add a brief file-level overview after the includes.

✍️ Suggested addition
 #include <boost/http/server/router.hpp>
 #include <boost/http/error.hpp>
 #include <boost/url/parse.hpp>
 
+/*
+    Overview: coroutine-driven HTTP stream that parses requests, routes them,
+    and serializes responses over a Corosio socket.
+*/
+
 namespace boost {
As per coding guidelines, please add the overview block.
example/server/serve_log_admin.cpp (1)

10-14: Add a brief file-level overview after the includes.

✍️ Suggested addition
 #include <boost/beast2/error.hpp>
 #include <boost/beast2/format.hpp>
 #include <boost/beast2/log_service.hpp>
 
+/*
+    Overview: example log admin handlers that render a log UI and apply
+    submitted logging configuration changes.
+*/
+
 namespace boost {
As per coding guidelines, please add the overview block.
example/server/main.cpp (1)

10-26: Add a brief file-level overview after the includes.

✍️ Suggested addition
 #include <boost/json/parser.hpp>
 #include <boost/url/ipv4_address.hpp>
 #include <iostream>
 
+/*
+    Overview: example HTTP server wiring that installs services, configures
+    routing, and serves static content.
+*/
+
 #include <boost/beast2/server/route_handler_corosio.hpp>
As per coding guidelines, please add the overview block.
include/boost/beast2/server/route_handler_corosio.hpp (2)

25-42: Class corosio_route_params has a docstring, but public members lack documentation.

The class has a brief docstring at line 25-26, but the public member stream (line 35) and the constructor lack documentation explaining their purpose and usage. As per coding guidelines, docstrings are required for public interfaces in non-detail namespaces.

Consider adding docstrings for:

  • stream_type typedef
  • stream public member reference
  • The constructor explaining ownership semantics

128-164: The write_impl() method lacks a docstring and the implementation looks correct.

The protected override write_impl() should have a brief docstring explaining its role in the streaming API.

The implementation correctly:

  • Initializes streaming on first call (lines 133-134)
  • Loops until all input buffers are consumed (lines 137-141)
  • Copies data to the serializer stream and commits (lines 144-146)
  • Handles need_data errors appropriately (lines 152-154)
  • Uses capy::write which should handle complete writes (line 157)

The previous concern about buffer truncation with copy_to(bufs, max_bufs=16) appears to have been addressed—the new implementation uses buffers.data() and buffers.consume() pattern which properly handles buffer sequences of any size.

example/client/jsonrpc/jsonrpc/client.cpp (1)

10-27: Add required file overview comment after includes.

This .cpp lacks the high‑level /* */ overview required immediately after includes. Please add a brief summary of the file’s responsibilities (stream_impl + JSON-RPC request/response flow). As per coding guidelines.

✍️ Suggested addition
 #include <boost/http/zlib/inflate.hpp>
 #include <boost/url/url_view.hpp>
 
+/*
+ * JSON-RPC client implementation over Beast2 HTTP.
+ * stream_impl manages TCP/TLS connectivity, and request_op drives the
+ * coroutine-based request/response flow with JSON streaming.
+ */
+
 using namespace boost;

Comment on lines 63 to 66
 http_stream(
-    capy::application& app,
+    http::application& app,
     corosio::socket& sock,
     http::flat_router& routes);

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find application.hpp and examine the class
find . -type f -name "application.hpp" | head -5

Repository: cppalliance/beast2

Length of output: 100


🏁 Script executed:

#!/bin/bash
# Locate the http_stream.hpp file to see the actual constructor code
find . -type f -name "http_stream.hpp" | grep -E "server|http"

Repository: cppalliance/beast2

Length of output: 107


🏁 Script executed:

#!/bin/bash
# Search for use_log_service definition to understand its requirements
rg -n "use_log_service" --type cpp --type hpp -B2 -A5

Repository: cppalliance/beast2

Length of output: 89


🏁 Script executed:

#!/bin/bash
# Search for use_log_service definition without file type restriction
rg -n "use_log_service" -B2 -A5

Repository: cppalliance/beast2

Length of output: 2277


🏁 Script executed:

#!/bin/bash
# Examine application.hpp to check for polystore relationship
cat ./include/boost/beast2/application.hpp | head -100

Repository: cppalliance/beast2

Length of output: 614


🏁 Script executed:

#!/bin/bash
# Examine http_stream.hpp constructor at the specified lines
cat -n ./include/boost/beast2/server/http_stream.hpp | sed -n '50,120p'

Repository: cppalliance/beast2

Length of output: 2456


🏁 Script executed:

#!/bin/bash
# Get full application.hpp content to check for polystore relationship
wc -l ./include/boost/beast2/application.hpp

Repository: cppalliance/beast2

Length of output: 103


🏁 Script executed:

#!/bin/bash
# Find polystore definition
rg -n "class\s+polystore|struct\s+polystore" -B2 -A10

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Search for any inheritance or conversion in application.hpp
rg -n "application\s*:" ./include/boost/beast2/application.hpp -A5

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Search for operator polystore or conversion methods in application
rg -n "operator.*polystore|to_polystore|get_polystore"

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Read entire application.hpp (only 28 lines)
cat -n ./include/boost/beast2/application.hpp

Repository: cppalliance/beast2

Length of output: 810


🏁 Script executed:

#!/bin/bash
# Search for polystore in any form (case-insensitive, partial matches)
rg -i "polystore" -n

Repository: cppalliance/beast2

Length of output: 2651


🏁 Script executed:

#!/bin/bash
# Check http_stream.hpp full context around line 118 - constructor implementation
cat -n ./include/boost/beast2/server/http_stream.hpp | sed -n '110,130p'

Repository: cppalliance/beast2

Length of output: 712


🏁 Script executed:

#!/bin/bash
# Search for http::application definition - might be in http namespace
rg -n "class\s+application|struct\s+application" -B2 -A5

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Check if application is a typedef or alias for polystore
rg -n "using\s+application|typedef.*application"

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Look for any http/ includes in http_stream.hpp to find where application comes from
head -50 ./include/boost/beast2/server/http_stream.hpp | rg -n "include.*http"

Repository: cppalliance/beast2

Length of output: 391


🏁 Script executed:

#!/bin/bash
# The application comes from boost/http library, not beast2
# Let's check if there's any conversion or relationship
# First, let's see if we can find where http::application is defined or how it relates to polystore

# Check if there's a boost/http directory in the repo
find . -type d -name "http" | grep -v node_modules

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Search for any cast or conversion in the http_stream constructor implementation
cat -n ./include/boost/beast2/server/http_stream.hpp | sed -n '114,135p'

Repository: cppalliance/beast2

Length of output: 751


🏁 Script executed:

#!/bin/bash
# Check if there's an operator or cast that converts application to polystore
rg -n "operator.*polystore|static_cast.*polystore|reinterpret_cast.*polystore|dynamic_cast.*polystore"

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Check if there's any documentation or comments about application/polystore relationship
rg -n "application.*polystore|polystore.*application" -C2

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Let's check the actual compilation by looking for any build logs or CI config
find . -name "*.yaml" -o -name "*.yml" -o -name "Makefile" -o -name "CMakeLists.txt" | head -10

Repository: cppalliance/beast2

Length of output: 347


🏁 Script executed:

#!/bin/bash
# Search in examples to see how application is actually used with use_log_service
rg -n "http::application" -A3 -B3

Repository: cppalliance/beast2

Length of output: 2103


🏁 Script executed:

#!/bin/bash
# Check if there are any compilation/test results that would show if this compiles
grep -r "use_log_service" example/ --include="*.cpp" -B5 -A5

Repository: cppalliance/beast2

Length of output: 1166


🏁 Script executed:

#!/bin/bash
# Let's verify if http::application is used anywhere with services that might require polystore
rg -n "install.*app|app\..*install|service.*app" example/server/main.cpp -B2 -A2

Repository: cppalliance/beast2

Length of output: 847


Fix type mismatch: use_log_service expects http::polystore&, not http::application&.

At line 118, use_log_service(app) is called with http::application&, but the function signature requires http::polystore&. The examples (serve_log_admin.cpp) pass http::polystore& directly. Either cast app to a compatible polystore reference, or verify that http::application from boost/http provides an implicit conversion to http::polystore.
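
💡 A sketch of the constructor-change option (the parameter name ps is an assumption; only the types named in this comment are used, and the real header may differ):

-    http_stream(
-        http::application& app,
-        corosio::socket& sock,
-        http::flat_router& routes);
+    http_stream(
+        http::polystore& ps,
+        corosio::socket& sock,
+        http::flat_router& routes);

with the constructor body then calling use_log_service(ps), matching how serve_log_admin.cpp passes its polystore.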

🤖 Prompt for AI Agents
In `@include/boost/beast2/server/http_stream.hpp` around lines 63 - 66, The call
to use_log_service at line 118 is passing an http::application& but
use_log_service expects an http::polystore&, causing a type mismatch; update the
http_stream code to pass the correct polystore reference (e.g., obtain the
polystore from the provided application or change the constructor to accept
http::polystore& instead of http::application&), then call
use_log_service(polystore) (or cast only if a guaranteed, safe conversion
exists); locate the http_stream constructor and the call to use_log_service and
update the parameter/argument to use http::polystore& consistent with how
serve_log_admin.cpp passes it.

Comment on lines +95 to +126
http::route_task
end() override
{
// Close the serializer stream if active
if(srs_.is_open())
srs_.close();

// Drain all remaining serializer output
while(!serializer.is_done())
{
auto cbs = serializer.prepare();
if(cbs.has_error())
{
if(cbs.error() == http::error::need_data)
continue;
co_return cbs.error();
}

if(capy::buffer_size(*cbs) == 0)
{
serializer.consume(0);
continue;
}

auto [ec, n] = co_await capy::write(stream, *cbs);
if(ec)
co_return ec;
serializer.consume(capy::buffer_size(*cbs));
}

co_return {};
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, let's check the file structure and namespace
head -50 include/boost/beast2/server/route_handler_corosio.hpp

Repository: cppalliance/beast2

Length of output: 1382


🏁 Script executed:

#!/bin/bash
# Look for the class definition and check if it's in a detail namespace
rg -n "namespace|class.*route_handler" include/boost/beast2/server/route_handler_corosio.hpp | head -20

Repository: cppalliance/beast2

Length of output: 104


🏁 Script executed:

#!/bin/bash
# Search for capy::write to understand its semantics
rg -n "capy::write" --type cpp -A 3 -B 2 | head -80

Repository: cppalliance/beast2

Length of output: 1128


🏁 Script executed:

#!/bin/bash
# Look for documentation or comments about capy::write
find . -type f \( -name "*.hpp" -o -name "*.cpp" \) -exec grep -l "capy::write" {} \; 2>/dev/null | head -10

Repository: cppalliance/beast2

Length of output: 117


🏁 Script executed:

#!/bin/bash
# Let's get more context around the end() method to see its access specifier
sed -n '85,130p' include/boost/beast2/server/route_handler_corosio.hpp

Repository: cppalliance/beast2

Length of output: 1175


🏁 Script executed:

#!/bin/bash
# Search for the class containing end() to understand its structure
rg -n "class corosio_route_params|http::route_task|end\(" include/boost/beast2/server/route_handler_corosio.hpp -B 2 -A 1 | head -30

Repository: cppalliance/beast2

Length of output: 469


🏁 Script executed:

#!/bin/bash
# Check if capy::write documentation exists in the repo
find . -type f \( -name "*.hpp" -o -name "*.md" \) -exec grep -l "capy::write" {} \; 2>/dev/null

Repository: cppalliance/beast2

Length of output: 117


🏁 Script executed:

#!/bin/bash
# Look at the base class to understand if end() is part of the interface
rg -n "route_handler|route_task" include/boost/http/server/route_handler.hpp 2>/dev/null | head -20

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Check for any existing docstrings in the file
rg -n "/\*\*|///" include/boost/beast2/server/route_handler_corosio.hpp | head -10

Repository: cppalliance/beast2

Length of output: 124


🏁 Script executed:

#!/bin/bash
# Search for capy::write definition to understand its contract
find . -name "*.hpp" -path "*/capy/*" -exec grep -l "write" {} \; 2>/dev/null

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Look at how write is used elsewhere in the codebase
rg -n "capy::write" --type cpp -B 3 -A 5 | head -100

Repository: cppalliance/beast2

Length of output: 1550


🏁 Script executed:

#!/bin/bash
# Check if there are any comments or documentation about partial writes
rg -n "partial|consume" include/boost/beast2/server/route_handler_corosio.hpp -B 2 -A 2

Repository: cppalliance/beast2

Length of output: 972


🏁 Script executed:

#!/bin/bash
# Check the send method implementation to see if it has similar pattern
sed -n '45,95p' include/boost/beast2/server/route_handler_corosio.hpp

Repository: cppalliance/beast2

Length of output: 1567


🏁 Script executed:

#!/bin/bash
# Let's check if there are any docstrings for other override methods in the same class
sed -n '25,200p' include/boost/beast2/server/route_handler_corosio.hpp | grep -B 3 "override"

Repository: cppalliance/beast2

Length of output: 274


🏁 Script executed:

#!/bin/bash
# Search for any documentation about how capy::write is intended to be used
rg -i "write.*guarantee|complete.*write" --type cpp --type hpp | head -20

Repository: cppalliance/beast2

Length of output: 89


🏁 Script executed:

#!/bin/bash
# Let's check the headers and see if capy::write is documented or if there's a contract
cat include/boost/capy/write.hpp 2>/dev/null || echo "File not found"

Repository: cppalliance/beast2

Length of output: 76


🏁 Script executed:

#!/bin/bash
# Search for any usage documentation or tests that might show capy::write behavior
rg -n "write_some|capy::write" --type cpp -A 2 -B 2 | grep -A 5 -B 5 "write_some" | head -40

Repository: cppalliance/beast2

Length of output: 2591


🏁 Script executed:

#!/bin/bash
# Check the difference in how send() uses write_some vs how end() uses capy::write
sed -n '75,95p' include/boost/beast2/server/route_handler_corosio.hpp

Repository: cppalliance/beast2

Length of output: 607


🏁 Script executed:

#!/bin/bash
# Let's check what other methods in the file look like to see if docstrings are used for methods
sed -n '1,200p' include/boost/beast2/server/route_handler_corosio.hpp | grep -A 5 "http::route_task"

Repository: cppalliance/beast2

Length of output: 608


🏁 Script executed:

#!/bin/bash
# Let's look at the base class http::route_params to understand the interface
find . -name "route_handler.hpp" -path "*/http/server/*" -exec head -100 {} \;

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Check if the base class methods have docstrings
find . -name "route_handler.hpp" -path "*/http/server/*" -exec grep -A 10 "virtual.*route_task" {} \; 2>/dev/null

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Let's get the full context of write_impl to see the complete pattern
sed -n '128,165p' include/boost/beast2/server/route_handler_corosio.hpp

Repository: cppalliance/beast2

Length of output: 1131


🏁 Script executed:

#!/bin/bash
# Check if capy is an external library with documentation
find . -name "*.md" -o -name "*.rst" | xargs grep -l "capy::write" 2>/dev/null | head -5

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Check the return value usage pattern - is `n` used anywhere else?
rg -n "co_await.*capy::write" -A 3 include/boost/beast2/server/route_handler_corosio.hpp

Repository: cppalliance/beast2

Length of output: 434


🏁 Script executed:

#!/bin/bash
# Let's check if there are any other override methods in the codebase with docstrings to understand the pattern
find . -path "*/boost/*" -name "*.hpp" -exec grep -B 5 "override" {} \; 2>/dev/null | grep -B 5 "/\*\*" | head -40

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

#!/bin/bash
# Let's look at what happens with the `n` return value - is it used anywhere?
rg -n "auto \[ec, n\]" include/boost/beast2/server/route_handler_corosio.hpp -A 5

Repository: cppalliance/beast2

Length of output: 689


🏁 Script executed:

#!/bin/bash
# Search for information about whether capy is external and what its write contract is
find . -name "*.txt" -o -name "*.md" | xargs grep -i "capy" 2>/dev/null | head -20

Repository: cppalliance/beast2

Length of output: 215


🏁 Script executed:

#!/bin/bash
# Compare the pattern: in send(), write_some is called per buffer in a loop
# In end() and write_impl(), capy::write is called with full buffer and n is ignored
# Let's see if this pattern appears elsewhere in the codebase
rg -n "capy::write" -A 3 --type cpp -B 1

Repository: cppalliance/beast2

Length of output: 984


🏁 Script executed:

#!/bin/bash
# Check if `n` is ever used after being captured
rg -n "auto \[ec, n\] = co_await" -A 10 --type cpp | grep -A 10 "capy::write"

Repository: cppalliance/beast2

Length of output: 1805


Add a docstring to the end() method.

The method is a public virtual override in a non-detail namespace and requires a docstring for documentation generation, per the coding guidelines.

Regarding the capy::write semantics: The code captures the return value n (bytes written) but never uses it, always consuming the full buffer size via serializer.consume(capy::buffer_size(*cbs)). This differs from the send() method which explicitly handles partial writes using write_some(). Verify that capy::write guarantees complete writes; if it can return partial writes, the consume call must use the actual bytes written instead of the buffer size.
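
💡 A sketch of the partial-write-safe variant (this assumes capy::write can report fewer bytes written than requested and that serializer.prepare() re-surfaces unconsumed data; only symbols already shown in this method are used):

    // Drain remaining serializer output, consuming only the bytes
    // actually written so nothing is skipped on a partial write.
    while(! serializer.is_done())
    {
        auto cbs = serializer.prepare();
        if(cbs.has_error())
        {
            if(cbs.error() == http::error::need_data)
                continue;
            co_return cbs.error();
        }

        if(capy::buffer_size(*cbs) == 0)
        {
            serializer.consume(0);
            continue;
        }

        auto [ec, n] = co_await capy::write(stream, *cbs);
        if(ec)
            co_return ec;
        serializer.consume(n); // not capy::buffer_size(*cbs)
    }

If capy::write is instead documented to always perform complete writes, the existing consume(capy::buffer_size(*cbs)) is fine and only the docstring is needed.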

🤖 Prompt for AI Agents
In `@include/boost/beast2/server/route_handler_corosio.hpp` around lines 95 - 126,
Add a public docstring to the end() coroutine describing its purpose (drain
remaining serializer output and close the serializer stream) and its return
semantics since it is a public virtual override; then verify and fix the consume
logic: ensure that capy::write in end() either is documented/assumed to perform
full writes (so keep using capy::buffer_size(*cbs)) or, if capy::write can
return partial writes like write_some(), change the drain loop to consume the
actual bytes written (use the returned n from auto [ec, n] = co_await
capy::write(stream, *cbs) and call serializer.consume(n), and loop on the same
buffer until serializer indicates it is fully consumed) — reference symbols:
end(), serializer, serializer.consume(...), capy::write(...), send() for
consistency with partial-write handling.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
example/client/burl/main.cpp (1)

10-41: Add a file-level implementation overview after the includes.

This file contains substantial request/redirect/retry and streaming logic but lacks the required high-level /* */ overview comment. As per coding guidelines, non-trivial implementation files should include a post-include overview.

📝 Suggested addition
 #include <boost/url/url.hpp>

 #include <cstdlib>

+/*
+ * High-level overview:
+ * - Builds HTTP requests from CLI options and manages redirects/retries.
+ * - Streams response bodies with optional decompression and progress reporting.
+ * - Wires parser/serializer services for the HTTP pipeline.
+ */
 namespace beast2   = boost::beast2;
example/server/main.cpp (1)

10-31: Add a file-level implementation overview after the includes.

This file contains substantial server setup and routing logic but lacks the required high-level /* */ overview comment. As per coding guidelines, non-trivial implementation files should include a post-include overview.

📝 Suggested addition
 #include <boost/beast2/server/route_handler_corosio.hpp>

+/*
+ * High-level overview:
+ * - Parses CLI args and configures HTTP services.
+ * - Builds the router with static file serving and starts http_server.
+ * - Runs the corosio event loop and handles errors.
+ */
 namespace boost {
example/server/serve_log_admin.cpp (1)

10-14: Add a file-level implementation overview after the includes.

This file defines handler classes and routing helpers but lacks the required high-level /* */ overview comment. As per coding guidelines, non-trivial implementation files should include a post-include overview.

📝 Suggested addition
 #include <boost/beast2/log_service.hpp>

+/*
+ * High-level overview:
+ * - Defines log admin handlers wired to log_service.
+ * - Provides routing helpers for the log admin UI.
+ */
 namespace boost {
🤖 Fix all issues with AI agents
In `@example/client/burl/message.cpp`:
- Around line 10-13: Add a file-level block comment (/* ... */) immediately
after the includes in message.cpp that provides a short implementation overview
of the non-trivial request/body handling performed in this translation unit;
describe the main responsibilities (e.g., parsing/building messages,
lifetime/ownership of body buffers, error handling strategy, and key
interactions with any public functions or classes defined in message.hpp) so
readers can quickly understand the file's purpose and high-level flow before
diving into the function-level code.

In `@example/client/jsonrpc/jsonrpc/client.cpp`:
- Around lines 23-27: Add a /* */ block comment immediately after the include
block that summarizes the jsonrpc client implementation: describe high-level
flow (async IO loop, request/response JSON-RPC lifecycle), key components to
look at (serialization/deserialization, brotli/zlib decoding, URL handling via
boost::url), concurrency/await semantics and any mutex/strand usage,
error/reconnect/backpressure behavior and retry strategy, and where to find
mappings between outgoing request IDs and response handlers; reference the
jsonrpc client implementation and its async handlers so readers know which
sections to inspect for details.

In `@example/server/serve_log_admin.hpp`:
- Around lines 20-22: The public declaration serve_log_admin(http::polystore& ps)
lacks the required public-header docstring; add a complete Doxygen-style comment
above the function declaration describing the parameter (http::polystore& ps)
and its ownership/expectations, the return type (router) semantics,
thread-safety guarantees, and any exceptions or error conditions that callers
should handle (or state noexcept if applicable), so that serve_log_admin has a
full API docstring per coding guidelines.
♻️ Duplicate comments (3)
CMakePresets.json (1)

2-7: Align CMake preset minimum with project minimum.

This still appears to set cmakeMinimumRequired to 3.25.0; please confirm it matches the project’s cmake_minimum_required to avoid preset incompatibility.

#!/bin/bash
# Verify the minimum CMake version declared by the project.
rg -n 'cmake_minimum_required' --glob 'CMakeLists.txt'

# Show any other CMake preset files to compare.
fd 'CMakePresets\.json|CMakeLists\.txt'
include/boost/beast2/server/route_handler_corosio.hpp (2)

44-75: Handle partial writes from capy::write or verify it guarantees complete writes.

The return value n from capy::write (line 68) is captured but ignored. The code calls serializer.consume(capy::buffer_size(*cbs)) assuming all bytes were written. If capy::write can return fewer bytes than requested (partial write), this will corrupt the response stream by skipping unwritten data.

Either:

  1. Verify capy::write guarantees complete writes (document this assumption), or
  2. Consume only the actual bytes written: serializer.consume(n)

Additionally, per coding guidelines, this public override method requires a docstring documenting its purpose, return semantics, and any preconditions.


78-113: Same partial write concern in write_impl.

Line 106-109 has the same issue: n is captured from capy::write but serializer.consume(capy::buffer_size(*cbs)) uses the full buffer size. Apply the same fix as end().

Per coding guidelines, this protected override method also requires a docstring.
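📝 Possible wording for such a docstring (a suggestion only; adjust the @return description to the actual awaitable type used by these overrides):

/** Finish the streamed response.
 *
 * Drains any remaining serializer output to the connection and closes the
 * serializer stream. Call at most once, after the final write_impl(); the
 * handler must not write again afterwards.
 *
 * @return An awaitable completing with the error code of the final write.
 */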

🧹 Nitpick comments (5)
example/client/jsonrpc/jsonrpc/client.cpp (2)

29-31: Consider adding an implementation overview comment.

Per coding guidelines, files containing non-trivial implementation logic should include a /* */ block comment after the includes providing a high-level overview. This file implements coroutine-based async JSON-RPC client operations with SSL/TCP stream handling, and maintainers would benefit from a brief overview of the request/response flow and the relationship between stream_impl, request_op, and client.


364-364: Consider renaming capy_ctx parameter to match new type.

The parameter type has been updated to http::polystore&, but the parameter name still references the old "capy" naming (capy_ctx). Consider renaming to something like ctx or http_ctx for consistency with the new namespace.

♻️ Suggested rename
 client::
 client(
     urls::url endpoint,
-    http::polystore& capy_ctx,
+    http::polystore& ctx,
     asio::any_io_executor exec,
     asio::ssl::context& ssl_ctx)
     : client(
         std::move(endpoint),
-        capy_ctx,
+        ctx,
         std::unique_ptr<any_stream>(new stream_impl(exec, ssl_ctx)))
 {
 }

 client::
 client(
     urls::url endpoint,
-    http::polystore& capy_ctx,
+    http::polystore& ctx,
     std::unique_ptr<any_stream> stream)
     : stream_(std::move(stream))
     , endpoint_(std::move(endpoint))
-    , sr_(capy_ctx)
-    , pr_(capy_ctx)
+    , sr_(ctx)
+    , pr_(ctx)
     , req_(http::method::post, "/")
 {
     // ... update usages of capy_ctx to ctx

Also applies to: 377-377

example/client/burl/options.cpp (1)

985-988: Multipart form options are still advertised but now always fail.

The --form and --form-string CLI options are still defined (lines 306-311), and the parse_form_option function (lines 100-156) along with form_option_result struct (lines 90-98) remain in the code but are now unreachable. Consider either:

  1. Removing the dead code and CLI options entirely, or
  2. Documenting this as a temporary limitation in the help text

This avoids user confusion when the documented options always fail.
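📝 Whichever option is chosen, a small guard during option parsing could at least fail fast with a clearer message; the has_form_option flag below is hypothetical and only stands in for however the parsed options are tracked.

// Hypothetical check after CLI parsing: reject --form / --form-string up
// front instead of letting the request fail later with a vaguer error.
if(has_form_option)
    throw std::runtime_error(
        "--form and --form-string are currently unsupported: "
        "multipart/form-data uploads were removed");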

include/boost/beast2/server/route_handler_corosio.hpp (1)

30-30: Consider documenting the srs_ member's lifecycle.

The serializer stream member srs_ is initialized lazily in write_impl and closed in end(). A brief comment explaining this lifecycle would help maintainers understand the state management.
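📝 Example of such a comment (wording is a suggestion; the member name srs_ is taken from this review):

/*  Serializer stream used for streamed responses.
    Initialized lazily on the first write_impl() call; drained and closed
    by end(). Must not be used after end() completes.
*/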

example/server/main.cpp (1)

175-184: Consider removing the disabled handler block to reduce noise.

If the stub handler is no longer needed, dropping the #if 0 block keeps the routing section leaner.

🧹 Optional cleanup
         rr.use( "/", http::serve_static( argv[3] ) );
-#if 0
-        rr.use(
-            [](http::route_params& rp) -> capy::task<http::route_result>
-            {
-                co_return co_await rp.send( "Hello, coworld!" );
-            });
-#endif

Comment on lines 10 to 13
#include "message.hpp"
#include "mime_type.hpp"

#include <boost/capy/file.hpp>
#include <boost/http/field.hpp>
#include <boost/system/system_error.hpp>


⚠️ Potential issue | 🟡 Minor

Add a file-level implementation overview after the includes.

This file contains non-trivial request/body handling logic but lacks the required high-level /* */ overview comment. As per coding guidelines, non-trivial implementation files should include a post-include overview.

📝 Suggested addition
 `#include` "message.hpp"

 `#include` <boost/http/field.hpp>

+/*
+ * High-level overview:
+ * - Implements string_body helpers used for request setup.
+ * - Computes Content-Length and starts the serializer for message bodies.
+ */
 namespace capy = boost::capy;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
#include "message.hpp"
#include "mime_type.hpp"
#include <boost/capy/file.hpp>
#include <boost/http/field.hpp>
#include <boost/system/system_error.hpp>
`#include` "message.hpp"
`#include` <boost/http/field.hpp>
/*
* High-level overview:
* - Implements string_body helpers used for request setup.
* - Computes Content-Length and starts the serializer for message bodies.
*/
namespace capy = boost::capy;
🤖 Prompt for AI Agents
In `@example/client/burl/message.cpp` around lines 10-13, add a file-level block
comment (/* ... */) immediately after the includes in message.cpp that provides
a short implementation overview of the non-trivial request/body handling
performed in this translation unit; describe the main responsibilities (e.g.,
parsing/building messages, lifetime/ownership of body buffers, error handling
strategy, and key interactions with any public functions or classes defined in
message.hpp) so readers can quickly understand the file's purpose and high-level
flow before diving into the function-level code.

Comment on lines +23 to 27
#include <boost/json/serialize.hpp>
#include <boost/http/brotli/decode.hpp>
#include <boost/http/core/polystore.hpp>
#include <boost/http/zlib/inflate.hpp>
#include <boost/url/url_view.hpp>

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if the include files exist in the expected locations
find . -type f \( -name "polystore.hpp" -o -name "decode.hpp" -o -name "inflate.hpp" -o -name "url_view.hpp" \) 2>/dev/null | head -20

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

# Also search using rg to locate these headers in the codebase
rg --type hpp "polystore\.hpp|decode\.hpp|inflate\.hpp|url_view\.hpp" -l

Repository: cppalliance/beast2

Length of output: 89


🏁 Script executed:

# Check the actual file around the include section and initial lines
if [ -f "example/client/jsonrpc/jsonrpc/client.cpp" ]; then
  echo "=== File exists. Showing lines 1-50 ==="
  head -50 "example/client/jsonrpc/jsonrpc/client.cpp"
else
  echo "File not found at expected path"
fi

Repository: cppalliance/beast2

Length of output: 1358


🏁 Script executed:

wc -l "example/client/jsonrpc/jsonrpc/client.cpp"

Repository: cppalliance/beast2

Length of output: 107


🏁 Script executed:

# Read a larger section to understand the implementation
sed -n '28,100p' "example/client/jsonrpc/jsonrpc/client.cpp"

Repository: cppalliance/beast2

Length of output: 1847


🏁 Script executed:

# Search for any existing comments that might serve as overview
grep -n "/\*" "example/client/jsonrpc/jsonrpc/client.cpp" | head -10

Repository: cppalliance/beast2

Length of output: 44


🏁 Script executed:

# Check if capy:: is mentioned anywhere in the file (to verify migration claim)
rg "capy::" "example/client/jsonrpc/jsonrpc/client.cpp"

Repository: cppalliance/beast2

Length of output: 231


🏁 Script executed:

# Show full include section with context
sed -n '11,35p' "example/client/jsonrpc/jsonrpc/client.cpp"

Repository: cppalliance/beast2

Length of output: 743


🏁 Script executed:

# Search for all http:: usage in the file
rg "http::" "example/client/jsonrpc/jsonrpc/client.cpp" -n

Repository: cppalliance/beast2

Length of output: 451


🏁 Script executed:

# Check the client.hpp header to understand the class structure
head -80 "example/client/jsonrpc/jsonrpc/client.hpp"

Repository: cppalliance/beast2

Length of output: 2214


Add implementation overview comment after includes.

This file contains non-trivial implementation logic (420 lines of async operations, JSON-RPC protocol handling, serialization logic) and must include a /* */ block comment immediately after the includes that explains how the implementation works, particularly any tricky or non-obvious details. Per coding guidelines, this overview is required for maintainability.

🤖 Prompt for AI Agents
In `@example/client/jsonrpc/jsonrpc/client.cpp` around lines 23-27, add a /* */
block comment immediately after the include block that summarizes the jsonrpc
client implementation: describe high-level flow (async IO loop, request/response
JSON-RPC lifecycle), key components to look at (serialization/deserialization,
brotli/zlib decoding, URL handling via boost::url), concurrency/await semantics
and any mutex/strand usage, error/reconnect/backpressure behavior and retry
strategy, and where to find mappings between outgoing request IDs and response
handlers; reference the jsonrpc client implementation and its async handlers so
readers know which sections to inspect for details.
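📝 Example wording for such an overview (content inferred from this review's description of the file, not from the file itself):

/*
 * High-level overview:
 * - Drives the JSON-RPC request/response cycle over an async TCP/SSL stream.
 * - Serializes requests and parses responses with the http services obtained
 *   from the polystore, decoding brotli/zlib-compressed bodies when present.
 * - Matches outgoing request IDs to their response handlers.
 */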

Comment on lines 20 to +22
 router
 serve_log_admin(
-    capy::polystore& ps);
+    http::polystore& ps);

⚠️ Potential issue | 🟡 Minor

Add a public API docstring for serve_log_admin.

This is a public header declaration without the required docstring describing parameters, return value, thread-safety, and exceptions. As per coding guidelines, public-header functions need complete docstrings.

📝 Suggested docstring
 router
+/** Build routes for the log admin UI.
+ *
+ * @param ps Polystore providing access to log_service.
+ * @return Router configured with log admin endpoints.
+ * @thread_safety Not thread-safe; call during single-threaded setup.
+ * @throws std::exception Propagates exceptions from log_service access or allocation failures.
+ */
 serve_log_admin(
     http::polystore& ps);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-router
-serve_log_admin(
-    http::polystore& ps);
+router
+/** Build routes for the log admin UI.
+ *
+ * @param ps Polystore providing access to log_service.
+ * @return Router configured with log admin endpoints.
+ * @thread_safety Not thread-safe; call during single-threaded setup.
+ * @throws std::exception Propagates exceptions from log_service access or allocation failures.
+ */
+serve_log_admin(
+    http::polystore& ps);
🤖 Prompt for AI Agents
In `@example/server/serve_log_admin.hpp` around lines 20-22, the public
declaration serve_log_admin(http::polystore& ps) lacks the required
public-header docstring; add a complete Doxygen-style comment above the function
declaration describing the parameter (http::polystore& ps) and its
ownership/expectations, the return type (router) semantics, thread-safety
guarantees, and any exceptions or error conditions that callers should handle
(or state noexcept if applicable), so that serve_log_admin has a full API
docstring per coding guidelines.
