feat(trogon-nats): implement Claim Check pattern for oversized payloads#100
PR Summary (Medium Risk)

Overview: Updates all source services (GitHub/GitLab/Slack/Linear/Discord) to construct a claim-check publisher and bounds HTTP request body sizes. No new environment variables were introduced in this diff (it removes the per-service max_body_size variables).

Reviewed by Cursor Bugbot for commit 8a82dba. Bugbot is set up for automated code reviews on this repo.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
Walkthrough

Adds a JetStream claim-check pipeline with object-store traits/impls and mocks, centralizes HTTP body-size limits into a new trogon-std type, removes per-service max_body_size config and Docker env examples, and updates multiple source crates to use claim-check publishing and Axum DefaultBodyLimit.
Sequence Diagrams

```mermaid
sequenceDiagram
participant Client as Client
participant Publisher as ClaimCheckPublisher
participant JS as JetStream
participant Store as ObjectStore
Client->>Publisher: publish(subject, headers, payload)
alt payload.len <= MaxPayload.threshold()
Publisher->>JS: publish(payload, headers)
JS-->>Publisher: ack
else payload.len > MaxPayload.threshold()
Publisher->>Store: put(bucket, key, full_payload)
Store-->>Publisher: success
Publisher->>JS: publish(empty_payload, claim_headers)
JS-->>Publisher: ack
end
Publisher-->>Client: publish_outcome
```

```mermaid
sequenceDiagram
participant Consumer as Consumer
participant Resolver as ClaimResolver
participant Store as ObjectStore
Consumer->>Resolver: resolve_claim(headers, payload)
alt claim header present
Resolver->>Store: get(bucket, key)
Store-->>Resolver: payload_bytes
Resolver-->>Consumer: reconstructed_payload
else no claim header
Resolver-->>Consumer: original_payload
end
```
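The size-based routing in the publisher diagram can be sketched as follows. This is a minimal illustration with hypothetical header names and a plain `HashMap` standing in for async_nats's `HeaderMap`; the crate's actual types and constants differ:

```rust
use std::collections::HashMap;

// Hypothetical claim header names; the real crate defines its own constants.
const HEADER_CLAIM_BUCKET: &str = "Trogon-Claim-Bucket";
const HEADER_CLAIM_KEY: &str = "Trogon-Claim-Key";

enum Routed {
    // Payload fits under the server limit: publish it inline.
    Inline(Vec<u8>),
    // Oversized: store the payload in the object store and publish an
    // empty message carrying only the claim headers.
    Claim(HashMap<String, String>),
}

fn route(payload: Vec<u8>, threshold: usize, bucket: &str, key: &str) -> Routed {
    if payload.len() <= threshold {
        Routed::Inline(payload)
    } else {
        let mut headers = HashMap::new();
        headers.insert(HEADER_CLAIM_BUCKET.to_string(), bucket.to_string());
        headers.insert(HEADER_CLAIM_KEY.to_string(), key.to_string());
        Routed::Claim(headers)
    }
}
```

A consumer reverses the decision: if the claim headers are present, it fetches the payload from the object store; otherwise it uses the message body as-is.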
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Force-pushed: c60849e → 903a711
Force-pushed: 4623703 → 1f10d0a
Force-pushed: fcb5882 → 7847650
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
rsworkspace/crates/trogon-source-slack/src/config.rs (1)
11-22: ⚠️ Potential issue | 🟡 Minor — Stale documentation: remove reference to `SLACK_MAX_BODY_SIZE`.

Line 20 still documents `SLACK_MAX_BODY_SIZE` as a configurable environment variable, but this field has been removed from the config struct and is no longer parsed. This will mislead operators into thinking they can configure the body size limit.

📝 Proposed fix

```diff
 /// Configuration for the Slack Events API webhook server.
 ///
 /// Resolved from environment variables:
 /// - `SLACK_SIGNING_SECRET`: Slack app signing secret (**required**)
 /// - `SLACK_WEBHOOK_PORT`: HTTP listening port (default: 3000)
 /// - `SLACK_SUBJECT_PREFIX`: NATS subject prefix (default: `slack`)
 /// - `SLACK_STREAM_NAME`: JetStream stream name (default: `SLACK`)
 /// - `SLACK_STREAM_MAX_AGE_SECS`: max age of messages in the JetStream stream in seconds (default: 604800 / 7 days)
 /// - `SLACK_NATS_ACK_TIMEOUT_SECS`: NATS ack timeout in seconds (default: 10)
-/// - `SLACK_MAX_BODY_SIZE`: maximum webhook body size in bytes (default: 1048576 / 1 MB)
 /// - `SLACK_TIMESTAMP_MAX_DRIFT_SECS`: max allowed clock drift for request timestamps in seconds (default: 300 / 5 min)
 /// - Standard `NATS_*` variables for NATS connection (see `trogon-nats`)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-source-slack/src/config.rs` around lines 11 - 22, Remove the stale "SLACK_MAX_BODY_SIZE" entry from the module doc comment so the documentation matches the current config struct (e.g., remove the bullet referencing `SLACK_MAX_BODY_SIZE` in the docblock that lists resolved environment variables for the Slack webhook server); ensure the remaining bullets reflect only actual fields parsed by the config implementation (e.g., the `SLACK_SIGNING_SECRET`, `SLACK_WEBHOOK_PORT`, `SLACK_SUBJECT_PREFIX`, `SLACK_STREAM_NAME`, `SLACK_STREAM_MAX_AGE_SECS`, `SLACK_NATS_ACK_TIMEOUT_SECS`, and `SLACK_TIMESTAMP_MAX_DRIFT_SECS` entries referenced alongside the config type such as `Config`/`SlackConfig`).
🧹 Nitpick comments (4)
rsworkspace/crates/trogon-source-gitlab/src/main.rs (1)
26-37: Consider extracting the bucket name to a constant.

The bucket name `"trogon-claims"` is duplicated on lines 31 and 37. While this is a minor issue, extracting it to a constant would improve maintainability and prevent accidental mismatches if the name needs to change.

♻️ Suggested refactor

```diff
 use {
     acp_telemetry::ServiceName,
     tracing::error,
     tracing::info,
     trogon_nats::connect,
     trogon_nats::jetstream::ClaimCheckPublisher,
     trogon_nats::jetstream::MaxPayload,
     trogon_nats::jetstream::NatsJetStreamClient,
     trogon_nats::jetstream::NatsObjectStore,
     trogon_source_gitlab::GitlabConfig,
     trogon_source_gitlab::constants::DEFAULT_NATS_CONNECT_TIMEOUT,
     trogon_std::env::SystemEnv,
     trogon_std::fs::SystemFs,
 };

+const CLAIM_CHECK_BUCKET: &str = "trogon-claims";
+
 // ...

 let object_store = NatsObjectStore::provision(
     &js_context,
     async_nats::jetstream::object_store::Config {
-        bucket: "trogon-claims".to_string(),
+        bucket: CLAIM_CHECK_BUCKET.to_string(),
         ..Default::default()
     },
 )
 .await?;
 let inner = NatsJetStreamClient::new(js_context);
-let js = ClaimCheckPublisher::new(inner, object_store, "trogon-claims".to_string(), max_payload);
+let js = ClaimCheckPublisher::new(inner, object_store, CLAIM_CHECK_BUCKET.to_string(), max_payload);
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-source-gitlab/src/main.rs` around lines 26 - 37, The bucket literal "trogon-claims" is duplicated when provisioning the object store and constructing the ClaimCheckPublisher; extract it into a single constant (e.g., BUCKET_NAME) and replace both uses in the NatsObjectStore::provision call and the ClaimCheckPublisher::new invocation so the bucket value is defined once and reused by js_context/object_store and ClaimCheckPublisher (keep the constant near the top of the file or next to main for clarity).

rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs (1)
41-58: `resolve_claim` ignores the `HEADER_CLAIM_BUCKET` header.

The function reads the key from `HEADER_CLAIM_KEY` but doesn't use `HEADER_CLAIM_BUCKET`. The bucket is implicitly determined by the `store` parameter. This works if there's only one bucket, but could cause issues if multiple buckets are used in the future. Consider validating that the bucket header matches the expected bucket, or document this assumption.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs` around lines 41 - 58, The resolve_claim function reads HEADER_CLAIM_KEY but ignores HEADER_CLAIM_BUCKET; update resolve_claim(headers, payload, store) to read HEADER_CLAIM_BUCKET and either validate it matches the expected bucket for the provided store (or call a store API that accepts bucket+key) or explicitly document the single-bucket assumption. Specifically, in resolve_claim look up HEADER_CLAIM_BUCKET alongside HEADER_CLAIM_KEY, return ClaimResolveError::MissingBucket if absent, then compare the header value to the store's bucket identifier (or pass both bucket and key into store.get if ObjectStoreGet supports it) and error with a new ClaimResolveError variant if they mismatch, ensuring HEADER_CLAIM_BUCKET and HEADER_CLAIM_KEY are both considered before calling store.get.

rsworkspace/crates/trogon-source-slack/src/main.rs (1)
26-37: Bucket name duplicated as string literal.

The bucket name `"trogon-claims"` appears twice: once in the object store config (line 31) and again in `ClaimCheckPublisher::new` (line 37). Consider extracting this to a constant to avoid drift.

Additionally, the object store config uses `..Default::default()` with no retention policy. Without `max_age` or `max_bytes`, stored claims will accumulate indefinitely. Consider whether a TTL-based cleanup policy is needed.

♻️ Suggested refactor

```diff
+const CLAIM_BUCKET: &str = "trogon-claims";
+
 let nats = connect(&config.nats, DEFAULT_NATS_CONNECT_TIMEOUT).await?;
 let max_payload = MaxPayload::from_server_limit(nats.server_info().max_payload);
 let js_context = async_nats::jetstream::new(nats);
 let object_store = NatsObjectStore::provision(
     &js_context,
     async_nats::jetstream::object_store::Config {
-        bucket: "trogon-claims".to_string(),
+        bucket: CLAIM_BUCKET.to_string(),
         ..Default::default()
     },
 )
 .await?;
 let inner = NatsJetStreamClient::new(js_context);
-let js = ClaimCheckPublisher::new(inner, object_store, "trogon-claims".to_string(), max_payload);
+let js = ClaimCheckPublisher::new(inner, object_store, CLAIM_BUCKET.to_string(), max_payload);
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-source-slack/src/main.rs` around lines 26 - 37, The bucket name "trogon-claims" is duplicated; extract it into a single constant (e.g., const BUCKET: &str = "trogon-claims") and use that constant in NatsObjectStore::provision(...) and ClaimCheckPublisher::new(...) to avoid drift, and update the async_nats::jetstream::object_store::Config passed to NatsObjectStore::provision to include an appropriate retention policy (set max_age and/or max_bytes) instead of relying on ..Default::default() so stored claims do not grow indefinitely.

rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs (1)
81-94: Consider memory implications for very large payloads.

The `get()` implementation reads the entire object into memory via `read_to_end`. For very large payloads (potentially several MB if the NATS max_payload is set high), this could cause memory pressure. This is likely acceptable given the expected payload sizes, but worth noting for future consideration if payload sizes grow significantly.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs` around lines 81 - 94, The get() method currently reads the entire object into memory via object.read_to_end which can cause memory pressure for very large payloads; change trogon-nats::jetstream::object_store::get to avoid unbounded growth by either (a) pre-allocating the Vec using the object's known size (use the metadata from the value returned by self.inner.get(name) if available) and then read_to_end into that reserved buffer, or (b) switch to a streaming approach: convert the returned object reader into a stream (e.g., tokio_util::io::ReaderStream or tokio::io::copy to a temporary file) and accumulate/emit in bounded chunks (or spill to disk) before constructing the final Bytes; update handling around self.inner.get and object.read_to_end accordingly to implement the chosen approach.
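The pre-allocation option (a) can be sketched with `std::io` alone. This is a hypothetical helper, not the crate's actual `get()`; it just shows reserving capacity from a known size hint so `read_to_end` doesn't grow the buffer through repeated reallocations:

```rust
use std::io::{Cursor, Read};

// Reserve capacity up front from a size hint (e.g., object metadata) so that
// read_to_end drains a large object without repeated buffer reallocations.
fn read_with_hint<R: Read>(mut reader: R, size_hint: usize) -> std::io::Result<Vec<u8>> {
    let mut buf = Vec::with_capacity(size_hint);
    reader.read_to_end(&mut buf)?;
    Ok(buf)
}
```

If the hint is wrong the code still works; `read_to_end` grows the buffer as needed, so the hint is purely an optimization.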
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs`:
- Around line 182-199: The code stores the payload with self.store.put(&key,
payload) before publishing the claim with self.inner.publish_with_headers(...),
which can leave orphaned objects if publish fails; modify the publish failure
path in the function containing claim_object_key, self.store.put, and
self.inner.publish_with_headers so that on PublishFailed you attempt a
compensating delete of the stored object (e.g., call
self.store.delete(&key).await and handle/log any delete error) before returning
the ClaimCheckPublishError::PublishFailed, or alternatively add a code comment
documenting that orphan cleanup relies on store TTL if you intentionally accept
orphans.
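The compensating-delete path described above can be sketched synchronously with an in-memory store. All names here are hypothetical; the real code is async and goes through the crate's object-store trait:

```rust
use std::collections::HashMap;

// Stand-in for an object store bucket (hypothetical, for illustration only).
struct MemStore {
    objects: HashMap<String, Vec<u8>>,
}

// Store-then-publish with a compensating delete: if the claim message never
// reaches the stream, remove the just-stored object so it cannot be orphaned.
fn publish_oversized(
    store: &mut MemStore,
    key: &str,
    payload: Vec<u8>,
    publish: impl FnOnce() -> Result<(), String>,
) -> Result<(), String> {
    store.objects.insert(key.to_string(), payload);
    if let Err(err) = publish() {
        // Best-effort cleanup; a real implementation would log a delete failure
        // rather than mask the original publish error.
        store.objects.remove(key);
        return Err(err);
    }
    Ok(())
}
```

The alternative the review mentions, relying on a store TTL, trades this extra round trip for temporarily orphaned objects that age out on their own.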
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b913803c-5b5d-4df5-b39a-e1937ccf9a49
⛔ Files ignored due to path filters (1)
`rsworkspace/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (26)
- devops/docker/compose/.env.example
- devops/docker/compose/compose.yml
- rsworkspace/crates/trogon-nats/Cargo.toml
- rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mocks.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mod.rs
- rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs
- rsworkspace/crates/trogon-source-github/Cargo.toml
- rsworkspace/crates/trogon-source-github/src/config.rs
- rsworkspace/crates/trogon-source-github/src/constants.rs
- rsworkspace/crates/trogon-source-github/src/main.rs
- rsworkspace/crates/trogon-source-github/src/server.rs
- rsworkspace/crates/trogon-source-gitlab/Cargo.toml
- rsworkspace/crates/trogon-source-gitlab/src/config.rs
- rsworkspace/crates/trogon-source-gitlab/src/constants.rs
- rsworkspace/crates/trogon-source-gitlab/src/main.rs
- rsworkspace/crates/trogon-source-gitlab/src/server.rs
- rsworkspace/crates/trogon-source-linear/Cargo.toml
- rsworkspace/crates/trogon-source-linear/src/constants.rs
- rsworkspace/crates/trogon-source-linear/src/main.rs
- rsworkspace/crates/trogon-source-linear/src/server.rs
- rsworkspace/crates/trogon-source-slack/Cargo.toml
- rsworkspace/crates/trogon-source-slack/src/config.rs
- rsworkspace/crates/trogon-source-slack/src/constants.rs
- rsworkspace/crates/trogon-source-slack/src/main.rs
- rsworkspace/crates/trogon-source-slack/src/server.rs
💤 Files with no reviewable changes (2)
- devops/docker/compose/.env.example
- devops/docker/compose/compose.yml
Force-pushed: 7847650 → 687a5d0
Actionable comments posted: 1
🧹 Nitpick comments (4)
rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs (2)
33-42: Consider documenting idempotent bucket creation behavior.

`create_object_store` creates the bucket if it doesn't exist or returns the existing one. This is correct for startup, but worth noting that if multiple services start concurrently with different configs for the same bucket name, the first writer wins.

Since all sources use identical config (`Default::default()` with bucket name only), this is not a current concern, but a brief doc comment would aid future maintenance.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs` around lines 33 - 42, Add a brief doc comment to NatsObjectStore::provision explaining that async_nats::jetstream::Context::create_object_store is idempotent (it will create the bucket if missing or return the existing one) and that when multiple services concurrently attempt to create the same bucket with differing async_nats::jetstream::object_store::Config values the first writer wins; mention that current callers use Default::default() so this is not an immediate problem but callers should ensure consistent config to avoid surprises.
35-41: Object store bucket has no TTL configured.

The callers pass `Default::default()` for the object store config (per context snippet from `main.rs`). This means no `max_age` is set, so orphaned claim objects (from failed publishes) will accumulate indefinitely.

Consider either:

- Setting a reasonable `max_age` (e.g., 7 days) in the config
- Documenting that operators should configure retention at the NATS server level

This relates to the orphan cleanup concern flagged in `claim_check.rs`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs` around lines 35 - 41, The object store created by provision(...) currently uses whatever Config is passed (callers use Default::default()), leaving max_age unset so orphaned claim objects accumulate; update provision (or the calling site) to ensure a reasonable retention by setting async_nats::jetstream::object_store::Config.max_age (e.g., 7 days) before calling js.create_object_store, or alternatively add clear documentation in the function comment and README stating operators must configure retention on the NATS server; reference the provision function and js.create_object_store call when making the change.

rsworkspace/crates/trogon-source-slack/src/main.rs (1)
26-37: Claim-check wiring looks correct.

The setup properly derives `MaxPayload` from server info, provisions the object store, and wraps the inner client. This aligns with the pattern in other source crates.

Consider extracting the bucket name to a constant: the bucket name `"trogon-claims"` appears twice (lines 31 and 37) and is duplicated across all four source crates. A shared constant in `trogon-nats` would reduce duplication and prevent drift.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-source-slack/src/main.rs` around lines 26 - 37, Bucket name "trogon-claims" is duplicated in the ClaimCheck setup (used in NatsObjectStore::provision and ClaimCheckPublisher::new alongside MaxPayload and NatsJetStreamClient wiring); define a shared constant (e.g., TROGON_CLAIMS_BUCKET) in the trogon-nats crate and replace the literal occurrences in this file (references: MaxPayload::from_server_limit, NatsObjectStore::provision, ClaimCheckPublisher::new) with that constant so all source crates can import the same bucket name and avoid drift.

rsworkspace/crates/trogon-std/src/http.rs (1)
24-34: Add a test for the zero-input invariant.

The panic path for zero (line 11) is part of the type contract but currently untested.

✅ Proposed test

```diff
 #[cfg(test)]
 mod tests {
     use super::*;
@@
     fn from_mib_one() {
         let size = HttpBodySizeMax::from_mib(1);
         assert_eq!(size.as_usize(), 1024 * 1024);
     }
+
+    #[test]
+    #[should_panic(expected = "HTTP body size must be greater than zero")]
+    fn from_mib_zero_panics() {
+        let _ = HttpBodySizeMax::from_mib(0);
+    }
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-std/src/http.rs` around lines 24 - 34, Add a unit test that verifies the zero-input invariant for HttpBodySizeMax::from_mib by asserting it panics when called with 0; create a new test function (e.g., fn from_mib_zero_panics) annotated with #[test] and #[should_panic] that calls HttpBodySizeMax::from_mib(0) (optionally touching as_usize()) so the contract around from_mib and as_usize is exercised and the panic path is covered.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rsworkspace/crates/trogon-std/src/http.rs`:
- Around line 7-13: The from_mib constructor multiplies mib by 1024*1024 without
checking for overflow; change it to a const-safe checked multiplication (e.g.,
use mib.checked_mul(1024).and_then(|v| v.checked_mul(1024))) before creating the
NonZeroUsize, and if the checked_mul chain returns None or NonZeroUsize::new
returns None, panic with a clear message (e.g., "HTTP body size overflow or must
be greater than zero"). Keep the function as pub const fn from_mib(mib: usize)
-> Self and reference NonZeroUsize::new and from_mib when applying this fix.
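The checked, const-safe construction described in the prompt above could look like the following sketch. The struct shape and the overflow message are assumptions based on the review, not the crate's actual code; `const fn` cannot use `Option` combinators like `and_then`, so the checks are written as `match` arms:

```rust
use std::num::NonZeroUsize;

// Hypothetical reconstruction of the trogon-std type discussed in the review.
pub struct HttpBodySizeMax(NonZeroUsize);

impl HttpBodySizeMax {
    // checked_mul guards against usize overflow before the non-zero check;
    // both failure paths panic at compile time when used in const context.
    pub const fn from_mib(mib: usize) -> Self {
        let bytes = match mib.checked_mul(1024 * 1024) {
            Some(b) => b,
            None => panic!("HTTP body size overflows usize"),
        };
        match NonZeroUsize::new(bytes) {
            Some(n) => Self(n),
            None => panic!("HTTP body size must be greater than zero"),
        }
    }

    pub const fn as_usize(&self) -> usize {
        self.0.get()
    }
}
```

Because `from_mib` stays `const`, a bad limit used in a `const` item fails the build instead of panicking at startup.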
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ef7a1ee9-0adc-44ad-8ee0-bb288b94b005
⛔ Files ignored due to path filters (1)
`rsworkspace/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (27)
- devops/docker/compose/.env.example
- devops/docker/compose/compose.yml
- rsworkspace/crates/trogon-nats/Cargo.toml
- rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mocks.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mod.rs
- rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs
- rsworkspace/crates/trogon-source-github/Cargo.toml
- rsworkspace/crates/trogon-source-github/src/config.rs
- rsworkspace/crates/trogon-source-github/src/constants.rs
- rsworkspace/crates/trogon-source-github/src/main.rs
- rsworkspace/crates/trogon-source-github/src/server.rs
- rsworkspace/crates/trogon-source-gitlab/Cargo.toml
- rsworkspace/crates/trogon-source-gitlab/src/config.rs
- rsworkspace/crates/trogon-source-gitlab/src/constants.rs
- rsworkspace/crates/trogon-source-gitlab/src/main.rs
- rsworkspace/crates/trogon-source-gitlab/src/server.rs
- rsworkspace/crates/trogon-source-linear/src/constants.rs
- rsworkspace/crates/trogon-source-linear/src/main.rs
- rsworkspace/crates/trogon-source-linear/src/server.rs
- rsworkspace/crates/trogon-source-slack/Cargo.toml
- rsworkspace/crates/trogon-source-slack/src/config.rs
- rsworkspace/crates/trogon-source-slack/src/constants.rs
- rsworkspace/crates/trogon-source-slack/src/main.rs
- rsworkspace/crates/trogon-source-slack/src/server.rs
- rsworkspace/crates/trogon-std/src/http.rs
- rsworkspace/crates/trogon-std/src/lib.rs
💤 Files with no reviewable changes (5)
- devops/docker/compose/.env.example
- rsworkspace/crates/trogon-source-gitlab/Cargo.toml
- rsworkspace/crates/trogon-source-github/Cargo.toml
- rsworkspace/crates/trogon-source-slack/Cargo.toml
- devops/docker/compose/compose.yml
✅ Files skipped from review due to trivial changes (2)
- rsworkspace/crates/trogon-std/src/lib.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mod.rs
🚧 Files skipped from review as they are similar to previous changes (14)
- rsworkspace/crates/trogon-nats/Cargo.toml
- rsworkspace/crates/trogon-source-linear/src/constants.rs
- rsworkspace/crates/trogon-source-linear/src/server.rs
- rsworkspace/crates/trogon-source-github/src/server.rs
- rsworkspace/crates/trogon-source-slack/src/config.rs
- rsworkspace/crates/trogon-source-gitlab/src/constants.rs
- rsworkspace/crates/trogon-source-gitlab/src/server.rs
- rsworkspace/crates/trogon-source-github/src/constants.rs
- rsworkspace/crates/trogon-source-linear/src/main.rs
- rsworkspace/crates/trogon-source-gitlab/src/main.rs
- rsworkspace/crates/trogon-source-github/src/config.rs
- rsworkspace/crates/trogon-source-slack/src/constants.rs
- rsworkspace/crates/trogon-source-slack/src/server.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mocks.rs
Force-pushed: f492ad1 → 5d1576d
Code Coverage Summary

Diff against main. Results for commit: 8a82dba. Minimum allowed coverage is

♻️ This comment has been updated with latest results
Force-pushed: 5d1576d → 224895d
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
rsworkspace/crates/trogon-source-slack/src/config.rs (1)
20-20: ⚠️ Potential issue | 🟡 Minor — Remove stale documentation for `SLACK_MAX_BODY_SIZE`.

The docstring still references `SLACK_MAX_BODY_SIZE` but this environment variable is no longer read by `from_env`. This could mislead users into setting an env var that has no effect.

📝 Proposed fix

```diff
-/// - `SLACK_MAX_BODY_SIZE`: maximum webhook body size in bytes (default: 1048576 / 1 MB)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-source-slack/src/config.rs` at line 20, Remove the stale mention of SLACK_MAX_BODY_SIZE from the configuration docstring so docs match actual behavior: update the doc comment in config.rs (the comment that currently reads "`SLACK_MAX_BODY_SIZE`: maximum webhook body size...") to omit that environment variable or replace it with the currently-supported env var names, and ensure the from_env function and any config struct (referenced as from_env) are consistent with the docstring so users aren't misled into setting SLACK_MAX_BODY_SIZE.

rsworkspace/crates/trogon-source-slack/src/server.rs (1)
76-92: ⚠️ Potential issue | 🟠 Major — Don't swallow failures from `publish_unroutable`.

This helper only logs the `PublishOutcome`, so every unroutable branch still returns its normal `200`/`400` even when the object-store write or JetStream publish fails. That silently drops the payload; the `unhandled_payload_type` path is worst because it still replies `200`, which suppresses Slack retries.

🔧 Suggested shape

```diff
 async fn publish_unroutable<P: JetStreamPublisher, S: ObjectStorePut>(
     publisher: &ClaimCheckPublisher<P, S>,
     subject_prefix: &NatsToken,
     reason: &str,
     body: Bytes,
     ack_timeout: Duration,
-) {
+) -> PublishOutcome<P::PublishError> {
     let subject = format!("{}.unroutable", subject_prefix);
     let mut headers = async_nats::HeaderMap::new();
     headers.insert(NATS_HEADER_REJECT_REASON, reason);
     headers.insert(NATS_HEADER_PAYLOAD_KIND, "unroutable");
     let outcome = publisher
         .publish_event(subject, headers, body, ack_timeout)
         .await;
-    outcome.log_on_error("slack.unroutable");
+    if !outcome.is_ok() {
+        outcome.log_on_error("slack.unroutable");
+    }
+    outcome
 }
```

Then map unsuccessful outcomes to `500` at each call site instead of always returning the original `200`/`400`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-source-slack/src/server.rs` around lines 76 - 92, The publish_unroutable helper currently swallows publish/write failures by logging the PublishOutcome and returning nothing; change publish_unroutable (involving ClaimCheckPublisher and its publish_event call) to return a Result/PublishOutcome (propagate the outcome or error) instead of discarding it so callers can map failures to HTTP 500; update publish_unroutable to await publisher.publish_event(...) and return Err when outcome indicates failure (or propagate the publish_event error), and adjust each call site (e.g., unhandled_payload_type paths) to inspect the returned Result/PublishOutcome and convert unsuccessful outcomes into a 500 response rather than always returning 200/400.
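The call-site mapping the comment asks for can be sketched like this. The outcome type here is a simplified hypothetical stand-in; the crate's real `PublishOutcome` is generic over the publisher's error type:

```rust
// Simplified stand-in for the crate's PublishOutcome (illustration only).
enum Outcome {
    Ok,
    StoreFailed,
    PublishFailed,
}

// Map a publish outcome to an HTTP status: failures surface as 500 so the
// upstream sender (e.g., Slack) retries instead of treating the drop as success.
fn status_for(outcome: &Outcome, success_status: u16) -> u16 {
    match outcome {
        Outcome::Ok => success_status,
        Outcome::StoreFailed | Outcome::PublishFailed => 500,
    }
}
```

Keeping the success status as a parameter lets the `400` branches (bad signature, stale timestamp) preserve their original code while still escalating publish failures.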
🧹 Nitpick comments (3)
rsworkspace/crates/trogon-nats/src/jetstream/publish.rs (1)
15-15: `StoreFailed` variant lacks error context for debugging.

Unlike `PublishFailed(E)` and `AckFailed(E)`, the `StoreFailed` variant carries no error information. This makes it harder to diagnose object-store failures in production. Consider adding an error payload or at minimum a string message:

♻️ Proposed enhancement

```diff
-    StoreFailed,
+    StoreFailed(String), // or a dedicated error type
```

And update the log arm:

```diff
-    PublishOutcome::StoreFailed => {
-        error!(source = source_name, "Failed to store claim check payload");
+    PublishOutcome::StoreFailed(reason) => {
+        error!(error = %reason, source = source_name, "Failed to store claim check payload");
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-nats/src/jetstream/publish.rs` at line 15, The StoreFailed enum variant currently has no error payload; change it to carry error context (e.g., StoreFailed(E) or StoreFailed(String)) in the same enum where PublishFailed(E) and AckFailed(E) are defined, propagate the store error into places that construct StoreFailed, and update the match arm(s) that log StoreFailed in publish.rs to include the contained error/message (same style as PublishFailed and AckFailed) so logs contain the underlying storage error for debugging.

rsworkspace/crates/trogon-source-linear/src/main.rs (1)

31-40: Bucket name `"trogon-claims"` is duplicated.

The bucket name appears twice in this file (lines 31 and 40) and is also hardcoded in the other three source crate `main.rs` files. Consider extracting to a shared constant in `trogon-nats` to avoid inconsistency if the name changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-source-linear/src/main.rs` around lines 31 - 40, The literal bucket name "trogon-claims" is duplicated; define a single public constant (e.g., pub const BUCKET_NAME: &str = "trogon-claims";) in the trogon-nats crate and import it where needed, then replace the hard-coded occurrences (the string passed into object_store construction and the call to ClaimCheckPublisher::new where "trogon-claims" is used) with that constant (use BUCKET_NAME with NatsJetStreamClient, ClaimCheckPublisher::new and any other main.rs files) so all crates share one source of truth.

rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs (1)
19-30: Promote claim-check inputs to dedicated value objects.
`MaxPayload` is a value object defined inline in `claim_check.rs`, and `bucket_name` stays as a raw `String` in the public constructor. That leaves invalid bucket names representable and skips the repo's value-object boundary on a new public API. I'd extract `MaxPayload` into `max_payload.rs` and introduce a typed bucket name here.

As per coding guidelines "Prefer domain-specific value objects over primitives (e.g., `AcpPrefix` not `String`), with each type's factory guaranteeing correctness at construction—invalid instances should be unrepresentable" and "Every value object lives in its own file named after the type (e.g., `acp_prefix.rs`, `ext_method_name.rs`, `session_id.rs`), never inlined into config, aggregate, or service files".

Also applies to: 89-105
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs` around lines 19 - 30, Extract MaxPayload into its own file named max_payload.rs as a value object: move the struct and impl (including from_server_limit and threshold) to that file, make the type public (pub struct MaxPayload) and re-export or import it in claim_check.rs; replace the raw bucket_name String in the public constructor(s) in claim_check.rs with a new typed value object BucketName (file bucket_name.rs) that validates/normalizes input in its factory (e.g., pub fn new(name: &str) -> Result<BucketName, Error>) so invalid names are unrepresentable; update all constructors and functions that previously took bucket_name: String to accept BucketName, adjust use/imports, and update any call sites/tests to construct BucketName via its factory; ensure both max_payload.rs and bucket_name.rs follow the repo naming and visibility conventions and preserve existing behavior by delegating from_server_limit to the moved MaxPayload.
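A minimal sketch of such a typed bucket name, with the factory guaranteeing correctness at construction. The validation rules here (non-empty, ASCII alphanumeric plus `-`/`_`) are an assumption for illustration, not taken from the repo:

```rust
use std::fmt;

/// Hypothetical validation error; the real repo may use a richer type.
#[derive(Debug, PartialEq)]
pub struct InvalidBucketName(String);

/// Value object: a validated object-store bucket name.
#[derive(Debug, Clone, PartialEq)]
pub struct BucketName(String);

impl BucketName {
    /// Factory guarantees correctness at construction: invalid
    /// instances are unrepresentable past this point.
    pub fn new(name: &str) -> Result<Self, InvalidBucketName> {
        let valid = !name.is_empty()
            && name
                .chars()
                .all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_');
        if valid {
            Ok(Self(name.to_string()))
        } else {
            Err(InvalidBucketName(name.to_string()))
        }
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

impl fmt::Display for BucketName {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(&self.0)
    }
}

fn main() {
    assert!(BucketName::new("trogon-claims").is_ok());
    assert!(BucketName::new("").is_err());
    assert!(BucketName::new("bad name").is_err());
}
```

Constructors that previously took `bucket_name: String` would then accept `BucketName`, making the invalid-name case impossible at the call site.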
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs`:
- Around line 121-129: Before the small-payload fast path returns via
super::publish::publish_event, remove or overwrite any headers in the headers
map whose keys start with "Trogon-Claim-" so this type owns the claim-check
namespace; specifically, in the function containing the if payload.len() <=
self.max_payload.threshold() check, mutate the headers argument (or clone and
scrub) to delete keys matching the "Trogon-Claim-*" prefix (case-sensitive or
per existing header conventions) so resolve_claim() won’t misinterpret inline
messages as externalized claims, then continue to call publish_event with the
scrubbed headers.
In `@rsworkspace/crates/trogon-source-linear/src/main.rs`:
- Around line 27-35: The startup can fail because NatsObjectStore::provision
calls js.create_object_store which errors if the bucket already exists; change
provision to handle that case by either calling the async-nats public
get_or_create_object_store API (if available) or catching
CreateObjectStoreError::BucketAlreadyExists (or equivalent) from
js.create_object_store and then calling js.get_object_store (or returning the
existing NatsObjectStore) instead of propagating the error; update
NatsObjectStore::provision to detect the "already exists" outcome and return the
existing object store so concurrent/restart startups succeed.
In `@rsworkspace/crates/trogon-source-slack/src/server.rs`:
- Around line 143-147: The router currently uses a hardcoded
DefaultBodyLimit::max(HTTP_BODY_SIZE_MAX) which conflicts with
runtime-initialized MaxPayload and misleading docs; fix by accepting/deriving
the configured SLACK_MAX_BODY_SIZE/MaxPayload and applying it to the HTTP layer:
update SlackConfig to parse SLACK_MAX_BODY_SIZE (or otherwise obtain MaxPayload
in serve()), change the serve(...) signature to accept a MaxPayload (or usize)
and replace the hardcoded HTTP_BODY_SIZE_MAX in the Router construction (where
DefaultBodyLimit is applied) with the provided MaxPayload.as_usize(), ensuring
handle_webhook/ClaimCheckPublisher continue to receive the same MaxPayload
value; alternatively, if you choose not to implement config threading, remove
the SLACK_MAX_BODY_SIZE documentation and ensure MaxPayload::from_server_limit()
is constrained to the router limit so behavior is consistent.
In `@rsworkspace/crates/trogon-std/src/http.rs`:
- Around line 9-14: The constructor new(size: ByteSize) casts size.as_u64() to
usize which can truncate on 32-bit platforms; change new to first validate the
u64 fits in usize (e.g. check size.as_u64() <= usize::MAX as u64) and only then
cast to usize and call NonZeroUsize::new, returning None if it doesn't fit;
alternatively, remove const and use try_into().ok().and_then(NonZeroUsize::new)
to perform a safe fallible conversion—update the implementation referenced by
new, ByteSize::as_u64, and NonZeroUsize::new accordingly.
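The fallible conversion described in that prompt can be sketched stdlib-only; a plain `u64` stands in for `ByteSize::as_u64()` from the `bytesize` crate, and the helper name is hypothetical:

```rust
use std::num::NonZeroUsize;

/// Fallible conversion: reject values that don't fit in `usize`
/// (instead of silently truncating on 32-bit targets) and reject zero.
/// `size_u64` stands in for `ByteSize::as_u64()`.
fn body_limit_from_u64(size_u64: u64) -> Option<NonZeroUsize> {
    usize::try_from(size_u64).ok().and_then(NonZeroUsize::new)
}

fn main() {
    // Zero is rejected by NonZeroUsize.
    assert_eq!(body_limit_from_u64(0), None);
    // A normal limit round-trips.
    assert_eq!(body_limit_from_u64(1024).map(NonZeroUsize::get), Some(1024));
    // On a 32-bit target, a value above usize::MAX would yield None
    // here instead of wrapping, which is the point of try_from.
}
```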
---
Outside diff comments:
In `@rsworkspace/crates/trogon-source-slack/src/config.rs`:
- Line 20: Remove the stale mention of SLACK_MAX_BODY_SIZE from the
configuration docstring so docs match actual behavior: update the doc comment in
config.rs (the comment that currently reads "`SLACK_MAX_BODY_SIZE`: maximum
webhook body size...") to omit that environment variable or replace it with the
currently-supported env var names, and ensure the from_env function and any
config struct (referenced as from_env) are consistent with the docstring so
users aren't misled into setting SLACK_MAX_BODY_SIZE.
In `@rsworkspace/crates/trogon-source-slack/src/server.rs`:
- Around line 76-92: The publish_unroutable helper currently swallows
publish/write failures by logging the PublishOutcome and returning nothing;
change publish_unroutable (involving ClaimCheckPublisher and its publish_event
call) to return a Result/PublishOutcome (propagate the outcome or error) instead
of discarding it so callers can map failures to HTTP 500; update
publish_unroutable to await publisher.publish_event(...) and return Err when
outcome indicates failure (or propagate the publish_event error), and adjust
each call site (e.g., unhandled_payload_type paths) to inspect the returned
Result/PublishOutcome and convert unsuccessful outcomes into a 500 response
rather than always returning 200/400.
---
Nitpick comments:
In `@rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs`:
- Around line 19-30: Extract MaxPayload into its own file named max_payload.rs
as a value object: move the struct and impl (including from_server_limit and
threshold) to that file, make the type public (pub struct MaxPayload) and
re-export or import it in claim_check.rs; replace the raw bucket_name String in
the public constructor(s) in claim_check.rs with a new typed value object
BucketName (file bucket_name.rs) that validates/normalizes input in its factory
(e.g., pub fn new(name: &str) -> Result<BucketName, Error>) so invalid names are
unrepresentable; update all constructors and functions that previously took
bucket_name: String to accept BucketName, adjust use/imports, and update any
call sites/tests to construct BucketName via its factory; ensure both
max_payload.rs and bucket_name.rs follow the repo naming and visibility
conventions and preserve existing behavior by delegating from_server_limit to
the moved MaxPayload.
In `@rsworkspace/crates/trogon-nats/src/jetstream/publish.rs`:
- Line 15: The StoreFailed enum variant currently has no error payload; change
it to carry error context (e.g., StoreFailed(E) or StoreFailed(String)) in the
same enum where PublishFailed(E) and AckFailed(E) are defined, propagate the
store error into places that construct StoreFailed, and update the match arm(s)
that log StoreFailed in publish.rs to include the contained error/message (same
style as PublishFailed and AckFailed) so logs contain the underlying storage
error for debugging.
In `@rsworkspace/crates/trogon-source-linear/src/main.rs`:
- Around line 31-40: The literal bucket name "trogon-claims" is duplicated;
define a single public constant (e.g., pub const BUCKET_NAME: &str =
"trogon-claims";) in the trogon-nats crate and import it where needed, then
replace the hard-coded occurrences (the string passed into object_store
construction and the call to ClaimCheckPublisher::new where "trogon-claims" is
used) with that constant (use BUCKET_NAME with NatsJetStreamClient,
ClaimCheckPublisher::new and any other main.rs files) so all crates share one
source of truth.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 615c0faf-0ba2-40a9-b685-3b66f1236c9d
⛔ Files ignored due to path filters (1)
`rsworkspace/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (30)
- devops/docker/compose/.env.example
- devops/docker/compose/compose.yml
- rsworkspace/crates/AGENTS.md
- rsworkspace/crates/trogon-nats/Cargo.toml
- rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mocks.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mod.rs
- rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs
- rsworkspace/crates/trogon-nats/src/jetstream/publish.rs
- rsworkspace/crates/trogon-source-github/Cargo.toml
- rsworkspace/crates/trogon-source-github/src/config.rs
- rsworkspace/crates/trogon-source-github/src/constants.rs
- rsworkspace/crates/trogon-source-github/src/main.rs
- rsworkspace/crates/trogon-source-github/src/server.rs
- rsworkspace/crates/trogon-source-gitlab/Cargo.toml
- rsworkspace/crates/trogon-source-gitlab/src/config.rs
- rsworkspace/crates/trogon-source-gitlab/src/constants.rs
- rsworkspace/crates/trogon-source-gitlab/src/main.rs
- rsworkspace/crates/trogon-source-gitlab/src/server.rs
- rsworkspace/crates/trogon-source-linear/src/constants.rs
- rsworkspace/crates/trogon-source-linear/src/main.rs
- rsworkspace/crates/trogon-source-linear/src/server.rs
- rsworkspace/crates/trogon-source-slack/Cargo.toml
- rsworkspace/crates/trogon-source-slack/src/config.rs
- rsworkspace/crates/trogon-source-slack/src/constants.rs
- rsworkspace/crates/trogon-source-slack/src/main.rs
- rsworkspace/crates/trogon-source-slack/src/server.rs
- rsworkspace/crates/trogon-std/Cargo.toml
- rsworkspace/crates/trogon-std/src/http.rs
- rsworkspace/crates/trogon-std/src/lib.rs
💤 Files with no reviewable changes (5)
- devops/docker/compose/.env.example
- rsworkspace/crates/trogon-source-github/Cargo.toml
- rsworkspace/crates/trogon-source-slack/Cargo.toml
- devops/docker/compose/compose.yml
- rsworkspace/crates/trogon-source-gitlab/Cargo.toml
✅ Files skipped from review due to trivial changes (4)
- rsworkspace/crates/trogon-std/Cargo.toml
- rsworkspace/crates/trogon-nats/Cargo.toml
- rsworkspace/crates/AGENTS.md
- rsworkspace/crates/trogon-nats/src/jetstream/mod.rs
🚧 Files skipped from review as they are similar to previous changes (10)
- rsworkspace/crates/trogon-source-linear/src/constants.rs
- rsworkspace/crates/trogon-source-gitlab/src/constants.rs
- rsworkspace/crates/trogon-source-gitlab/src/config.rs
- rsworkspace/crates/trogon-source-github/src/constants.rs
- rsworkspace/crates/trogon-source-slack/src/constants.rs
- rsworkspace/crates/trogon-source-github/src/main.rs
- rsworkspace/crates/trogon-std/src/lib.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mocks.rs
- rsworkspace/crates/trogon-source-github/src/config.rs
- rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs
9d1cc90 to
af403c7
Compare
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
rsworkspace/crates/trogon-nats/src/jetstream/publish.rs (1)
10-16: ⚠️ Potential issue | 🟠 Major

Preserve the object-store error in `StoreFailed`.

This variant throws away the actual storage failure, so `log_on_error()` can only emit a generic message and callers lose the reason for the 500. Make the variant carry the typed store error (or wrap publish/store failures in a dedicated typed enum) instead of collapsing it to a marker.

As per coding guidelines, "Errors must be typed—use structs or enums, never String or format!(). Never discard error context by converting a typed error into a string; wrap the source error as a field or variant instead."
Also applies to: 23-37
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-nats/src/jetstream/publish.rs` around lines 10 - 16, The PublishOutcome enum currently discards the concrete store error via the StoreFailed marker; change StoreFailed to carry the typed store error (e.g., StoreFailed(S) or StoreFailed(Box<dyn std::error::Error + Send + Sync>) or refactor into a PublishError enum that wraps both publish and store errors) so callers and log_on_error() can access the original store failure; update the PublishOutcome definition (refer to PublishOutcome and the StoreFailed variant), adjust places that construct StoreFailed to pass the actual store error, and update match arms (including log_on_error and any callers handling PublishOutcome) to pattern-match and propagate/log the carried typed error instead of losing context.

rsworkspace/crates/trogon-source-gitlab/src/config.rs (1)
24-58: ⚠️ Potential issue | 🟠 Major

Don't let `GITLAB_MAX_BODY_SIZE` become a silent no-op.

`GitlabConfig::from_env` stopped reading this env var, but `rsworkspace/crates/trogon-source-gitlab/src/server.rs` now applies a fixed `HTTP_BODY_SIZE_MAX` and `rsworkspace/crates/trogon-source-gitlab/src/lib.rs` still documents `GITLAB_MAX_BODY_SIZE` as supported. Any deployment that keeps the old variable set will silently run with the hardcoded limit instead of the operator-selected one. Please either reject/warn on the deprecated variable at startup or update the docs/templates in the same change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-source-gitlab/src/config.rs` around lines 24 - 58, GitlabConfig::from_env no longer reads GITLAB_MAX_BODY_SIZE so deployments with that env var silently get the hardcoded HTTP_BODY_SIZE_MAX; update GitlabConfig::from_env to read and parse "GITLAB_MAX_BODY_SIZE" (like other vars), expose it as a config field (e.g., body_max_size or http_body_size_max) and have server code use that field instead of the fixed HTTP_BODY_SIZE_MAX constant, or alternatively detect env.var("GITLAB_MAX_BODY_SIZE").ok() and emit a clear warning/error at startup; modify the constructor logic in GitlabConfig::from_env and the server code that references HTTP_BODY_SIZE_MAX to use the new config field (symbols: GitlabConfig::from_env, HTTP_BODY_SIZE_MAX, and the server handler that sets body size).
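The warn-on-deprecated option could be sketched like this; the helper name and warning text are illustrative, and the env lookup is passed in so the check stays testable:

```rust
/// Warn if a deprecated env variable is still set, so operators learn
/// the setting is now ignored instead of silently getting a fixed limit.
/// Returns whether the variable was present.
fn warn_on_deprecated(name: &str, value: Option<String>) -> bool {
    if let Some(v) = value {
        eprintln!("warning: {name}={v} is no longer read; the body limit is now fixed");
        true
    } else {
        false
    }
}

fn main() {
    // In real startup code:
    // warn_on_deprecated("GITLAB_MAX_BODY_SIZE",
    //     std::env::var("GITLAB_MAX_BODY_SIZE").ok());
    assert!(warn_on_deprecated("GITLAB_MAX_BODY_SIZE", Some("1048576".into())));
    assert!(!warn_on_deprecated("GITLAB_MAX_BODY_SIZE", None));
}
```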
♻️ Duplicate comments (2)
rsworkspace/crates/trogon-source-slack/src/server.rs (1)
122-147: ⚠️ Potential issue | 🟠 Major

Keep the HTTP body cap aligned with the runtime claim-check threshold.

`rsworkspace/crates/trogon-source-slack/src/main.rs` derives `MaxPayload` from the live NATS server, but this router still enforces an unrelated `HTTP_BODY_SIZE_MAX`. If that constant is lower, Axum will reject the request before `ClaimCheckPublisher` can spill to object store, so the new fallback never runs. Thread the chosen limit into `router()`/`serve()` or assert the relationship at startup so the two ceilings cannot drift.

Run this to confirm the current wiring still has two independent thresholds and whether the old size setting is still documented anywhere:
```bash
#!/bin/bash
set -euo pipefail
sed -n '1,120p' rsworkspace/crates/trogon-source-slack/src/constants.rs
sed -n '1,120p' rsworkspace/crates/trogon-source-slack/src/config.rs
sed -n '24,42p' rsworkspace/crates/trogon-source-slack/src/main.rs
sed -n '122,147p' rsworkspace/crates/trogon-source-slack/src/server.rs
rg -n "SLACK_MAX_BODY_SIZE|max_body_size|HTTP_BODY_SIZE_MAX|MaxPayload::from_server_limit" rsworkspace/crates/trogon-source-slack/
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-source-slack/src/server.rs` around lines 122 - 147, The router currently uses a hardcoded HTTP_BODY_SIZE_MAX which can diverge from the runtime-derived MaxPayload used by the ClaimCheckPublisher; update the API so the chosen runtime limit is threaded into routing (or assert equality at startup) to avoid Axum rejecting requests before claim-check fallback runs: change router()/router_with_clock() to accept an explicit body_limit (or MaxPayload/usize) and replace the DefaultBodyLimit::max(HTTP_BODY_SIZE_MAX.as_usize()) usage with that parameter, or alternatively add a startup check in main (where MaxPayload::from_server_limit is derived) that compares HTTP_BODY_SIZE_MAX to the runtime MaxPayload and fails fast if they differ; reference symbols: router, router_with_clock, DefaultBodyLimit::max, HTTP_BODY_SIZE_MAX, MaxPayload::from_server_limit, and ClaimCheckPublisher to implement the fix.

rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs (1)
121-130: ⚠️ Potential issue | 🟡 Minor

Scrub reserved `Trogon-Claim-*` headers on the direct publish path.

If a caller passes `Trogon-Claim-*` headers on a small payload, this branch publishes them unchanged. Downstream consumers calling `resolve_claim()` will then treat the message as an externalized claim and attempt to fetch from the object store, even though nothing was stored.

Clear these reserved headers before publishing to ensure `ClaimCheckPublisher` owns the claim-check namespace.

🛡️ Proposed fix to scrub reserved headers
```diff
 pub async fn publish_event(
     &self,
     subject: String,
-    headers: HeaderMap,
+    mut headers: HeaderMap,
     payload: Bytes,
     ack_timeout: Duration,
 ) -> PublishOutcome<P::PublishError> {
+    // Scrub reserved claim-check headers to prevent namespace collision
+    headers.remove(HEADER_CLAIM_CHECK);
+    headers.remove(HEADER_CLAIM_BUCKET);
+    headers.remove(HEADER_CLAIM_KEY);
+
     if payload.len() <= self.max_payload.threshold() {
         return super::publish::publish_event(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs` around lines 121 - 130, The small-payload direct-publish branch currently forwards headers unchanged, so callers can inject reserved "Trogon-Claim-*" headers and confuse consumers; before calling super::publish::publish_event in the payload.len() <= self.max_payload.threshold() branch (in ClaimCheckPublisher / claim_check.rs), remove/clear any headers whose names start with "Trogon-Claim-" (case-insensitive) from the headers map/collection so the publish path never forwards those reserved claim-check headers.
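A minimal sketch of the prefix scrub, using a plain `Vec<(String, String)>` as a stand-in for `async_nats::HeaderMap` (whose real API differs):

```rust
const CLAIM_HEADER_PREFIX: &str = "Trogon-Claim-";

/// Drop any caller-supplied headers in the reserved claim-check
/// namespace before the direct (small-payload) publish path forwards them.
fn scrub_claim_headers(headers: &mut Vec<(String, String)>) {
    headers.retain(|(name, _)| !name.starts_with(CLAIM_HEADER_PREFIX));
}

fn main() {
    let mut headers = vec![
        ("Content-Type".to_string(), "application/json".to_string()),
        ("Trogon-Claim-Key".to_string(), "abc123".to_string()),
        ("Trogon-Claim-Bucket".to_string(), "trogon-claims".to_string()),
    ];
    scrub_claim_headers(&mut headers);
    assert_eq!(headers.len(), 1);
    assert_eq!(headers[0].0, "Content-Type");
}
```

A prefix-based `retain` also covers any claim headers added later, rather than enumerating each reserved name.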
🧹 Nitpick comments (2)
rsworkspace/crates/trogon-source-slack/src/server.rs (1)
564-573: Add one low-threshold integration test for the claim-check branch.
`wrap_publisher()` hardcodes `MaxPayload::from_server_limit(usize::MAX)`, so every request in this suite stays on the direct publish path. That leaves the new "store blob + publish claim reference" behavior unexercised at the router layer.

🧪 Small refactor to let one test force the claim-check path
```diff
-    fn wrap_publisher(
-        publisher: MockJetStreamPublisher,
-    ) -> ClaimCheckPublisher<MockJetStreamPublisher, MockObjectStore> {
+    fn wrap_publisher(
+        publisher: MockJetStreamPublisher,
+        max_payload: MaxPayload,
+    ) -> ClaimCheckPublisher<MockJetStreamPublisher, MockObjectStore> {
         ClaimCheckPublisher::new(
             publisher,
             MockObjectStore::new(),
             "test-bucket".to_string(),
-            MaxPayload::from_server_limit(usize::MAX),
+            max_payload,
         )
     }
```

Use `MaxPayload::from_server_limit(usize::MAX)` for the current happy-path tests, and pass a tiny limit in one dedicated test that asserts the object-store mock is used.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-source-slack/src/server.rs` around lines 564 - 573, wrap_publisher currently hardcodes MaxPayload::from_server_limit(usize::MAX) so the claim-check path is never exercised; modify wrap_publisher (or add an overload/test helper) to accept a MaxPayload (or usize limit) parameter and use that when constructing ClaimCheckPublisher, then add one integration test that calls the helper with a tiny limit (e.g., 1) to force blob storage and assert MockObjectStore is used; keep existing tests using the default MAX limit to preserve current behavior.

rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs (1)
67-87: `ClaimResolveError` should preserve the source error for the `StoreFailed` variant.

The `std::error::Error` impl doesn't override `source()`, so the wrapped error in `StoreFailed(E)` and `ReadFailed(std::io::Error)` won't be accessible via the standard error chain. This loses error context when using libraries that traverse the error chain.

♻️ Proposed fix to implement source()
```diff
-impl<E: fmt::Display + fmt::Debug + Send + Sync + 'static> std::error::Error
-    for ClaimResolveError<E>
-{
-}
+impl<E: std::error::Error + Send + Sync + 'static> std::error::Error for ClaimResolveError<E> {
+    fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
+        match self {
+            Self::MissingKey => None,
+            Self::StoreFailed(e) => Some(e),
+            Self::ReadFailed(e) => Some(e),
+        }
+    }
+}
```

Note: This requires tightening the bound on `E` from `fmt::Display` to `std::error::Error`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs` around lines 67 - 87, The ClaimResolveError's StoreFailed and ReadFailed variants don't expose their inner errors via the standard error chain; update the Error impl for ClaimResolveError to return the inner errors from source() and tighten the generic bound on E (in the enum and impl) from fmt::Display to at least dyn std::error::Error (e.g., E: std::error::Error + fmt::Display + ... as needed) so you can implement fn source(&self) -> Option<&(dyn std::error::Error + 'static)>, matching Self::StoreFailed(e) -> Some(e) and Self::ReadFailed(e) -> Some(e), and None for MissingKey; ensure the trait bounds on ClaimResolveError<E> and its Error impl are adjusted accordingly (refer to ClaimResolveError, StoreFailed, ReadFailed, and the impl std::error::Error for ClaimResolveError<E>).
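A self-contained version of that pattern; the types below are simplified stand-ins for the crate's actual definitions, but they show how `source()` makes the wrapped errors reachable through the standard chain:

```rust
use std::error::Error;
use std::fmt;

/// Simplified stand-in for ClaimResolveError with a generic store error.
#[derive(Debug)]
enum ClaimResolveError<E> {
    MissingKey,
    StoreFailed(E),
    ReadFailed(std::io::Error),
}

impl<E> fmt::Display for ClaimResolveError<E> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::MissingKey => write!(f, "claim key header missing"),
            Self::StoreFailed(_) => write!(f, "object store lookup failed"),
            Self::ReadFailed(_) => write!(f, "reading stored payload failed"),
        }
    }
}

impl<E: Error + 'static> Error for ClaimResolveError<E> {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            Self::MissingKey => None,
            Self::StoreFailed(e) => Some(e),
            Self::ReadFailed(e) => Some(e),
        }
    }
}

fn main() {
    let read: ClaimResolveError<std::io::Error> = ClaimResolveError::ReadFailed(
        std::io::Error::new(std::io::ErrorKind::NotFound, "blob missing"),
    );
    let store: ClaimResolveError<std::io::Error> = ClaimResolveError::StoreFailed(
        std::io::Error::new(std::io::ErrorKind::NotFound, "bucket missing"),
    );
    let missing: ClaimResolveError<std::io::Error> = ClaimResolveError::MissingKey;
    // Wrapped errors are now reachable via the standard error chain.
    assert!(read.source().is_some());
    assert!(store.source().is_some());
    assert!(missing.source().is_none());
}
```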
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rsworkspace/crates/trogon-nats/src/jetstream/mocks.rs`:
- Around line 504-529: The Reader type and returned value in
MockObjectStore::get currently use std::io::Cursor<Vec<u8>>, which does not
implement tokio::io::AsyncRead; change the associated type Reader in the
ObjectStoreGet impl to an async-compatible reader (e.g.,
futures::io::Cursor<Vec<u8>>) and construct that type in MockObjectStore::get
(or alternatively wrap the std::io::Cursor with tokio_util::io::SyncIoBridge
before returning). Update the signature for type Reader = ... and replace
std::io::Cursor::new(...) in the map branch with the chosen async-compatible
constructor so the Reader satisfies AsyncRead + Unpin + Send for resolve_claim()
to await reader.read_to_end().
---
Outside diff comments:
In `@rsworkspace/crates/trogon-nats/src/jetstream/publish.rs`:
- Around line 10-16: The PublishOutcome enum currently discards the concrete
store error via the StoreFailed marker; change StoreFailed to carry the typed
store error (e.g., StoreFailed(S) or StoreFailed(Box<dyn std::error::Error +
Send + Sync>) or refactor into a PublishError enum that wraps both publish and
store errors) so callers and log_on_error() can access the original store
failure; update the PublishOutcome definition (refer to PublishOutcome and the
StoreFailed variant), adjust places that construct StoreFailed to pass the
actual store error, and update match arms (including log_on_error and any
callers handling PublishOutcome) to pattern-match and propagate/log the carried
typed error instead of losing context.
In `@rsworkspace/crates/trogon-source-gitlab/src/config.rs`:
- Around line 24-58: GitlabConfig::from_env no longer reads GITLAB_MAX_BODY_SIZE
so deployments with that env var silently get the hardcoded HTTP_BODY_SIZE_MAX;
update GitlabConfig::from_env to read and parse "GITLAB_MAX_BODY_SIZE" (like
other vars), expose it as a config field (e.g., body_max_size or
http_body_size_max) and have server code use that field instead of the fixed
HTTP_BODY_SIZE_MAX constant, or alternatively detect
env.var("GITLAB_MAX_BODY_SIZE").ok() and emit a clear warning/error at startup;
modify the constructor logic in GitlabConfig::from_env and the server code that
references HTTP_BODY_SIZE_MAX to use the new config field (symbols:
GitlabConfig::from_env, HTTP_BODY_SIZE_MAX, and the server handler that sets
body size).
---
Duplicate comments:
In `@rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs`:
- Around line 121-130: The small-payload direct-publish branch currently
forwards headers unchanged, so callers can inject reserved "Trogon-Claim-*"
headers and confuse consumers; before calling super::publish::publish_event in
the payload.len() <= self.max_payload.threshold() branch (in ClaimCheckPublisher
/ claim_check.rs), remove/clear any headers whose names start with
"Trogon-Claim-" (case-insensitive) from the headers map/collection so the
publish path never forwards those reserved claim-check headers.
In `@rsworkspace/crates/trogon-source-slack/src/server.rs`:
- Around line 122-147: The router currently uses a hardcoded HTTP_BODY_SIZE_MAX
which can diverge from the runtime-derived MaxPayload used by the
ClaimCheckPublisher; update the API so the chosen runtime limit is threaded into
routing (or assert equality at startup) to avoid Axum rejecting requests before
claim-check fallback runs: change router()/router_with_clock() to accept an
explicit body_limit (or MaxPayload/usize) and replace the
DefaultBodyLimit::max(HTTP_BODY_SIZE_MAX.as_usize()) usage with that parameter,
or alternatively add a startup check in main (where
MaxPayload::from_server_limit is derived) that compares HTTP_BODY_SIZE_MAX to
the runtime MaxPayload and fails fast if they differ; reference symbols: router,
router_with_clock, DefaultBodyLimit::max, HTTP_BODY_SIZE_MAX,
MaxPayload::from_server_limit, and ClaimCheckPublisher to implement the fix.
---
Nitpick comments:
In `@rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs`:
- Around line 67-87: The ClaimResolveError's StoreFailed and ReadFailed variants
don't expose their inner errors via the standard error chain; update the Error
impl for ClaimResolveError to return the inner errors from source() and tighten
the generic bound on E (in the enum and impl) from fmt::Display to at least dyn
std::error::Error (e.g., E: std::error::Error + fmt::Display + ... as needed) so
you can implement fn source(&self) -> Option<&(dyn std::error::Error +
'static)>, matching Self::StoreFailed(e) -> Some(e) and Self::ReadFailed(e) ->
Some(e), and None for MissingKey; ensure the trait bounds on
ClaimResolveError<E> and its Error impl are adjusted accordingly (refer to
ClaimResolveError, StoreFailed, ReadFailed, and the impl std::error::Error for
ClaimResolveError<E>).
In `@rsworkspace/crates/trogon-source-slack/src/server.rs`:
- Around line 564-573: wrap_publisher currently hardcodes
MaxPayload::from_server_limit(usize::MAX) so the claim-check path is never
exercised; modify wrap_publisher (or add an overload/test helper) to accept a
MaxPayload (or usize limit) parameter and use that when constructing
ClaimCheckPublisher, then add one integration test that calls the helper with a
tiny limit (e.g., 1) to force blob storage and assert MockObjectStore is used;
keep existing tests using the default MAX limit to preserve current behavior.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7dbc932b-2896-43a7-b54f-e004a56caf75
⛔ Files ignored due to path filters (1)
`rsworkspace/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (30)
- devops/docker/compose/.env.example
- devops/docker/compose/compose.yml
- rsworkspace/crates/AGENTS.md
- rsworkspace/crates/trogon-nats/Cargo.toml
- rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mocks.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mod.rs
- rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs
- rsworkspace/crates/trogon-nats/src/jetstream/publish.rs
- rsworkspace/crates/trogon-source-github/Cargo.toml
- rsworkspace/crates/trogon-source-github/src/config.rs
- rsworkspace/crates/trogon-source-github/src/constants.rs
- rsworkspace/crates/trogon-source-github/src/main.rs
- rsworkspace/crates/trogon-source-github/src/server.rs
- rsworkspace/crates/trogon-source-gitlab/Cargo.toml
- rsworkspace/crates/trogon-source-gitlab/src/config.rs
- rsworkspace/crates/trogon-source-gitlab/src/constants.rs
- rsworkspace/crates/trogon-source-gitlab/src/main.rs
- rsworkspace/crates/trogon-source-gitlab/src/server.rs
- rsworkspace/crates/trogon-source-linear/src/constants.rs
- rsworkspace/crates/trogon-source-linear/src/main.rs
- rsworkspace/crates/trogon-source-linear/src/server.rs
- rsworkspace/crates/trogon-source-slack/Cargo.toml
- rsworkspace/crates/trogon-source-slack/src/config.rs
- rsworkspace/crates/trogon-source-slack/src/constants.rs
- rsworkspace/crates/trogon-source-slack/src/main.rs
- rsworkspace/crates/trogon-source-slack/src/server.rs
- rsworkspace/crates/trogon-std/Cargo.toml
- rsworkspace/crates/trogon-std/src/http.rs
- rsworkspace/crates/trogon-std/src/lib.rs
💤 Files with no reviewable changes (5)
- devops/docker/compose/.env.example
- rsworkspace/crates/trogon-source-github/Cargo.toml
- rsworkspace/crates/trogon-source-gitlab/Cargo.toml
- rsworkspace/crates/trogon-source-slack/Cargo.toml
- devops/docker/compose/compose.yml
✅ Files skipped from review due to trivial changes (5)
- rsworkspace/crates/trogon-nats/Cargo.toml
- rsworkspace/crates/trogon-std/Cargo.toml
- rsworkspace/crates/trogon-std/src/lib.rs
- rsworkspace/crates/trogon-std/src/http.rs
- rsworkspace/crates/trogon-source-github/src/server.rs
🚧 Files skipped from review as they are similar to previous changes (9)
- rsworkspace/crates/trogon-source-linear/src/constants.rs
- rsworkspace/crates/AGENTS.md
- rsworkspace/crates/trogon-source-gitlab/src/constants.rs
- rsworkspace/crates/trogon-source-slack/src/config.rs
- rsworkspace/crates/trogon-source-slack/src/constants.rs
- rsworkspace/crates/trogon-source-github/src/config.rs
- rsworkspace/crates/trogon-source-linear/src/main.rs
- rsworkspace/crates/trogon-source-github/src/constants.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mod.rs
0de338a to 7f9868a — Compare
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
rsworkspace/crates/trogon-source-telegram/src/config.rs (1)
13-21: ⚠️ Potential issue | 🟡 Minor — Remove the stale `TELEGRAM_MAX_BODY_SIZE` doc entry.
`TelegramSourceConfig::from_env()` no longer reads that variable, and the fixed limit now comes from `rsworkspace/crates/trogon-source-telegram/src/constants.rs:12`. Leaving this bullet in place tells operators they can tune a setting that is silently ignored.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `rsworkspace/crates/trogon-source-telegram/src/config.rs` around lines 13-21: the doc comment still lists the stale `TELEGRAM_MAX_BODY_SIZE` bullet even though `TelegramSourceConfig::from_env()` no longer reads that env var and the fixed limit is defined in constants.rs (the `MAX_BODY_SIZE` constant). Remove the "- `TELEGRAM_MAX_BODY_SIZE`: maximum body size..." bullet so the documentation matches the actual behavior, and optionally add a short note pointing readers to the `MAX_BODY_SIZE` constant if you want to document the fixed limit.

rsworkspace/crates/trogon-source-discord/src/server.rs (1)
70-86: ⚠️ Potential issue | 🟠 Major — Don't return `400` after an unroutable publish failed.
This helper only logs failures, so invalid or unknown interactions still get a `400` even when the claim-check/DLQ write failed. That acknowledges a request we did not durably preserve. Return a success signal/status from `publish_unroutable()` and upgrade the response to `500` on failure.
💡 Suggested shape

```diff
 async fn publish_unroutable<P: JetStreamPublisher, S: ObjectStorePut>(
     publisher: &ClaimCheckPublisher<P, S>,
     subject_prefix: &str,
     reason: &str,
     body: Bytes,
     ack_timeout: Duration,
-) {
+) -> bool {
     let subject = format!("{subject_prefix}.unroutable");
     let mut headers = async_nats::HeaderMap::new();
     headers.insert(NATS_HEADER_REJECT_REASON, reason);
     headers.insert(NATS_HEADER_PAYLOAD_KIND, "unroutable");
     let outcome = publisher
         .publish_event(subject, headers, body, ack_timeout)
         .await;
-    outcome.log_on_error("discord.unroutable");
+    if outcome.is_ok() {
+        true
+    } else {
+        outcome.log_on_error("discord.unroutable");
+        false
+    }
 }
```

Then have each caller map `false` to `StatusCode::INTERNAL_SERVER_ERROR`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `rsworkspace/crates/trogon-source-discord/src/server.rs` around lines 70-86: `publish_unroutable` currently only logs failures and always treats the request as handled. Change its signature to return a success indicator (e.g. `bool` or `Result`), inspect the `publisher.publish_event` outcome and return success or failure (preserving the existing `outcome.log_on_error` call), and update every caller to map a failure to `StatusCode::INTERNAL_SERVER_ERROR` rather than `400` so that DLQ/claim-check write failures surface as 500s.

rsworkspace/crates/trogon-source-slack/src/server.rs (1)
76-92: ⚠️ Potential issue | 🟠 Major — Don't acknowledge Slack requests when the unroutable write failed.
`publish_unroutable()` only logs the `PublishOutcome`, but its callers still return their normal `200`/`400` responses. If the object-store write or JetStream publish fails, we still tell Slack the request was handled even though the payload was never durably recorded. Please return a success signal/status from this helper and translate failures into `500`.
💡 Suggested shape

```diff
 async fn publish_unroutable<P: JetStreamPublisher, S: ObjectStorePut>(
     publisher: &ClaimCheckPublisher<P, S>,
     subject_prefix: &NatsToken,
     reason: &str,
     body: Bytes,
     ack_timeout: Duration,
-) {
+) -> bool {
     let subject = format!("{}.unroutable", subject_prefix);
     let mut headers = async_nats::HeaderMap::new();
     headers.insert(NATS_HEADER_REJECT_REASON, reason);
     headers.insert(NATS_HEADER_PAYLOAD_KIND, "unroutable");
     let outcome = publisher
         .publish_event(subject, headers, body, ack_timeout)
         .await;
-    outcome.log_on_error("slack.unroutable");
+    if outcome.is_ok() {
+        true
+    } else {
+        outcome.log_on_error("slack.unroutable");
+        false
+    }
 }
```

Then have each caller map `false` to `StatusCode::INTERNAL_SERVER_ERROR`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `rsworkspace/crates/trogon-source-slack/src/server.rs` around lines 76-92: `publish_unroutable` currently swallows failures and always lets callers return `200`/`400`. Change it to return a `Result<(), E>` (or a `bool` success flag) indicating whether the object-store/JetStream write succeeded, still calling `outcome.log_on_error(...)` for logging instead of silently returning, then update each caller to treat a failure from `publish_unroutable` as `StatusCode::INTERNAL_SERVER_ERROR`.
♻️ Duplicate comments (1)
rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs (1)
121-129: ⚠️ Potential issue | 🟡 Minor — Strip reserved claim-check headers before the small-payload fast path.
If a caller supplies `Trogon-Claim-*` headers on a below-threshold payload, this branch publishes them unchanged and `resolve_claim()` will ignore the inline body on the consumer side. Please scrub or overwrite the reserved claim-check headers before the early return.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs` around lines 121-129: the early-return path (when `payload.len() <= self.max_payload.threshold()`) publishes headers unchanged, which lets caller-supplied `Trogon-Claim-*` headers leak. Before calling `publish_event(&self.publisher, subject, headers, payload, ack_timeout).await`, remove (or overwrite with safe defaults) any reserved claim-check headers from the header map so the consumer-side `resolve_claim()` will not treat the inline body as a claim.
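The scrub this comment asks for can be sketched with a plain map. This is illustrative only: the real code uses `async_nats::HeaderMap` (whose removal API may differ), and the `Trogon-Claim-` prefix is taken from the comment above, not verified against the crate.

```rust
use std::collections::HashMap;

/// Reserved prefix assumed from the review comment; the claim-check publisher
/// must own every header under this namespace.
const RESERVED_PREFIX: &str = "Trogon-Claim-";

/// Drop caller-supplied reserved headers so an inline (below-threshold)
/// publish can never masquerade as a claim reference to consumers.
fn scrub_reserved_headers(headers: &mut HashMap<String, String>) {
    headers.retain(|name, _| !name.starts_with(RESERVED_PREFIX));
}

fn main() {
    let mut headers = HashMap::from([
        ("Trogon-Claim-Bucket".to_string(), "spoofed".to_string()),
        ("Content-Type".to_string(), "application/json".to_string()),
    ]);
    scrub_reserved_headers(&mut headers);
    // Only the non-reserved header survives the fast path.
    assert!(!headers.contains_key("Trogon-Claim-Bucket"));
    assert!(headers.contains_key("Content-Type"));
}
```

Calling this at the top of the small-payload branch keeps the invariant "claim headers imply a stored object" true regardless of caller input.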
🧹 Nitpick comments (2)
rsworkspace/crates/trogon-source-discord/src/gateway.rs (1)
196-205: Add one test that actually trips the claim-check branch.
`wrap_publisher()` always uses `MaxPayload::from_server_limit(usize::MAX)`, so the new `ClaimCheckPublisher` wiring is only covered through the direct-publish path. A low-threshold case here would guard the object-store fallback introduced by this refactor.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `rsworkspace/crates/trogon-source-discord/src/gateway.rs` around lines 196-205: add a unit test that forces `ClaimCheckPublisher` into the object-store fallback by constructing it with a small payload threshold instead of `MaxPayload::from_server_limit(usize::MAX)` — e.g. a test helper or a variant of `wrap_publisher` that builds `ClaimCheckPublisher::new(publisher, MockObjectStore::new(), "test-bucket".to_string(), <small MaxPayload>)` — then publish a message larger than that threshold and assert the payload took the object-store path (verify `MockObjectStore` was used and direct publish was not).

rsworkspace/crates/trogon-source-discord/src/main.rs (1)
112-125: De-duplicate the claim bucket identifier.
`trogon-claims` is embedded twice in the same startup path. Pull it into one local constant/value so the provisioned bucket and the bucket encoded into claim refs cannot drift apart later.
♻️ Suggested cleanup

```diff
+    let claims_bucket = "trogon-claims".to_string();
     let object_store = NatsObjectStore::provision(
         &js_context,
         async_nats::jetstream::object_store::Config {
-            bucket: "trogon-claims".to_string(),
+            bucket: claims_bucket.clone(),
             ..Default::default()
         },
     )
     .await?;
     let client = NatsJetStreamClient::new(js_context);
     let publisher = ClaimCheckPublisher::new(
         client.clone(),
         object_store,
-        "trogon-claims".to_string(),
+        claims_bucket,
         max_payload,
     );
```

As per coding guidelines, "Prefer domain-specific value objects over primitives (e.g., `AcpPrefix` instead of `String`). Each type's factory must guarantee correctness at construction—invalid instances should be unrepresentable."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `rsworkspace/crates/trogon-source-discord/src/main.rs` around lines 112-125: the startup code duplicates the bucket identifier `"trogon-claims"` between `NatsObjectStore::provision` and `ClaimCheckPublisher::new`. Introduce a single local variable (e.g. `let claims_bucket = "trogon-claims".to_string()`) and use it for both the provision `Config.bucket` and the bucket argument to `ClaimCheckPublisher::new` so the provisioned bucket and encoded claim refs cannot drift apart (consider replacing the plain `String` with a domain-specific value object when available).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rsworkspace/crates/trogon-source-discord/src/main.rs`:
- Around line 128-130: Replace the stringifying error conversion used on the
provision call with the typed ServeError wrapper: instead of mapping the
CreateStreamError into a String via map_err(|e| format!(...)), map the error
into ServeError::Provision so the original CreateStreamError and its chain are
preserved; update imports to bring ServeError into scope and change the call on
trogon_source_discord::provision(&client, &config).await.map_err(...) to use
map_err(ServeError::Provision)?.
In `@rsworkspace/crates/trogon-source-gitlab/src/main.rs`:
- Line 26: MaxPayload::from_server_limit(nats.server_info().max_payload) can
produce a zero effective claim-check threshold (when server max_payload <= 8
KiB) which silently forces claim-check for all non-empty messages; update
startup validation to detect a zero threshold and fail fast with a clear error
(or change MaxPayload::from_server_limit to return a Result and propagate the
error), e.g., call MaxPayload::from_server_limit(...), check the returned
threshold is > 0, and if not return Err / abort startup with a descriptive
message referencing MaxPayload::from_server_limit and
nats.server_info().max_payload so the invalid server limit is rejected centrally
at startup.
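The startup validation suggested for the GitLab source can be sketched as follows. The real `MaxPayload` type lives in `trogon-nats`; the 8 KiB overhead constant, the `String` error type, and the exact method shape here are assumptions for illustration.

```rust
/// Assumed protocol-overhead reserve (the PR subtracts 8KB from the server limit).
const PROTOCOL_OVERHEAD: usize = 8 * 1024;

/// Sketch of a MaxPayload that fails fast instead of silently producing a
/// zero threshold (which would force claim-check for every non-empty message).
#[derive(Debug)]
struct MaxPayload(usize);

impl MaxPayload {
    fn from_server_limit(server_max_payload: usize) -> Result<Self, String> {
        let threshold = server_max_payload.saturating_sub(PROTOCOL_OVERHEAD);
        if threshold == 0 {
            // Reject the invalid server limit centrally at startup.
            return Err(format!(
                "NATS max_payload ({server_max_payload} bytes) is <= the \
                 {PROTOCOL_OVERHEAD}-byte protocol overhead; refusing to start"
            ));
        }
        Ok(Self(threshold))
    }

    fn threshold(&self) -> usize {
        self.0
    }
}

fn main() {
    // A server limit at or below the overhead is rejected outright.
    assert!(MaxPayload::from_server_limit(4 * 1024).is_err());
    // A sane limit yields the limit minus the reserved overhead.
    let mp = MaxPayload::from_server_limit(1024 * 1024).unwrap();
    assert_eq!(mp.threshold(), (1024 - 8) * 1024);
}
```

Returning a `Result` from the factory matches the repo guideline that invalid instances should be unrepresentable.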
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d23193d9-b7d7-46fa-a845-9e791ec10106
⛔ Files ignored due to path filters (1)
- `rsworkspace/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (41)
- devops/docker/compose/.env.example
- devops/docker/compose/compose.yml
- rsworkspace/crates/AGENTS.md
- rsworkspace/crates/trogon-nats/Cargo.toml
- rsworkspace/crates/trogon-nats/src/jetstream/claim_check.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mocks.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mod.rs
- rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs
- rsworkspace/crates/trogon-nats/src/jetstream/publish.rs
- rsworkspace/crates/trogon-source-discord/Cargo.toml
- rsworkspace/crates/trogon-source-discord/src/config.rs
- rsworkspace/crates/trogon-source-discord/src/constants.rs
- rsworkspace/crates/trogon-source-discord/src/gateway.rs
- rsworkspace/crates/trogon-source-discord/src/main.rs
- rsworkspace/crates/trogon-source-discord/src/server.rs
- rsworkspace/crates/trogon-source-github/Cargo.toml
- rsworkspace/crates/trogon-source-github/src/config.rs
- rsworkspace/crates/trogon-source-github/src/constants.rs
- rsworkspace/crates/trogon-source-github/src/main.rs
- rsworkspace/crates/trogon-source-github/src/server.rs
- rsworkspace/crates/trogon-source-gitlab/Cargo.toml
- rsworkspace/crates/trogon-source-gitlab/src/config.rs
- rsworkspace/crates/trogon-source-gitlab/src/constants.rs
- rsworkspace/crates/trogon-source-gitlab/src/main.rs
- rsworkspace/crates/trogon-source-gitlab/src/server.rs
- rsworkspace/crates/trogon-source-linear/src/constants.rs
- rsworkspace/crates/trogon-source-linear/src/main.rs
- rsworkspace/crates/trogon-source-linear/src/server.rs
- rsworkspace/crates/trogon-source-slack/Cargo.toml
- rsworkspace/crates/trogon-source-slack/src/config.rs
- rsworkspace/crates/trogon-source-slack/src/constants.rs
- rsworkspace/crates/trogon-source-slack/src/main.rs
- rsworkspace/crates/trogon-source-slack/src/server.rs
- rsworkspace/crates/trogon-source-telegram/Cargo.toml
- rsworkspace/crates/trogon-source-telegram/src/config.rs
- rsworkspace/crates/trogon-source-telegram/src/constants.rs
- rsworkspace/crates/trogon-source-telegram/src/main.rs
- rsworkspace/crates/trogon-source-telegram/src/server.rs
- rsworkspace/crates/trogon-std/Cargo.toml
- rsworkspace/crates/trogon-std/src/http.rs
- rsworkspace/crates/trogon-std/src/lib.rs
💤 Files with no reviewable changes (7)
- devops/docker/compose/.env.example
- rsworkspace/crates/trogon-source-telegram/Cargo.toml
- rsworkspace/crates/trogon-source-discord/Cargo.toml
- rsworkspace/crates/trogon-source-gitlab/Cargo.toml
- rsworkspace/crates/trogon-source-slack/Cargo.toml
- devops/docker/compose/compose.yml
- rsworkspace/crates/trogon-source-github/Cargo.toml
✅ Files skipped from review due to trivial changes (3)
- rsworkspace/crates/trogon-std/Cargo.toml
- rsworkspace/crates/trogon-nats/Cargo.toml
- rsworkspace/crates/trogon-nats/src/jetstream/object_store.rs
🚧 Files skipped from review as they are similar to previous changes (13)
- rsworkspace/crates/trogon-source-linear/src/constants.rs
- rsworkspace/crates/AGENTS.md
- rsworkspace/crates/trogon-source-slack/src/config.rs
- rsworkspace/crates/trogon-source-github/src/constants.rs
- rsworkspace/crates/trogon-nats/src/jetstream/publish.rs
- rsworkspace/crates/trogon-source-linear/src/main.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mod.rs
- rsworkspace/crates/trogon-std/src/lib.rs
- rsworkspace/crates/trogon-std/src/http.rs
- rsworkspace/crates/trogon-source-github/src/server.rs
- rsworkspace/crates/trogon-source-gitlab/src/server.rs
- rsworkspace/crates/trogon-nats/src/jetstream/mocks.rs
- rsworkspace/crates/trogon-source-github/src/main.rs
7f9868a to 331a279 — Compare
Cursor Bugbot has reviewed your changes and found 1 potential issue.
❌ Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, have a team admin enable autofix in the Cursor dashboard.
Reviewed by Cursor Bugbot for commit 331a279. Configure here.
Signed-off-by: Yordis Prieto <yordis.prieto@gmail.com>
331a279 to
8a82dba
Compare

Summary
- The `MaxPayload` threshold is read once at boot from the NATS server info (no hardcoded limits), with 8KB subtracted for protocol overhead
- Drops the `RequestBodyLimitLayer` / `max_body_size` config / `bytesize` + `tower-http` deps from GitHub, GitLab, and Slack sources, since size enforcement now lives in the publish layer
- Wires `ClaimCheckPublisher` as a decorator around `NatsJetStreamClient` at startup
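The publish-side decision this summary describes can be sketched as follows. The enum and function names here are hypothetical, not the actual `ClaimCheckPublisher` internals; only the threshold comparison mirrors the described behavior.

```rust
/// Illustrative outcome of the claim-check decision: either the payload
/// rides inline on the JetStream message, or it is written to the object
/// store and the message carries only a claim reference.
#[derive(Debug)]
enum PublishPlan {
    Inline,
    ClaimCheck { bucket: String, key: String },
}

/// Small payloads publish directly; oversized ones fall back to the store.
/// `threshold` is the boot-time server max_payload minus protocol overhead.
fn plan_publish(payload_len: usize, threshold: usize, bucket: &str) -> PublishPlan {
    if payload_len <= threshold {
        PublishPlan::Inline
    } else {
        // The real implementation generates a unique object key per message.
        PublishPlan::ClaimCheck {
            bucket: bucket.to_string(),
            key: format!("claim-{payload_len}"),
        }
    }
}

fn main() {
    assert!(matches!(
        plan_publish(512, 1024, "trogon-claims"),
        PublishPlan::Inline
    ));
    assert!(matches!(
        plan_publish(2048, 1024, "trogon-claims"),
        PublishPlan::ClaimCheck { .. }
    ));
}
```

Keeping this decision in the publish layer is what lets the HTTP sources drop their per-service body-size limits: any accepted payload is deliverable either inline or via the store.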