
[c-930] use metrics v2 to trace relevant code paths #84

Merged
maxim-inj merged 13 commits into v0.50.x-inj from c-930/trace-everything
Apr 11, 2026

Conversation


@dbrajovic dbrajovic commented Mar 30, 2026

Introduces injective metrics and traces code paths of interest

MERGE ONLY after InjectiveLabs/cometbft#10 has been merged and tagged, and this PR's go.mod has been updated accordingly

Description

Closes: #XXXX


Author Checklist

All items are required. If an item is not applicable, please add a note explaining why, and
link any relevant follow-up issues.

I have...

  • included the correct type prefix in the PR title
  • added ! to the type prefix if API or client breaking change
  • targeted the correct branch (see PR Targeting)
  • provided a link to the relevant issue or specification
  • followed the guidelines for building modules
  • included the necessary unit and integration tests
  • added a changelog entry to CHANGELOG.md
  • included comments for documenting Go code
  • updated the relevant documentation or specification
  • reviewed "Files changed" and left comments if necessary
  • run make lint and make test
  • confirmed all CI checks have passed

Reviewers Checklist

All items are required. Please add a note if an item is not applicable, and add your handle
next to the items you reviewed if you only reviewed selected items.

I have...

  • confirmed the correct type prefix in the PR title
  • confirmed ! in the type prefix if API or client breaking change
  • confirmed all author checklist items have been addressed
  • reviewed state machine logic
  • reviewed API design and naming
  • reviewed documentation is accurate
  • reviewed tests and test coverage
  • manually tested (if applicable)

Summary by CodeRabbit

  • New Features

    • Added broad telemetry/timing across core modules to improve observability and diagnostics.
  • Bug Fixes

    • Improved panic recovery logging and error propagation for more reliable runtime diagnostics.
    • Minor startup wiring change to ensure node initialization arguments are explicit.
  • Chores

    • Updated Go toolchain and refreshed numerous dependencies and module resolutions.

@linear

linear Bot commented Mar 30, 2026

@github-actions

@dbrajovic your pull request is missing a changelog!

@coderabbitai

coderabbitai Bot commented Mar 30, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Large, cross-module change that instruments many keepers and handlers with a cached metrics.Meter, unwraps SDK contexts (sdkCtx), and defers FuncTiming calls; converts many functions to named returns so deferred closures can observe the final error; updates numerous go.mod dependency versions and replacements; adds an extra trailing nil to node.NewNode calls; and applies small panic/recover fixes.

Changes

Cohort / File(s) Summary
BaseApp & Node startup
baseapp/abci.go, baseapp/baseapp.go, server/start.go, testutil/network/util.go
Fix recover handling by capturing rec := recover(), adjust internalFinalizeBlock/error assignment to use outer err, change executeTxs to named returns, and add explicit trailing nil argument to node.NewNode calls.
Top-level go.mod updates
go.mod, simapp/go.mod, store/go.mod, tests/go.mod, x/*/go.mod
Bumped Go versions/toolchain, upgraded many direct/indirect deps (grpc, protobuf, otel, testify), added github.com/InjectiveLabs/metrics/v2, and updated CometBFT replace targets to InjectiveLabs pseudo-versions.
Keeper instrumentation (bulk)
x/*/keeper/*.go, e.g. x/auth/keeper/*, x/bank/keeper/*, x/distribution/keeper/*, x/staking/keeper/*, x/gov/keeper/*, x/group/keeper/*, x/nft/keeper/*, x/feegrant/keeper/*, x/upgrade/keeper/*, x/slashing/keeper/*, x/evidence/keeper/*, x/mint/keeper/*, x/params/keeper/*, x/consensus/keeper/*, x/circuit/keeper/*
Add meter metrics.Meter + Meter(ctx) helpers to many Keepers; unwrap sdkCtx := sdk.UnwrapSDKContext(ctx) at entry; convert many functions to named returns (meterResult, err); route store/keeper calls through sdkCtx; add deferred FuncTiming(&sdkCtx, "<Name>")(&err) instrumentation.
gRPC / Msg server handlers
many *_query.go, msg_server.go across modules (auth, bank, distribution, staking, gov, group, nft, feegrant, etc.)
Change handlers to named returns, unwrap SDK context once, pass sdkCtx into keeper/pagination helpers, and defer per-endpoint FuncTiming to capture latency and error.
ABCI / module lifecycle
assorted x/*/module/abci.go, x/*/abci.go, module EndBlock/BeginBlock functions
Convert several BeginBlock/EndBlock functions to named returns and/or unwrap sdkCtx, and add deferred timing so block lifecycle timing observes final err.
Tests & legacy wiring
tests/integration/gov/keeper/keeper_test.go, x/gov/module.go, related tests
Legacy MsgServer construction updated to pass keeper reference into NewLegacyMsgServerImpl and test wiring adjusted accordingly.
Store / iterator plumbing
various store.go, *_query.go files
Open KV stores and run pagination/iterators using sdkCtx instead of raw ctx; adjust closures to assign into outer err to avoid shadowing.
Migrations & whitespace
assorted migrations.go
Non-functional whitespace/blank-line insertions in migration helpers.
Misc small fixes
assorted helpers (alias functions, baseapp changes, recover logging)
Replace short-scope if err := ... with if err = ... where required for named err, and fix recover logging to avoid shadowing (capture rec := recover()).
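The instrumentation pattern described above (named returns plus a deferred timing closure) can be sketched in isolation. This is a minimal stand-in: funcTiming below only mimics the shape attributed to metrics/v2's FuncTiming in this walkthrough (a call at function entry returns a closure that is deferred with a pointer to the named error); the real API may differ, and getBalance is a hypothetical handler.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// funcTiming is a stand-in for the FuncTiming helper described in the
// walkthrough: invoked at function entry, it returns a closure that is
// deferred and observes both the elapsed time and the final error.
func funcTiming(name string) func(err *error) {
	start := time.Now()
	return func(err *error) {
		status := "ok"
		if err != nil && *err != nil {
			status = "error"
		}
		fmt.Printf("%s took %s (%s)\n", name, time.Since(start), status)
	}
}

// getBalance uses a named return so the deferred closure sees the final
// error value; with an unnamed return the closure would observe nothing.
func getBalance(addr string) (amount int64, err error) {
	defer funcTiming("GetBalance")(&err)
	if addr == "" {
		return 0, errors.New("empty address")
	}
	return 42, nil
}

func main() {
	fmt.Println(getBalance("inj1xyz"))
	fmt.Println(getBalance(""))
}
```

The key detail is that the deferred closure receives &err, so it reads whatever the function ultimately returns, including errors assigned inside nested closures.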

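The recover fix mentioned in the BaseApp and "Misc small fixes" rows — capturing rec := recover() instead of shadowing the named err — can be illustrated with a small sketch (function names here are illustrative, not from the diff):

```go
package main

import "fmt"

// buggy shows the pre-fix pattern: the short-declared err inside the
// deferred func shadows the named return, so the caller gets nil even
// though a panic occurred.
func buggy() (err error) {
	defer func() {
		if err := recover(); err != nil { // shadows the outer err
			fmt.Println("recovered (but caller sees nil):", err)
		}
	}()
	panic("boom")
}

// fixed shows the post-fix pattern: the recovered value goes into its
// own variable (rec), and the named err is assigned explicitly.
func fixed() (err error) {
	defer func() {
		if rec := recover(); rec != nil {
			err = fmt.Errorf("recovered from panic: %v", rec)
		}
	}()
	panic("boom")
}

func main() {
	fmt.Println("buggy returns:", buggy())
	fmt.Println("fixed returns:", fixed())
}
```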
Sequence Diagram(s)

sequenceDiagram
  participant Client as Client
  participant GRPC as "gRPC / MsgServer"
  participant Keeper as Keeper
  participant Store as "KV Store"
  participant Meter as Meter

  Client->>GRPC: request (context ctx)
  GRPC->>GRPC: sdkCtx := sdk.UnwrapSDKContext(ctx)
  GRPC->>Meter: defer FuncTiming(&sdkCtx, "Handler")(&err)
  GRPC->>Keeper: Keeper.Method(sdkCtx, ...)
  Keeper->>Keeper: (use cached Meter) sdkCtx := sdk.UnwrapSDKContext(ctx)
  Keeper->>Meter: defer FuncTiming(&sdkCtx, "Method")(&err)
  Keeper->>Store: OpenKVStore(sdkCtx) / read/write
  Store-->>Keeper: data/result
  Keeper-->>GRPC: response / error
  GRPC-->>Client: reply

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

"I’m a rabbit with a tiny meter, I hop and time each call,
I unwrap contexts, defer the ticks, and chase down every fall.
I log the little recoveries, avoid shadowed errs with care,
Hop, measure, ship—instrumented code is dancing in the air. 🐇📏"



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
go.mod (1)

64-64: ⚠️ Potential issue | 🔴 Critical

Critical: Pipeline failing due to confirmed vulnerable dependencies.

The following confirmed vulnerabilities must be addressed before merging:

  1. google.golang.org/grpc@1.75.0 (Critical - GHSA-p77j-4mvh-x3m3): Authorization bypass via missing leading slash in :path header. Upgrade to v1.79.3 or later.

  2. go.opentelemetry.io/otel/sdk@1.38.0 (High - GHSA-9h8m-3fm2-qjrq): Arbitrary code execution via PATH hijacking on macOS/Darwin systems. Upgrade to v1.40.0 or later.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@go.mod` at line 64, Update the vulnerable module versions in go.mod: bump
google.golang.org/grpc to v1.79.3 or later and go.opentelemetry.io/otel/sdk to
v1.40.0 or later, then run dependency resolution (e.g., use go get
google.golang.org/grpc@v1.79.3 and go get go.opentelemetry.io/otel/sdk@v1.40.0)
and go mod tidy to ensure the upgraded transitive deps are resolved; verify the
updates affect any references to these modules in build/test pipelines and
re-run CI to confirm the vulnerabilities are resolved.
x/slashing/keeper/signing_info.go (2)

261-289: ⚠️ Potential issue | 🟠 Major

Return immediately when the callback asks to stop.

The contract says iteration halts when the callback returns true, but Lines 283-285 only break the inner bit loop. The outer chunk loop keeps running, so callers can still receive more callbacks after requesting stop.

Suggested fix
-			if cb(index, bs.Test(i)) {
-				break
-			}
+			if cb(index, bs.Test(i)) {
+				return nil
+			}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/slashing/keeper/signing_info.go` around lines 261 - 289,
IterateMissedBlockBitmap isn't stopping the outer chunk loop when the callback
cb(index, ...) returns true — change the control flow so that when cb returns
true you return immediately (e.g., return nil) instead of breaking only the
inner bit loop; update the inner loop in IterateMissedBlockBitmap to return upon
cb true so the function exits and no further chunks/callbacks are processed,
keeping the existing deferred iter.Close and error handling intact.

197-230: ⚠️ Potential issue | 🟠 Major

Propagate the bitmap write error.

Line 229 ignores the result of setMissedBlockBitmapChunk. A failed store write currently returns nil, so missed-block state can silently diverge from slashing state.

Suggested fix
-	k.setMissedBlockBitmapChunk(sdkCtx, addr, chunkIndex, updatedChunk)
-	return nil
+	if err := k.setMissedBlockBitmapChunk(sdkCtx, addr, chunkIndex, updatedChunk); err != nil {
+		return errors.Wrapf(err, "failed to persist bitmap chunk; index: %d", index)
+	}
+	return nil
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/slashing/keeper/signing_info.go` around lines 197 - 230, In
SetMissedBlockBitmapValue, the return value from
k.setMissedBlockBitmapChunk(sdkCtx, addr, chunkIndex, updatedChunk) is ignored
so storage errors are dropped; capture its error, wrap it (e.g., with context
like "failed to set bitmap chunk; index: %d") and return it from
SetMissedBlockBitmapValue so write failures propagate to the caller (referencing
the function SetMissedBlockBitmapValue and the method
setMissedBlockBitmapChunk).
x/staking/keeper/validator.go (1)

329-351: ⚠️ Potential issue | 🟠 Major

Close both iterators before returning.

GetValidators and IterateLastValidatorPowers each open a store iterator but never close it. Any decode failure, handler stop, or normal return leaks the iterator.

Suggested fix
 	iterator, err := store.Iterator(types.ValidatorsKey, storetypes.PrefixEndBytes(types.ValidatorsKey))
 	if err != nil {
 		return nil, err
 	}
+	defer iterator.Close()
@@
 	iter, err := k.LastValidatorsIterator(sdkCtx)
 	if err != nil {
 		return err
 	}
+	defer iter.Close()

Also applies to: 453-475

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/staking/keeper/validator.go` around lines 329 - 351, GetValidators (and
similarly IterateLastValidatorPowers) opens a KV iterator that is never closed,
leaking resources on errors or normal returns; fix by ensuring the iterator is
closed on all paths (use a defer iterator.Close() immediately after successful
store.Iterator(...) return, and ensure any early returns still run the defer)
for functions GetValidators and IterateLastValidatorPowers so the iterator is
always released even when UnmarshalValidator or other steps return an error.
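A minimal sketch of the suggested defer-based fix, using a hypothetical in-memory iterator in place of the real store iterator, shows why the defer guarantees cleanup even on early returns:

```go
package main

import "fmt"

// fakeIterator mimics a store iterator whose Close must run on every
// return path: early error, callback-requested stop, or normal completion.
type fakeIterator struct {
	items  []string
	pos    int
	closed *bool
}

func (it *fakeIterator) Valid() bool   { return it.pos < len(it.items) }
func (it *fakeIterator) Next()         { it.pos++ }
func (it *fakeIterator) Value() string { return it.items[it.pos] }
func (it *fakeIterator) Close() error  { *it.closed = true; return nil }

// iterate defers Close immediately after opening the iterator, mirroring
// the fix suggested for GetValidators / IterateLastValidatorPowers.
func iterate(items []string, stopAt string, closed *bool) []string {
	iter := &fakeIterator{items: items, closed: closed}
	defer iter.Close() // runs on the early return below as well

	var seen []string
	for ; iter.Valid(); iter.Next() {
		v := iter.Value()
		seen = append(seen, v)
		if v == stopAt {
			return seen // iterator is still closed by the defer
		}
	}
	return seen
}

func main() {
	closed := false
	got := iterate([]string{"a", "b", "c"}, "b", &closed)
	fmt.Println(got, "closed:", closed)
}
```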
🟠 Major comments (18)
x/crisis/keeper/keeper.go-21-21 (1)

21-21: ⚠️ Potential issue | 🟠 Major

Guard lazy meter initialization against concurrent access.

k.meter is mutated in the hot path without synchronization. Concurrent calls can race on first initialization and produce undefined behavior under race conditions.

Proposed fix
 import (
 	"context"
 	"fmt"
+	"sync"
 	"time"
@@
 type Keeper struct {
 	meter          metrics.Meter
+	meterOnce      sync.Once
 	routes         []types.InvarRoute
@@
 func (k *Keeper) Meter(ctx context.Context) metrics.Meter {
-	if k.meter == nil {
-		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName))
-	}
+	k.meterOnce.Do(func() {
+		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName))
+	})
 
 	return k.meter
 }

Also applies to: 138-144

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/crisis/keeper/keeper.go` at line 21, The lazy initialization of k.meter on
the keeper struct is racy; add a synchronization primitive to the keeper (e.g.,
a sync.Mutex field or a sync.Once) and use it when checking/setting k.meter so
concurrent callers cannot race on first initialization; update all places that
lazily mutate k.meter (the hot-path methods that reference k.meter) to acquire
the mutex or use the Once to perform initialization and then read k.meter,
ensuring a double-check (nil check before and after locking if using a mutex) to
avoid unnecessary locking.
x/gov/keeper/keeper.go-202-208 (1)

202-208: ⚠️ Potential issue | 🟠 Major

Same Meter caching issue as staking keeper.

The Meter method uses a pointer receiver, but Logger (line 155) uses a value receiver. This has the same ineffective caching problem described in x/staking/keeper/keeper.go—the meter will be re-created on each call from value receiver methods.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/gov/keeper/keeper.go` around lines 202 - 208, The Meter method is using a
pointer receiver while Logger uses a value receiver, causing cached fields to be
ineffective for value-receiver methods; change the Meter signature to use a
value receiver (func (k Keeper) Meter(...)) so it matches Logger and prevents
the meter from being re-created when called from value-receiver methods, keeping
the caching behavior consistent for the Keeper type.
x/staking/keeper/keeper.go-191-197 (1)

191-197: ⚠️ Potential issue | 🟠 Major

Meter caching ineffective due to value receivers.

The Meter method uses a pointer receiver (*Keeper) to cache the meter, but most keeper methods that call it (e.g., Logger, GetLastTotalPower, SetLastTotalPower) use value receivers (Keeper). When a value receiver method calls k.Meter(sdkCtx), the implicit address-of creates a copy, so the cached k.meter assignment doesn't persist across calls. This results in creating a new SubMeter on every invocation, defeating the lazy initialization purpose.

Consider either:

  1. Change all keeper methods to pointer receivers, or
  2. Initialize the meter in NewKeeper instead of lazy initialization
Proposed fix: Initialize meter in NewKeeper
 func NewKeeper(
 	cdc codec.BinaryCodec,
 	storeService storetypes.KVStoreService,
 	ak types.AccountKeeper,
 	bk types.BankKeeper,
 	authority string,
 	validatorAddressCodec addresscodec.Codec,
 	consensusAddressCodec addresscodec.Codec,
+	meter metrics.Meter,
 ) *Keeper {
 	// ... existing validation ...
 
 	return &Keeper{
+		meter:                 meter,
 		storeService:          storeService,
 		cdc:                   cdc,
 		// ... rest of fields ...
 	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/staking/keeper/keeper.go` around lines 191 - 197, The Meter method's lazy
cache on *Keeper is ineffective because many callers use value receivers (e.g.,
Logger, GetLastTotalPower, SetLastTotalPower) so the cached k.meter doesn't
persist; fix by initializing the meter once in NewKeeper instead of lazy init:
in NewKeeper compute and assign k.meter using ctx.Meter().SubMeter(...)
(referencing Keeper.Meter and NewKeeper) and simplify/remove the lazy set in
Meter so it just returns k.meter (or keep a nil-check that does not attempt to
set); this avoids relying on pointer receiver semantics and ensures a single
SubMeter instance is used.
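The receiver-mismatch pitfall this comment describes is easy to reproduce in isolation. The sketch below (a hypothetical keeper type, with a string standing in for metrics.Meter) shows why a pointer-receiver cache write is lost when the call goes through a value receiver:

```go
package main

import "fmt"

type keeper struct {
	meter string
	inits int
}

// meterPtr caches via a pointer receiver; the write persists only when
// the receiver points at the original value, not at a copy.
func (k *keeper) meterPtr() string {
	if k.meter == "" {
		k.inits++
		k.meter = "sub-meter"
	}
	return k.meter
}

// logger uses a value receiver, as many keeper methods in this PR do.
// Calling k.meterPtr() here takes the address of the *copy*, so the
// cached assignment is discarded when logger returns.
func (k keeper) logger() string {
	return k.meterPtr()
}

func main() {
	k := keeper{}
	k.logger()
	k.logger()
	fmt.Println("inits after value-receiver calls:", k.inits) // cache never persisted
	k.meterPtr()
	k.meterPtr()
	fmt.Println("inits after pointer-receiver calls:", k.inits) // initialized once
}
```

This is why the comment recommends either unifying on pointer receivers or initializing the meter once in the constructor.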
x/auth/keeper/keeper.go-305-311 (1)

305-311: ⚠️ Potential issue | 🟠 Major

Data race in lazy meter initialization.

The Meter method has a potential data race: multiple goroutines can concurrently read and write ak.meter without synchronization. While the functional outcome may be equivalent (all goroutines would create similar sub-meters), this is a race condition that can be detected by the Go race detector.

Consider either:

  1. Initialize the meter in NewAccountKeeper (if context is available at that point), or
  2. Use sync.Once for thread-safe lazy initialization.
🔒 Proposed fix using sync.Once

Add a sync.Once field to the struct:

type AccountKeeper struct {
	// ... existing fields ...
	meter     metrics.Meter
	meterOnce sync.Once
}

Then update the Meter method:

 func (ak *AccountKeeper) Meter(ctx context.Context) metrics.Meter {
-	if ak.meter == nil {
-		ak.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName))
-	}
-
-	return ak.meter
+	ak.meterOnce.Do(func() {
+		ak.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName))
+	})
+	return ak.meter
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/auth/keeper/keeper.go` around lines 305 - 311, The Meter method has a data
race on ak.meter; add a sync.Once field (e.g. meterOnce sync.Once) to the
AccountKeeper struct and change Meter to perform thread-safe lazy init via
ak.meterOnce.Do(func(){ ak.meter =
sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc",
types.ModuleName)) }), returning ak.meter afterwards; alternatively, if a
suitable context is available when constructing the keeper, initialize ak.meter
in NewAccountKeeper instead. Ensure you import sync and remove unsynchronized
writes to ak.meter elsewhere.
x/authz/keeper/keeper.go-486-492 (1)

486-492: ⚠️ Potential issue | 🟠 Major

Data race in lazy meter initialization (same issue as x/auth/keeper/keeper.go).

This Meter method has the same race condition: concurrent goroutines can simultaneously check k.meter == nil and write to it without synchronization.

Apply the same fix pattern using sync.Once:

🔒 Proposed fix using sync.Once

Add a sync.Once field to the struct:

 type Keeper struct {
 	storeService corestoretypes.KVStoreService
 	cdc          codec.Codec
 	router       baseapp.MessageRouter
 	authKeeper   authz.AccountKeeper
 	bankKeeper   authz.BankKeeper
 	meter        metrics.Meter
+	meterOnce    sync.Once
 }

Then update the Meter method:

 func (k *Keeper) Meter(ctx context.Context) metrics.Meter {
-	if k.meter == nil {
-		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(authz.ModuleName, metrics.Tag("svc", authz.ModuleName))
-	}
-
-	return k.meter
+	k.meterOnce.Do(func() {
+		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(authz.ModuleName, metrics.Tag("svc", authz.ModuleName))
+	})
+	return k.meter
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/authz/keeper/keeper.go` around lines 486 - 492, The Meter method on Keeper
has a data race when lazily initializing k.meter; add a sync.Once field (e.g.,
meterOnce) to the Keeper struct and use meterOnce.Do(...) inside Keeper.Meter to
perform the one-time initialization using
sdk.UnwrapSDKContext(ctx).Meter().SubMeter(authz.ModuleName, metrics.Tag("svc",
authz.ModuleName)), leaving the method to simply return k.meter after the Do
call; follow the same pattern used in x/auth/keeper/keeper.go to ensure
thread-safe lazy init.
x/nft/keeper/keeper.go-17-18 (1)

17-18: ⚠️ Potential issue | 🟠 Major

Remove the mutable meter field and eliminate lazy initialization to prevent cache corruption and race conditions.

The Meter method uses a pointer receiver while all other keeper methods use value receivers. When value-receiver methods call k.Meter(), Go implicitly creates a temporary pointer to the copy (&k), and any mutations to k.meter are lost when the method returns. This causes:

  1. Receiver-copy loss: The lazy-initialized cache is never persisted back to the original keeper.
  2. Race condition: Concurrent calls to Meter() from different value copies race to initialize k.meter on separate copies.
Recommended fix: Remove mutable cache
 type Keeper struct {
-	meter        metrics.Meter
 	cdc          codec.BinaryCodec
 	storeService store.KVStoreService
 	bk           nft.BankKeeper
 	ac           address.Codec
 }
@@
-func (k *Keeper) Meter(ctx context.Context) metrics.Meter {
-	if k.meter == nil {
-		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(nft.ModuleName, metrics.Tag("svc", nft.ModuleName))
-	}
-
-	return k.meter
+func (k Keeper) Meter(ctx context.Context) metrics.Meter {
+	return sdk.UnwrapSDKContext(ctx).Meter().SubMeter(nft.ModuleName, metrics.Tag("svc", nft.ModuleName))
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/nft/keeper/keeper.go` around lines 17 - 18, Remove the mutable meter field
from the keeper struct and stop lazy-initializing it in Meter; instead make
Meter a non-mutating accessor that returns a fresh or globally initialized
metrics.Meter without writing to k.meter. Concretely: delete the meter field
declaration, change Meter to use a metrics provider/factory (or a package-level
singleton initialized with sync.Once in an init/path-safe place) and return that
meter directly from Meter (keep Meter as a value receiver and do not assign to
any keeper fields). Update any references to k.meter to call k.Meter() and
ensure no code mutates the keeper inside Meter to avoid the receiver-copy loss
and race conditions.
x/distribution/keeper/validator.go-181-181 (1)

181-181: ⚠️ Potential issue | 🟠 Major

Propagate incrementReferenceCount failure.

Line 181 drops the returned error. If the reference-count update fails, execution still writes slash state, which can leave historical rewards tracking inconsistent.

💡 Proposed fix
-	k.incrementReferenceCount(sdkCtx, valAddr, newPeriod)
+	if err := k.incrementReferenceCount(sdkCtx, valAddr, newPeriod); err != nil {
+		return err
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/distribution/keeper/validator.go` at line 181, The call to
k.incrementReferenceCount(sdkCtx, valAddr, newPeriod) currently ignores its
returned error; update the caller (the function containing that call) to check
the returned error and propagate it (return the error) instead of dropping it so
execution halts when the reference-count update fails and avoids writing slash
state inconsistently. Locate the incrementReferenceCount(...) call in
validator.go and change the call to capture its error (err :=
k.incrementReferenceCount(...)); if err != nil { return err } (or wrap and
return the error consistent with the surrounding function's error handling).
x/evidence/keeper/grpc_query.go-48-50 (1)

48-50: ⚠️ Potential issue | 🟠 Major

Handle Evidences.Get errors instead of discarding them.

At line 48, the returned error is ignored, which converts all failures—including storage and codec errors—into codes.NotFound. Other call sites in the module properly check this error (see keeper.go:107 and genesis.go:26).

The proposed fix captures the error and checks it. Note that if the error is not collections.ErrNotFound, it should return an appropriate error code (e.g., codes.Internal) rather than codes.NotFound to accurately reflect the failure type.

💡 Proposed fix
-	evidence, _ := k.k.Evidences.Get(sdkCtx, decodedHash)
-	if evidence == nil {
+	evidence, err := k.k.Evidences.Get(sdkCtx, decodedHash)
+	if err != nil {
+		return nil, status.Errorf(codes.NotFound, "evidence %s not found", req.Hash)
+	}
+	if evidence == nil {
 		return nil, status.Errorf(codes.NotFound, "evidence %s not found", req.Hash)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/evidence/keeper/grpc_query.go` around lines 48 - 50, The call to
k.k.Evidences.Get currently discards the returned error which causes all
failures to be reported as codes.NotFound; change the code in the handler that
calls k.k.Evidences.Get(sdkCtx, decodedHash) to capture the error (e.g.,
evidence, err := k.k.Evidences.Get(...)), check if err != nil and if
errors.Is(err, collections.ErrNotFound) return status.Errorf(codes.NotFound,
...), otherwise return status.Errorf(codes.Internal, "failed to get evidence:
%v", err); keep the existing nil-evidence check but only treat it as NotFound
when the collection error indicates absence.
x/evidence/keeper/keeper.go-128-134 (1)

128-134: ⚠️ Potential issue | 🟠 Major

Data race on lazy meter initialization.

The Meter() method has a race condition when accessed concurrently. Multiple goroutines can simultaneously observe k.meter == nil, causing redundant SubMeter allocations and potential data races on field assignment.

This pattern is systemic—identical implementations exist across 22 keepers in the codebase (bank, auth, staking, gov, feegrant, slashing, etc.). If fixing this in evidence keeper, consider applying the same fix across all affected keepers.

Proposed fix using sync.Once

Add a sync.Once field to Keeper:

 type Keeper struct {
+	meterOnce      sync.Once
 	meter          metrics.Meter
 	cdc            codec.BinaryCodec

Update the Meter method:

 func (k *Keeper) Meter(ctx context.Context) metrics.Meter {
-	if k.meter == nil {
-		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName))
-	}
-
+	k.meterOnce.Do(func() {
+		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName))
+	})
 	return k.meter
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/evidence/keeper/keeper.go` around lines 128 - 134, The Meter method has a
race on lazy initialization of k.meter; add a sync.Once field (e.g., meterOnce
sync.Once) to the Keeper struct and change Meter(ctx context.Context) to
initialize k.meter inside meterOnce.Do(func() { ... }) so only one goroutine
calls sdk.UnwrapSDKContext(ctx).Meter().SubMeter(...) and assigns k.meter;
update evidence Keeper and apply the same pattern to other keepers with
identical lazy initialization (bank, auth, staking, gov, feegrant, slashing,
etc.).
x/circuit/keeper/keeper.go-18-19 (1)

18-19: ⚠️ Potential issue | 🟠 Major

Meter() has a race-prone lazy init on shared keeper state.
k.meter is written without synchronization on lines 85–86. Concurrent access from query handlers, message handlers, and genesis operations can race. Use sync.Once to safely initialize.

Suggested fix
 import (
 	context "context"
+	"sync"
@@
 type Keeper struct {
 	meter        metrics.Meter
+	meterOnce    sync.Once
 	cdc          codec.BinaryCodec
@@
 func (k *Keeper) Meter(ctx context.Context) metrics.Meter {
-	if k.meter == nil {
-		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName))
-	}
+	k.meterOnce.Do(func() {
+		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName))
+	})
 
 	return k.meter
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/circuit/keeper/keeper.go` around lines 18 - 19, The Meter() method
currently lazily assigns k.meter without synchronization, causing races; add a
sync.Once field (e.g., meterOnce sync.Once) to the keeper struct and change
Meter() to initialize k.meter inside meterOnce.Do(func(){ ... }) so the
assignment to k.meter happens exactly once; also add the sync import and keep
the rest of Meter() behavior the same (returning k.meter) to ensure thread-safe
lazy initialization.
x/feegrant/keeper/msg_server.go-47-48 (1)

47-48: ⚠️ Potential issue | 🟠 Major

Don't ignore GetAllowance failures.

Line 47 drops the read error and treats it as "no allowance". If that lookup fails because the stored grant is unreadable, this handler can still proceed as if nothing exists.

Suggested fix
-	if f, _ := k.GetAllowance(sdkCtx, granter, grantee); f != nil {
+	if f, err := k.GetAllowance(sdkCtx, granter, grantee); err != nil {
+		return nil, err
+	} else if f != nil {
 		return nil, errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "fee allowance already exists")
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/feegrant/keeper/msg_server.go` around lines 47 - 48, Replace the
ignored-error call to GetAllowance with an explicit check: call f, err :=
k.GetAllowance(sdkCtx, granter, grantee); if err != nil return nil,
errorsmod.Wrap(err, "failed to read existing allowance") to propagate
storage/read errors, then if f != nil return nil,
errorsmod.Wrap(sdkerrors.ErrInvalidRequest, "fee allowance already exists");
this ensures GetAllowance failures are not treated as “no allowance” and are
surfaced correctly.
x/staking/keeper/delegation.go-947-954 (1)

947-954: ⚠️ Potential issue | 🟠 Major

Bug: currTime parameter is ignored, using sdkCtx.HeaderInfo().Time instead.

The function accepts currTime time.Time as a parameter but line 954 passes sdkCtx.HeaderInfo().Time to RedelegationQueueIterator instead of the provided currTime. This could cause incorrect behavior if a caller passes a different time value.

🐛 Proposed fix
 	// gets an iterator for all timeslices from time 0 until the current Blockheader time
-	redelegationTimesliceIterator, err := k.RedelegationQueueIterator(sdkCtx, sdkCtx.HeaderInfo().Time)
+	redelegationTimesliceIterator, err := k.RedelegationQueueIterator(sdkCtx, currTime)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/staking/keeper/delegation.go` around lines 947 - 954, The function
DequeueAllMatureRedelegationQueue currently ignores its currTime parameter and
calls RedelegationQueueIterator with sdkCtx.HeaderInfo().Time; change the call
to use the provided currTime so the iterator covers times up to the passed-in
value (i.e., replace sdkCtx.HeaderInfo().Time with currTime when invoking
RedelegationQueueIterator and ensure any subsequent logic that relies on the
iteration uses currTime consistently).
x/distribution/keeper/keeper.go-237-243 (1)

237-243: ⚠️ Potential issue | 🟠 Major

Race condition in lazy initialization of meter field.

The Meter method has a check-then-write race condition. If multiple goroutines call Meter() concurrently when k.meter == nil, they can both pass the nil check and create separate meter instances. While this may not cause crashes, it leads to inconsistent meter state.

Consider using sync.Once for thread-safe initialization:

🔒 Proposed fix using sync.Once

Add a sync.Once field to the Keeper struct:

 type Keeper struct {
     // ... existing fields ...
     meter metrics.Meter
+    meterOnce sync.Once
 }

Then update the Meter method:

 func (k *Keeper) Meter(ctx context.Context) metrics.Meter {
-    if k.meter == nil {
-        k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName))
-    }
-
-    return k.meter
+    k.meterOnce.Do(func() {
+        k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName))
+    })
+    return k.meter
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/distribution/keeper/keeper.go` around lines 237 - 243, The lazy
initialization of k.meter in Keeper.Meter has a race: multiple goroutines can
pass the nil check and create separate meters; add a sync.Once field (e.g.,
meterOnce) to the Keeper struct and move the initialization into meterOnce.Do to
ensure the creation of k.meter happens exactly once; update Meter(ctx
context.Context) to call k.meterOnce.Do(func(){ k.meter =
sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc",
types.ModuleName)) }) and then return k.meter.
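As a standalone illustration of why sync.Once closes this race, the sketch below uses stand-in types (not the actual SDK metrics API) and launches eight goroutines through a lazy initializer that is guaranteed to run exactly once:

```go
package main

import (
	"fmt"
	"sync"
)

// meter is a stand-in for the SDK's metrics.Meter type; the real one lives
// in the Injective metrics package and is not reproduced here.
type meter struct{ name string }

var (
	initCount int       // how many times the initializer actually ran
	meterOnce sync.Once // guards the lazy initialization
	cached    *meter
)

// getMeter lazily builds the meter exactly once, even when called from many
// goroutines at the same time, which is the property the plain nil check lacks.
func getMeter() *meter {
	meterOnce.Do(func() {
		initCount++
		cached = &meter{name: "distribution"}
	})
	return cached
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			getMeter()
		}()
	}
	wg.Wait()
	fmt.Println("initializations:", initCount) // prints: initializations: 1
}
```

sync.Once also publishes the write to cached to every caller, so no extra locking is needed around the read.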
x/bank/keeper/msg_server.go-38-42 (1)

38-42: ⚠️ Potential issue | 🟠 Major

NewMsgServerImpl advertises a Keeper interface but all four message handlers require the concrete BaseKeeper type.

The constructor accepts keeper Keeper at the API level, yet Send, MultiSend, UpdateParams, and SetSendEnabled each assert k.Keeper.(BaseKeeper) and fail at runtime if the dynamic type is not BaseKeeper. This breaks the interface contract: any implementation of Keeper other than BaseKeeper will be rejected at runtime instead of at the type-check level.

Either tighten the constructor and field types to BaseKeeper to make the requirement explicit, or expose the required metering and address-codec methods on the Keeper interface instead of asserting the concrete type.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/bank/keeper/msg_server.go` around lines 38 - 42, NewMsgServerImpl's
constructor and its msg server type take a generic Keeper but every handler
(Send, MultiSend, UpdateParams, SetSendEnabled) asserts k.Keeper.(BaseKeeper),
causing runtime failures; either change the API to require BaseKeeper explicitly
or add the needed methods to the Keeper interface. Fix by updating
NewMsgServerImpl to accept and store BaseKeeper (replace Keeper with BaseKeeper
in the constructor and the msg server struct) so the metering and address-codec
methods are available without type assertions, or alternatively extend the
Keeper interface to declare the metering and address codec methods used by
base.Meter(...) and other handlers and remove the concrete BaseKeeper
assertions. Ensure all references to k.Keeper and the constructor signature
(NewMsgServerImpl) and usages in Send, MultiSend, UpdateParams, SetSendEnabled
are updated accordingly.
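A minimal sketch of the runtime-assertion hazard, using hypothetical stand-ins for the Keeper interface and BaseKeeper rather than the real bank types: the handler compiles against any Keeper but rejects everything except the concrete type at runtime.

```go
package main

import "fmt"

// Keeper and BaseKeeper are illustrative stand-ins, not the bank module's types.
type Keeper interface{ Name() string }

type BaseKeeper struct{}

func (BaseKeeper) Name() string  { return "base" }
func (BaseKeeper) Meter() string { return "meter" } // not part of the interface

type otherKeeper struct{}

func (otherKeeper) Name() string { return "other" }

// handler mimics the flagged pattern: it accepts the interface but asserts
// the concrete type, so any other Keeper implementation fails at runtime.
func handler(k Keeper) (string, error) {
	base, ok := k.(BaseKeeper)
	if !ok {
		return "", fmt.Errorf("expected BaseKeeper, got %T", k)
	}
	return base.Meter(), nil
}

func main() {
	if m, err := handler(BaseKeeper{}); err == nil {
		fmt.Println("BaseKeeper:", m) // prints: BaseKeeper: meter
	}
	_, err := handler(otherKeeper{})
	fmt.Println("otherKeeper rejected:", err != nil) // prints: otherKeeper rejected: true
}
```

Tightening the constructor to BaseKeeper moves this failure from runtime to compile time.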
x/feegrant/keeper/keeper.go-25-25 (1)

25-25: ⚠️ Potential issue | 🟠 Major

Value-receiver methods calling pointer-receiver Meter() break the lazy cache.

Line 421 implements a pointer-receiver Meter() that mutates k.meter at line 423, but all methods in this file are value receivers (func (k Keeper)). When a value-receiver method calls k.Meter(), the assignment mutates a temporary copy, so the sub-meter is recreated on every invocation instead of being cached. Either switch all metered keeper methods to pointer receivers, or make Meter stateless by deriving the sub-meter directly from sdkCtx without caching.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/feegrant/keeper/keeper.go` at line 25, The keeper currently caches a
sub-meter in the Keeper.meter field via the pointer-receiver Meter() (which
assigns k.meter), but all keeper methods in this file use value receivers (func
(k Keeper)), so calls to k.Meter() mutate a copy and the cache is lost; to fix
either: (A) switch all keeper methods in this file from value receivers to
pointer receivers (change func (k Keeper) ... to func (k *Keeper) ...) so
k.Meter() mutates the real Keeper and the cached k.meter is reused, or (B) make
Meter() stateless by removing the k.meter cache and deriving/returning the
sub-meter directly from the sdk.Context (or relevant sdkCtx) inside Meter() so
it no longer mutates Keeper; update references to Meter(), Keeper.meter, and any
colocated methods accordingly.
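The copy semantics behind this finding can be demonstrated in a self-contained sketch (illustrative names, not the feegrant keeper itself): a value-receiver method calls the pointer-receiver initializer on a throwaway copy, so the cache never sticks.

```go
package main

import "fmt"

var builds int // counts how many times the sub-meter is (re)built

type keeper struct{ meter *string }

// Meter lazily caches on the pointer receiver, as in the flagged code.
func (k *keeper) Meter() *string {
	if k.meter == nil {
		builds++
		m := "sub-meter"
		k.meter = &m
	}
	return k.meter
}

// byValue mimics the value-receiver keeper methods: k is a copy, so the
// cache written by Meter lands on the copy and is discarded on return.
func (k keeper) byValue() { k.Meter() }

// byPointer operates on the real struct, so the cache survives.
func (k *keeper) byPointer() { k.Meter() }

func main() {
	var k keeper
	k.byValue()
	k.byValue()
	fmt.Println("builds after two value-receiver calls:", builds) // prints: 2

	builds = 0
	var k2 keeper
	k2.byPointer()
	k2.byPointer()
	fmt.Println("builds after two pointer-receiver calls:", builds) // prints: 1
}
```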
x/slashing/keeper/keeper.go-21-21 (1)

21-21: ⚠️ Potential issue | 🟠 Major

The lazy meter cache is ineffective with value receivers calling pointer receiver method.

Value receiver methods (Logger, AddPubkey, GetPubkey, Slash, SlashWithInfractionReason, Jail, deleteAddrPubkeyRelation) call k.Meter(), which is a pointer receiver. Each value receiver call creates a struct copy, so the pointer receiver's lazy initialization writes to the copy's meter field, which is discarded after the call. On the next invocation, a new copy is created and the cache is rebuilt. Either make all keeper methods pointer receivers or drop the struct-level cache.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/slashing/keeper/keeper.go` at line 21, The lazy meter cache on the keeper
is broken because methods like Logger, AddPubkey, GetPubkey, Slash,
SlashWithInfractionReason, Jail, and deleteAddrPubkeyRelation use value
receivers but call the pointer-receiver Meter(), causing the cached meter to be
written into a copy; fix by making these methods use pointer receivers (change
their receiver from k keeper to k *keeper) so Meter() writes to the real struct,
or alternatively remove the struct-level meter cache and compute/return the
meter directly in Meter(); update all affected method signatures (Logger,
AddPubkey, GetPubkey, Slash, SlashWithInfractionReason, Jail,
deleteAddrPubkeyRelation) and any call sites accordingly to eliminate the copy
bug.
x/gov/keeper/msg_server.go-314-317 (1)

314-317: ⚠️ Potential issue | 🟠 Major

Use distinct timing names for the legacy wrappers.

These methods immediately delegate to the v1 server, and the keeper-backed v1 implementation already records SubmitProposal, Vote, VoteWeighted, and Deposit. Reusing the same metric names here double-counts every legacy request and skews latency/throughput dashboards.

Suggested direction
-	defer k.keeper.Meter(sdkCtx).FuncTiming(&sdkCtx, "SubmitProposal")()
+	defer k.keeper.Meter(sdkCtx).FuncTiming(&sdkCtx, "LegacySubmitProposal")()

-	defer k.keeper.Meter(sdkCtx).FuncTiming(&sdkCtx, "Vote")()
+	defer k.keeper.Meter(sdkCtx).FuncTiming(&sdkCtx, "LegacyVote")()

-	defer k.keeper.Meter(sdkCtx).FuncTiming(&sdkCtx, "VoteWeighted")()
+	defer k.keeper.Meter(sdkCtx).FuncTiming(&sdkCtx, "LegacyVoteWeighted")()

-	defer k.keeper.Meter(sdkCtx).FuncTiming(&sdkCtx, "Deposit")()
+	defer k.keeper.Meter(sdkCtx).FuncTiming(&sdkCtx, "LegacyDeposit")()

Also applies to: 355-358, 370-373, 393-396

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/gov/keeper/msg_server.go` around lines 314 - 317, The legacy wrapper
methods (legacyMsgServer methods SubmitProposal, Vote, VoteWeighted, Deposit)
are reusing the same metric names as the keeper-backed v1 implementations
causing double-counting; change the timing metric names in the legacy wrappers
to distinct identifiers (for example prefix with "legacy_" or "_legacy") when
calling k.keeper.Meter(sdkCtx).FuncTiming(&sdkCtx, ...) so the keeper-backed v1
metrics remain unchanged and dashboards are not skewed; update each legacy
method (legacyMsgServer.SubmitProposal, legacyMsgServer.Vote,
legacyMsgServer.VoteWeighted, legacyMsgServer.Deposit) to use the new unique
timing string.
x/upgrade/keeper/keeper.go-641-646 (1)

641-646: ⚠️ Potential issue | 🟠 Major

Don't memoize meter on a copied keeper.

Meter caches on *Keeper, but most instrumented methods in this file use func (k Keeper) receivers. In Go that means k.Meter(...) writes the cache to the local copy, so the sub-meter is rebuilt on every call. The first lazy init can also race if multiple goroutines call Meter() concurrently on the same keeper. Either move the instrumented methods to pointer receivers or stop storing the meter on the keeper.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/upgrade/keeper/keeper.go` around lines 641 - 646, The Keeper.Meter method
currently memoizes the meter on k.meter which is unsafe because many
instrumented methods use value receivers (func (k Keeper)) so the cache is
stored on copies and can race; fix by removing the cached field usage: make
Meter(ctx context.Context) return
sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc",
types.ModuleName)) unconditionally (i.e., do not assign to k.meter), or
alternatively convert the instrumented methods to pointer receivers and keep
k.meter with proper synchronization; update references to Meter, the k.meter
field, and any funcs with receivers (methods declared as func (k Keeper))
accordingly.
🟡 Minor comments (1)
x/nft/keeper/nft_batch.go-96-98 (1)

96-98: ⚠️ Potential issue | 🟡 Minor

Potentially misleading error wrapping.

When transferWithNoCheck fails, wrapping the error with nft.ErrNFTNotExists may mask the actual transfer failure reason. The NFT existence is already verified on line 93, so a transfer failure would be due to a different issue.

Consider returning the original error or using a more appropriate error type:

Proposed fix
-		if err := k.transferWithNoCheck(sdkCtx, classID, nftID, receiver); err != nil {
-			return errors.Wrap(nft.ErrNFTNotExists, nftID)
-		}
+		if err := k.transferWithNoCheck(sdkCtx, classID, nftID, receiver); err != nil {
+			return err
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/nft/keeper/nft_batch.go` around lines 96 - 98, The current error handling
in nft_batch.go wraps any error from transferWithNoCheck(sdkCtx, classID, nftID,
receiver) with nft.ErrNFTNotExists which can mask the real cause; instead,
return the original err or wrap the returned err with a contextual message
(e.g., errors.Wrap(err, "transferWithNoCheck failed") or fmt.Errorf("transfer
failed: %w", err)) so the true failure from transferWithNoCheck is preserved;
update the return in the transferWithNoCheck error branch to propagate or
appropriately wrap the actual err rather than nft.ErrNFTNotExists.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 727af76a-19ac-44a8-a8e6-942f1b434dd6

📥 Commits

Reviewing files that changed from the base of the PR and between c09c5bf and bf689f2.

⛔ Files ignored due to path filters (9)
  • go.sum is excluded by !**/*.sum
  • simapp/go.sum is excluded by !**/*.sum
  • store/go.sum is excluded by !**/*.sum
  • tests/go.sum is excluded by !**/*.sum
  • x/circuit/go.sum is excluded by !**/*.sum
  • x/evidence/go.sum is excluded by !**/*.sum
  • x/feegrant/go.sum is excluded by !**/*.sum
  • x/nft/go.sum is excluded by !**/*.sum
  • x/upgrade/go.sum is excluded by !**/*.sum
📒 Files selected for processing (121)
  • baseapp/abci.go
  • go.mod
  • server/start.go
  • simapp/go.mod
  • store/go.mod
  • tests/go.mod
  • tests/integration/gov/keeper/keeper_test.go
  • x/auth/keeper/account.go
  • x/auth/keeper/genesis.go
  • x/auth/keeper/grpc_query.go
  • x/auth/keeper/keeper.go
  • x/auth/keeper/msg_server.go
  • x/authz/keeper/genesis.go
  • x/authz/keeper/grpc_query.go
  • x/authz/keeper/keeper.go
  • x/authz/keeper/msg_server.go
  • x/bank/keeper/genesis.go
  • x/bank/keeper/grpc_query.go
  • x/bank/keeper/keeper.go
  • x/bank/keeper/msg_server.go
  • x/bank/keeper/send.go
  • x/bank/keeper/transient_balances.go
  • x/bank/keeper/view.go
  • x/bank/keeper/virtual_balances.go
  • x/circuit/go.mod
  • x/circuit/keeper/genesis.go
  • x/circuit/keeper/keeper.go
  • x/circuit/keeper/msg_server.go
  • x/circuit/keeper/query.go
  • x/consensus/keeper/keeper.go
  • x/crisis/keeper/genesis.go
  • x/crisis/keeper/keeper.go
  • x/crisis/keeper/msg_server.go
  • x/distribution/keeper/alias_functions.go
  • x/distribution/keeper/allocation.go
  • x/distribution/keeper/delegation.go
  • x/distribution/keeper/fee_pool.go
  • x/distribution/keeper/genesis.go
  • x/distribution/keeper/grpc_query.go
  • x/distribution/keeper/hooks.go
  • x/distribution/keeper/keeper.go
  • x/distribution/keeper/msg_server.go
  • x/distribution/keeper/params.go
  • x/distribution/keeper/store.go
  • x/distribution/keeper/validator.go
  • x/evidence/go.mod
  • x/evidence/keeper/abci.go
  • x/evidence/keeper/grpc_query.go
  • x/evidence/keeper/infraction.go
  • x/evidence/keeper/keeper.go
  • x/evidence/keeper/msg_server.go
  • x/feegrant/go.mod
  • x/feegrant/keeper/grpc_query.go
  • x/feegrant/keeper/keeper.go
  • x/feegrant/keeper/migrations.go
  • x/feegrant/keeper/msg_server.go
  • x/gov/keeper/deposit.go
  • x/gov/keeper/grpc_query.go
  • x/gov/keeper/keeper.go
  • x/gov/keeper/keeper_test.go
  • x/gov/keeper/migrations.go
  • x/gov/keeper/msg_server.go
  • x/gov/keeper/proposal.go
  • x/gov/keeper/tally.go
  • x/gov/keeper/vote.go
  • x/gov/module.go
  • x/group/keeper/genesis.go
  • x/group/keeper/grpc_query.go
  • x/group/keeper/keeper.go
  • x/group/keeper/migrations.go
  • x/group/keeper/msg_server.go
  • x/group/keeper/proposal_executor.go
  • x/group/keeper/tally.go
  • x/mint/keeper/genesis.go
  • x/mint/keeper/grpc_query.go
  • x/mint/keeper/keeper.go
  • x/mint/keeper/migrator.go
  • x/mint/keeper/msg_server.go
  • x/nft/go.mod
  • x/nft/keeper/class.go
  • x/nft/keeper/genesis.go
  • x/nft/keeper/grpc_query.go
  • x/nft/keeper/keeper.go
  • x/nft/keeper/msg_server.go
  • x/nft/keeper/nft.go
  • x/nft/keeper/nft_batch.go
  • x/params/keeper/grpc_query.go
  • x/params/keeper/keeper.go
  • x/slashing/keeper/genesis.go
  • x/slashing/keeper/grpc_query.go
  • x/slashing/keeper/hooks.go
  • x/slashing/keeper/infractions.go
  • x/slashing/keeper/keeper.go
  • x/slashing/keeper/migrations.go
  • x/slashing/keeper/msg_server.go
  • x/slashing/keeper/params.go
  • x/slashing/keeper/signing_info.go
  • x/slashing/keeper/unjail.go
  • x/staking/keeper/abci.go
  • x/staking/keeper/alias_functions.go
  • x/staking/keeper/delegation.go
  • x/staking/keeper/delegation_receiver.go
  • x/staking/keeper/genesis.go
  • x/staking/keeper/grpc_query.go
  • x/staking/keeper/historical_info.go
  • x/staking/keeper/keeper.go
  • x/staking/keeper/migrations.go
  • x/staking/keeper/msg_server.go
  • x/staking/keeper/params.go
  • x/staking/keeper/pool.go
  • x/staking/keeper/power_reduction.go
  • x/staking/keeper/query_utils.go
  • x/staking/keeper/slash.go
  • x/staking/keeper/unbonding.go
  • x/staking/keeper/val_state_change.go
  • x/staking/keeper/validator.go
  • x/upgrade/go.mod
  • x/upgrade/keeper/grpc_query.go
  • x/upgrade/keeper/keeper.go
  • x/upgrade/keeper/migrations.go
  • x/upgrade/keeper/msg_server.go

Comment thread store/go.mod
golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8
google.golang.org/grpc v1.71.0
google.golang.org/protobuf v1.36.5
google.golang.org/grpc v1.75.0

⚠️ Potential issue | 🔴 Critical

❓ Verification inconclusive

Script executed:

#!/bin/bash
set -euo pipefail

echo "OSV query for google.golang.org/grpc@v1.75.0"
curl -sS https://api.osv.dev/v1/query \
  -H 'Content-Type: application/json' \
  -d '{"package":{"name":"google.golang.org/grpc","ecosystem":"Go"},"version":"1.75.0"}'

echo
echo "Latest published grpc version from Go proxy"
curl -sS https://proxy.golang.org/google.golang.org/grpc/@latest

Repository: InjectiveLabs/cosmos-sdk


Exit code: 0

stdout:

OSV query for google.golang.org/grpc@v1.75.0
{"vulns":[{"id":"GHSA-p77j-4mvh-x3m3","summary":"gRPC-Go has an authorization bypass via missing leading slash in :path","details":"### Impact\n_What kind of vulnerability is it? Who is impacted?_\n\nIt is an **Authorization Bypass** resulting from **Improper Input Validation** of the HTTP/2 `:path` pseudo-header.\n\nThe gRPC-Go server was too lenient in its routing logic, accepting requests where the `:path` omitted the mandatory leading slash (e.g., `Service/Method` instead of `/Service/Method`). While the server successfully routed these requests to the correct handler, authorization interceptors (including the official `grpc/authz` package) evaluated the raw, non-canonical path string. Consequently, \"deny\" rules defined using canonical paths (starting with `/`) failed to match the incoming request, allowing it to bypass the policy if a fallback \"allow\" rule was present.\n\n**Who is impacted?**\nThis affects gRPC-Go servers that meet both of the following criteria:\n1. They use path-based authorization interceptors, such as the official RBAC implementation in `google.golang.org/grpc/authz` or custom interceptors relying on `info.FullMethod` or `grpc.Method(ctx)`.\n2. Their security policy contains specific \"deny\" rules for canonical paths but allows other requests by default (a fallback \"allow\" rule).\n\nThe vulnerability is exploitable by an attacker who can send raw HTTP/2 frames with malformed `:path` headers directly to the gRPC server.\n\n### Patches\n_Has the problem been patched? What versions should users upgrade to?_\n\nYes, the issue has been patched. 
The fix ensures that any request with a `:path` that does not start with a leading slash is immediately rejected with a `codes.Unimplemented` error, preventing it from reaching authorization interceptors or handlers with a non-canonical path string.\n\nUsers should upgrade to the following versions (or newer):\n* **v1.79.3**\n* The latest **master** branch.\n\nIt is recommended that all users employing path-based authorization (especially `grpc/authz`) upgrade as soon as the patch is available in a tagged release.\n\n### Workarounds\n_Is there a way for users to fix or remediate the vulnerability without upgrading?_\n\nWhile upgrading is the most secure and recommended path, users can mitigate the vulnerability using one of the following methods:\n\n#### 1. Use a Validating Interceptor (Recommended Mitigation)\nAdd an \"outermost\" interceptor to your server that validates the path before any other authorization logic runs:\n\n```go\nfunc pathValidationInterceptor(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {\n    if info.FullMethod == \"\" || info.FullMethod[0] != '/' {\n        return nil, status.Errorf(codes.Unimplemented, \"malformed method name\")\n    }   \n    return handler(ctx, req)\n}\n\n// Ensure this is the FIRST interceptor in your chain\ns := grpc.NewServer(\n    grpc.ChainUnaryInterceptor(pathValidationInterceptor, authzInterceptor),\n)\n```\n\n#### 2. Infrastructure-Level Normalization\nIf your gRPC server is behind a reverse proxy or load balancer (such as Envoy, NGINX, or an L7 Cloud Load Balancer), ensure it is configured to enforce strict HTTP/2 compliance for pseudo-headers and reject or normalize requests where the `:path` header does not start with a leading slash.\n\n#### 3. 
Policy Hardening\nSwitch to a \"default deny\" posture in your authorization policies (explicitly listing all allowed paths and denying everything else) to reduce the risk of bypasses via malformed inputs.","aliases":["CVE-2026-33186","GO-2026-4762"],"modified":"2026-03-27T19:11:22.629776Z","published":"2026-03-18T20:10:29Z","related":["CGA-8qwg-3gv7-2fp3"],"database_specific":{"cwe_ids":["CWE-285"],"github_reviewed_at":"2026-03-18T20:10:29Z","github_reviewed":true,"nvd_published_at":"2026-03-20T23:16:45Z","severity":"CRITICAL"},"references":[{"type":"WEB","url":"https://github.com/grpc/grpc-go/security/advisories/GHSA-p77j-4mvh-x3m3"},{"type":"ADVISORY","url":"https://nvd.nist.gov/vuln/detail/CVE-2026-33186"},{"type":"PACKAGE","url":"https://github.com/grpc/grpc-go"}],"affected":[{"package":{"name":"google.golang.org/grpc","ecosystem":"Go","purl":"pkg:golang/google.golang.org/grpc"},"ranges":[{"type":"SEMVER","events":[{"introduced":"0"},{"fixed":"1.79.3"}]}],"database_specific":{"source":"https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2026/03/GHSA-p77j-4mvh-x3m3/GHSA-p77j-4mvh-x3m3.json"}}],"schema_version":"1.7.5","severity":[{"type":"CVSS_V3","score":"CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N"}]},{"id":"GO-2026-4762","summary":"Authorization bypass in gRPC-Go via missing leading slash in :path in google.golang.org/grpc","details":"Authorization bypass in gRPC-Go via missing leading slash in :path in 
google.golang.org/grpc","aliases":["CVE-2026-33186","GHSA-p77j-4mvh-x3m3"],"modified":"2026-03-27T19:00:25.589382Z","published":"2026-03-27T18:39:44Z","database_specific":{"url":"https://pkg.go.dev/vuln/GO-2026-4762","review_status":"REVIEWED"},"references":[{"type":"ADVISORY","url":"https://github.com/grpc/grpc-go/security/advisories/GHSA-p77j-4mvh-x3m3"}],"affected":[{"package":{"name":"google.golang.org/grpc","ecosystem":"Go","purl":"pkg:golang/google.golang.org/grpc"},"ranges":[{"type":"SEMVER","events":[{"introduced":"0"},{"fixed":"1.79.3"}]}],"ecosystem_specific":{"imports":[{"symbols":["Server.Serve","Server.ServeHTTP","Server.handleStream"],"path":"google.golang.org/grpc"}]},"database_specific":{"source":"https://vuln.go.dev/ID/GO-2026-4762.json"}}],"schema_version":"1.7.5"}]}
Latest published grpc version from Go proxy
{"Version":"v1.79.3","Time":"2026-03-17T23:13:23Z","Origin":{"VCS":"git","URL":"https://github.com/grpc/grpc-go","Hash":"dda86dbd9cecb8b35b58c73d507d81d67761205f","Ref":"refs/tags/v1.79.3"}}

Blocker: google.golang.org/grpc v1.75.0 contains a critical authorization bypass vulnerability (GHSA-p77j-4mvh-x3m3).

This dependency must be bumped to v1.79.3 or later before merge. The vulnerability is an authorization bypass resulting from improper validation of the HTTP/2 :path pseudo-header and affects gRPC servers with path-based authorization interceptors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@store/go.mod` at line 24, Replace the vulnerable dependency entry
"google.golang.org/grpc v1.75.0" with a secure release (at least v1.79.3), e.g.
update the module version to "google.golang.org/grpc v1.79.3"; then run the Go
module update commands (go get google.golang.org/grpc@v1.79.3 and go mod tidy)
to regenerate go.sum and ensure the project builds and tests pass. Ensure the
final go.mod no longer references v1.75.0 anywhere and commit the updated go.mod
and go.sum.

Comment thread x/circuit/go.mod
google.golang.org/grpc v1.71.0
github.com/stretchr/testify v1.11.1
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5
google.golang.org/grpc v1.75.0

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, check if the file exists and examine its contents around the specified lines
find . -name "go.mod" -path "*/circuit/*" -type f

Repository: InjectiveLabs/cosmos-sdk

Length of output: 86


🏁 Script executed:

# Check the specific lines mentioned in the review
if [ -f "x/circuit/go.mod" ]; then
  echo "=== Line 22 (grpc) ==="
  sed -n '20,24p' "x/circuit/go.mod"
  echo ""
  echo "=== Line 159 (otel/sdk) ==="
  sed -n '157,161p' "x/circuit/go.mod"
fi

Repository: InjectiveLabs/cosmos-sdk

Length of output: 566


🌐 Web query:

GHSA-p77j-4mvh-x3m3 google.golang.org/grpc v1.75.0

💡 Result:

GHSA-p77j-4mvh-x3m3 (CVE-2026-33186) is a critical authorization bypass vulnerability (CVSS 9.1) in google.golang.org/grpc, affecting versions before 1.79.3, including v1.75.0 (released around August 2025). It allows attackers to bypass path-based authorization (e.g., grpc/authz deny rules) by sending HTTP/2 requests with malformed :path headers missing the leading slash (e.g., "Service/Method" instead of "/Service/Method"). The server routes these to handlers but interceptors see the non-canonical path, failing to match deny rules with a fallback allow. Patched in v1.79.3 (March 17, 2026), which rejects such requests with codes.Unimplemented. Upgrade to v1.79.3 or later. Mitigations: validating interceptors, strict HTTP/2 proxies, or method-based auth.



🌐 Web query:

GHSA-9h8m-3fm2-qjrq go.opentelemetry.io/otel v1.38.0

💡 Result:

GHSA-9h8m-3fm2-qjrq is a high-severity vulnerability (CVSS 7.0) in the OpenTelemetry Go SDK, specifically affecting go.opentelemetry.io/otel/sdk/resource on macOS/Darwin systems. It allows arbitrary code execution via local PATH hijacking. The resource detection code in sdk/resource/host_id.go executes the 'ioreg' system command without a secure PATH, enabling an attacker who can modify the PATH environment variable to execute malicious code in the application's context. Affected versions: >=1.21.0 to <=1.39.0 (including v1.38.0). Patched in v1.40.0 via commit d45961b. Aliases: CVE-2026-24051, GO-2026-4394. CWE-426 (Untrusted Search Path). Published February 2, 2026.



Upgrade google.golang.org/grpc to v1.79.3+ and go.opentelemetry.io/otel/sdk to v1.40.0+.

Both versions contain known vulnerabilities:

  • google.golang.org/grpc v1.75.0 is vulnerable to GHSA-p77j-4mvh-x3m3 (CVE-2026-33186, CVSS 9.1)—a critical authorization bypass via malformed HTTP/2 :path headers that bypasses authz deny rules. Fixed in v1.79.3.
  • go.opentelemetry.io/otel/sdk v1.38.0 is vulnerable to GHSA-9h8m-3fm2-qjrq (CVE-2026-24051, CVSS 7.0)—arbitrary code execution via local PATH hijacking on macOS. Fixed in v1.40.0.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/circuit/go.mod` at line 22, Update the vulnerable module versions in
go.mod: change the google.golang.org/grpc dependency to v1.79.3 (or a newer
compatible release) and change go.opentelemetry.io/otel/sdk to v1.40.0 (or
newer); locate these by the module path strings "google.golang.org/grpc" and
"go.opentelemetry.io/otel/sdk" in go.mod, update their version tags, then run go
get ./... (or go get google.golang.org/grpc@v1.79.3
go.opentelemetry.io/otel/sdk@v1.40.0) and go mod tidy to update the lockfile and
resolve transitive changes, then run the test suite/build to verify nothing
broke.

Comment thread x/evidence/go.mod
Comment on lines +25 to +27
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5
google.golang.org/grpc v1.75.0
google.golang.org/protobuf v1.36.8

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, check if x/evidence/go.mod exists and view lines 25-27
if [ -f "x/evidence/go.mod" ]; then
  echo "=== x/evidence/go.mod lines 20-30 ==="
  sed -n '20,30p' x/evidence/go.mod
  echo ""
  echo "=== x/evidence/go.mod lines 150-165 ==="
  sed -n '150,165p' x/evidence/go.mod
else
  echo "x/evidence/go.mod not found"
  git ls-files | grep -i "go.mod"
fi

Repository: InjectiveLabs/cosmos-sdk

Length of output: 1351


🏁 Script executed:

# Search for any go.mod files to understand structure
find . -name "go.mod" -type f

Repository: InjectiveLabs/cosmos-sdk

Length of output: 298


🌐 Web query:

GHSA-p77j-4mvh-x3m3 google.golang.org/grpc vulnerability

💡 Result:

GHSA-p77j-4mvh-x3m3 is a critical authorization bypass vulnerability (CVE-2026-33186) in google.golang.org/grpc (gRPC-Go), affecting versions prior to 1.79.3. It results from improper input validation of the HTTP/2 :path pseudo-header, where the server accepts requests missing the leading slash (e.g., "Service/Method" instead of "/Service/Method"). This allows bypassing path-based authorization interceptors like those in grpc/authz if they have deny rules for canonical paths but a fallback allow rule. CVSS score: 9.1 (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N). CWE-285 (Improper Authorization). Published around March 17-20, 2026. Fixed in v1.79.3 by rejecting invalid :path headers with codes.Unimplemented. Impacts gRPC-Go servers using path-based auth interceptors. Upgrade to v1.79.3 or later.



🌐 Web query:

GHSA-9h8m-3fm2-qjrq opentelemetry otel sdk vulnerability

💡 Result:

GHSA-9h8m-3fm2-qjrq is a high-severity vulnerability (CVSS 7.0) in the OpenTelemetry Go SDK (go.opentelemetry.io/otel/sdk/resource), affecting versions >=1.21.0 and <=1.39.0. It allows arbitrary code execution via local PATH hijacking on macOS/Darwin systems. The resource detection code in sdk/resource/host_id.go executes the 'ioreg' system command without a secure PATH, enabling an attacker with local PATH modification ability to execute arbitrary code in the application's context. Patched in v1.40.0 (commit d45961b). Also tracked as CVE-2026-24051 and GO-2026-4394. Published February 2, 2026.



This module pins two vulnerable dependencies (one critical, one high severity) requiring immediate patching.

google.golang.org/grpc v1.75.0 is vulnerable to GHSA-p77j-4mvh-x3m3 (authorization bypass, CVE-2026-33186, CVSS 9.1) and requires upgrade to v1.79.3 or later. go.opentelemetry.io/otel/sdk v1.38.0 is vulnerable to GHSA-9h8m-3fm2-qjrq (arbitrary code execution via PATH hijacking, CVE-2026-24051, CVSS 7.0) and requires upgrade to v1.40.0 or later. Upgrade both dependencies before merging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/evidence/go.mod` around lines 25 - 27, The go.mod currently pins vulnerable
versions; update the listed modules to secure releases: bump
google.golang.org/grpc to v1.79.3 or later and go.opentelemetry.io/otel/sdk to
v1.40.0 or later, then run `go get`/`go mod tidy` to update go.sum and ensure no
other indirect deps force older versions; verify build and run tests to confirm
compatibility (look for occurrences of google.golang.org/grpc and
go.opentelemetry.io/otel/sdk to locate the entries to change).

Comment thread x/feegrant/go.mod
google.golang.org/protobuf v1.36.5
github.com/stretchr/testify v1.11.1
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5
google.golang.org/grpc v1.75.0

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

find . -name "go.mod" -path "*/feegrant/*" | head -5

Repository: InjectiveLabs/cosmos-sdk

Length of output: 87


🏁 Script executed:

cat -n x/feegrant/go.mod | sed -n '20,30p'

Repository: InjectiveLabs/cosmos-sdk

Length of output: 478


🏁 Script executed:

cat -n x/feegrant/go.mod | sed -n '170,178p'

Repository: InjectiveLabs/cosmos-sdk

Length of output: 676


🌐 Web query:

GHSA-p77j-4mvh-x3m3 google.golang.org/grpc vulnerability

💡 Result:

GHSA-p77j-4mvh-x3m3 is a critical authorization bypass vulnerability (CVE-2026-33186, CVSS 9.1) in google.golang.org/grpc (gRPC-Go) versions prior to 1.79.3. The gRPC-Go server improperly validates the HTTP/2 :path pseudo-header, accepting requests missing the mandatory leading slash (e.g., "Service/Method" instead of "/Service/Method"). While routing succeeds, authorization interceptors (like google.golang.org/grpc/authz) evaluate the raw path without the slash, bypassing path-based "deny" rules if a fallback "allow" exists. Impacts gRPC-Go servers using path-based authorization with explicit denies and default allows. Attackers with HTTP/2 access can send malformed :path headers to bypass restrictions. Fixed in v1.79.3 (released 2026-03-17) by rejecting non-canonical paths with codes.Unimplemented before interceptors (PR #8981). Upgrade to v1.79.3 or later. No known workarounds besides avoiding path-based auth or disabling HTTP/2.



🌐 Web query:

GHSA-9h8m-3fm2-qjrq go.opentelemetry.io/otel/sdk vulnerability

💡 Result:

GHSA-9h8m-3fm2-qjrq is a high-severity vulnerability (CVSS 7.0) in the go.opentelemetry.io/otel/sdk package, also known as CVE-2026-24051 and GO-2026-4394. It affects versions from v1.21.0 (or v1.20.0 per some sources) up to but not including v1.40.0, specifically on macOS/Darwin systems. The vulnerability is due to untrusted search path (CWE-426) in the resource detection code at sdk/resource/host_id.go. It executes the 'ioreg' system command without a fully qualified path, allowing an attacker who can modify the PATH environment variable to hijack the command and achieve arbitrary code execution (ACE) in the application's context. Attack requires local access with low privileges but high attack complexity (AV:L/AC:H/PR:L). The issue was published on 2026-02-02 and patched in commit d45961b, released in v1.40.0 on the same day. Users should upgrade to v1.40.0 or later. As of 2026-03-30, later versions like v1.42.0 are available. Workarounds: Do not use resource detection on untrusted macOS environments or ensure PATH cannot be modified by untrusted parties.


Upgrade blocked dependencies to patch known vulnerabilities.

google.golang.org/grpc v1.75.0 contains a critical authorization bypass vulnerability (GHSA-p77j-4mvh-x3m3, CVE-2026-33186, CVSS 9.1) that allows path-based authorization bypass in gRPC servers. Upgrade to v1.79.3 or later.

go.opentelemetry.io/otel/sdk v1.38.0 contains a high-severity code execution vulnerability (GHSA-9h8m-3fm2-qjrq, CVE-2026-24051, CVSS 7.0) on macOS due to unsafe command execution. Upgrade to v1.40.0 or later.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/feegrant/go.mod` at line 25, Update the vulnerable module versions in
go.mod: bump google.golang.org/grpc to v1.79.3 (or later) and
go.opentelemetry.io/otel/sdk to v1.40.0 (or later); then run go get
google.golang.org/grpc@v1.79.3 and go get go.opentelemetry.io/otel/sdk@v1.40.0
(or use your chosen newer versions) and run go mod tidy to update the lockfile
and ensure dependent packages are resolved; target the module names
google.golang.org/grpc and go.opentelemetry.io/otel/sdk in the file and verify
tests/build succeed.

Comment thread x/nft/go.mod
Comment on lines +23 to +24
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5
google.golang.org/grpc v1.75.0

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n x/nft/go.mod | head -30

Repository: InjectiveLabs/cosmos-sdk

Length of output: 1294


🏁 Script executed:

cat -n x/nft/go.mod | sed -n '154,160p'

Repository: InjectiveLabs/cosmos-sdk

Length of output: 565


🏁 Script executed:

wc -l x/nft/go.mod

Repository: InjectiveLabs/cosmos-sdk

Length of output: 84


🌐 Web query:

GHSA-p77j-4mvh-x3m3 google.golang.org/grpc

💡 Result:

GHSA-p77j-4mvh-x3m3 is a critical authorization bypass vulnerability (CVE-2026-33186) in google.golang.org/grpc (gRPC-Go). Summary: The gRPC-Go server improperly validates the HTTP/2 :path pseudo-header, accepting requests without a leading slash (e.g., "Service/Method" instead of "/Service/Method"). This allows bypassing path-based authorization interceptors like those in grpc/authz, as they match on the raw non-canonical path. Affected: Versions < 1.79.3 of google.golang.org/grpc. Fixed: Upgrade to v1.79.3 or later. Severity: CVSS 3.1 9.1 (Critical) - AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N. CWE-285 (Improper Authorization). Impact: Affects servers using path-based auth interceptors with deny rules for canonical paths and fallback allow. Exploitable remotely by sending malformed HTTP/2 frames. Published: March 17-18, 2026. As of 2026-03-30, patched and recent. Mitigation: Upgrade immediately. Optionally, add custom unary interceptor to reject non-/ paths with codes.Unimplemented.


🌐 Web query:

GHSA-9h8m-3fm2-qjrq go.opentelemetry.io/otel

💡 Result:

GHSA-9h8m-3fm2-qjrq is a high-severity security vulnerability (CVSS 7.0) in the OpenTelemetry Go SDK package go.opentelemetry.io/otel/sdk, affecting versions >=1.21.0 and <=1.39.0. It allows arbitrary code execution via local PATH hijacking on macOS/Darwin systems. The resource detection code in sdk/resource/host_id.go executes the 'ioreg' system command without a secure PATH, enabling an attacker with local access to modify PATH to achieve ACE in the application's context. Patched in v1.40.0 (commit d45961b). Aliases: CVE-2026-24051, GO-2026-4394. Published Feb 2, 2026.


Resolve critical security vulnerabilities in dependencies.

This module pins versions with known high-severity and critical vulnerabilities that are flagged in dependency review:

  • google.golang.org/grpc v1.75.0 (GHSA-p77j-4mvh-x3m3): Critical authorization bypass—affects versions < 1.79.3. Upgrade to v1.79.3 or later.
  • go.opentelemetry.io/otel/sdk v1.38.0 (GHSA-9h8m-3fm2-qjrq): High-severity arbitrary code execution via local PATH hijacking—affects versions 1.21.0 to 1.39.0. Upgrade to v1.40.0 or later.

Both packages require bumping before merge (lines 24 and 154–160).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/nft/go.mod` around lines 23 - 24, Update the vulnerable dependency versions
in go.mod: bump google.golang.org/grpc from v1.75.0 to v1.79.3 (or later) and
bump go.opentelemetry.io/otel/sdk from v1.38.0 to v1.40.0 (or later); adjust any
other referenced otel modules in the same section (lines around the otel/sdk
block) to compatible versions, then run go get ./... or go get
<module>@<version> and go mod tidy and run tests/build to ensure no breakage.
Ensure the module names exactly match google.golang.org/grpc and
go.opentelemetry.io/otel/sdk so the dependency review picks up the fixed
versions.

Comment thread x/params/keeper/keeper.go
Comment on lines +81 to +87
func (k *Keeper) Meter(ctx context.Context) metrics.Meter {
if k.meter == nil {
k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(proposal.ModuleName, metrics.Tag("svc", proposal.ModuleName))
}

return k.meter
}

⚠️ Potential issue | 🔴 Critical

Data race on lazy meter initialization.

The Meter() method performs unsynchronized read-write on k.meter. Concurrent gRPC requests can race on the nil check and assignment, leading to undefined behavior.

Since keepers are typically long-lived and shared across goroutines handling concurrent requests, this needs synchronization.

🔒 Proposed fix using sync.Once
 import (
 	"context"
+	"sync"
 	"cosmossdk.io/log"
 	storetypes "cosmossdk.io/store/types"
 	metrics "github.com/InjectiveLabs/metrics/v2"
 ...
 
 type Keeper struct {
+	meterOnce   sync.Once
 	meter       metrics.Meter
 	cdc         codec.BinaryCodec
 ...

 func (k *Keeper) Meter(ctx context.Context) metrics.Meter {
-	if k.meter == nil {
-		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(proposal.ModuleName, metrics.Tag("svc", proposal.ModuleName))
-	}
-
+	k.meterOnce.Do(func() {
+		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(proposal.ModuleName, metrics.Tag("svc", proposal.ModuleName))
+	})
 	return k.meter
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/params/keeper/keeper.go` around lines 81 - 87, The Meter method has a data
race: it reads and writes k.meter without synchronization (nil check +
assignment) causing races for concurrent gRPC requests; fix by making lazy
initialization thread-safe—add a sync.Once (or a mutex) field on Keeper (e.g.,
meterOnce or meterMu) and use it in Meter to ensure the SubMeter creation and
assignment to k.meter happens exactly once, while subsequent calls simply return
k.meter; reference the Meter method, k.meter, and Keeper type when applying this
change.

Comment thread x/evidence/keeper/abci.go Outdated
case comet.LightClientAttack, comet.DuplicateVote:
evidence := types.FromABCIEvidence(evidences.Get(i), k.stakingKeeper.ConsensusAddressCodec())
-	err := k.handleEquivocationEvidence(ctx, evidence)
+	err := k.handleEquivocationEvidence(sdkCtx, evidence)

Check warning

Code scanning / CodeQL

Panic in BeginBock or EndBlock consensus methods

path flow from Begin/EndBlock to a panic call (×5)
Comment thread x/staking/keeper/abci.go

defer telemetry.ModuleMeasureSince(types.ModuleName, telemetry.Now(), telemetry.MetricKeyBeginBlocker)
-	return k.TrackHistoricalInfo(ctx)
+	return k.TrackHistoricalInfo(sdkCtx)

Check warning

Code scanning / CodeQL

Panic in BeginBock or EndBlock consensus methods

path flow from Begin/EndBlock to a panic call (×5)
Comment thread x/staking/keeper/abci.go

defer telemetry.ModuleMeasureSince(types.ModuleName, telemetry.Now(), telemetry.MetricKeyEndBlocker)
-	return k.BlockValidatorUpdates(ctx)
+	return k.BlockValidatorUpdates(sdkCtx)

Check warning

Code scanning / CodeQL

Panic in BeginBock or EndBlock consensus methods

path flow from Begin/EndBlock to a panic call (×5)
@socket-security

socket-security Bot commented Mar 31, 2026

Warning

Review the following alerts detected in dependencies.

According to your organization's Security Policy, it is recommended to resolve "Warn" alerts. Learn more about Socket for GitHub.

Action Severity Alert  (click "▶" to expand/collapse)
Warn Critical
Critical CVE: HashiCorp go-getter Vulnerable to Argument Injection When Fetching Remote Default Git Branches in golang github.com/hashicorp/go-getter

CVE: GHSA-q64h-39hv-4cf7 HashiCorp go-getter Vulnerable to Argument Injection When Fetching Remote Default Git Branches (CRITICAL)

Affected versions: >= 1.5.9 < 1.7.4

Patched version: 1.7.4

From: golang/cosmossdk.io/x/upgrade@v0.0.0-20230614103911-b3da8bb4e801 → golang/github.com/hashicorp/go-getter@v1.7.1

ℹ Read more on: This package | This alert | What is a critical CVE?

Next steps: Take a moment to review the security alert above. Review the linked package source code to understand the potential risk. Ensure the package is not malicious before proceeding. If you're unsure how to proceed, reach out to your security team or ask the Socket team for help at support@socket.dev.

Suggestion: Remove or replace dependencies that include known critical CVEs. Consumers can use dependency overrides or npm audit fix --force to remove vulnerable dependencies.

Mark the package as acceptable risk. To ignore this alert only in this pull request, reply with the comment @SocketSecurity ignore golang/github.com/hashicorp/go-getter@v1.7.1. You can also ignore all packages with @SocketSecurity ignore-all. To ignore an alert for all future pull requests, use Socket's Dashboard to change the triage state of this alert.

Warn Critical
Critical CVE: Misuse of ServerConfig.PublicKeyCallback may cause authorization bypass in golang golang.org/x/crypto

CVE: GHSA-v778-237x-gjrc Misuse of ServerConfig.PublicKeyCallback may cause authorization bypass in golang.org/x/crypto (CRITICAL)

Affected versions: < 0.31.0

Patched version: 0.31.0

From: golang/cosmossdk.io/depinject@v1.0.0-alpha.4 → golang/cosmossdk.io/errors@v1.0.1 → golang/cosmossdk.io/core@v0.11.0 → golang/cosmossdk.io/api@v0.7.5 → golang/github.com/spf13/viper@v1.19.0 → golang/cosmossdk.io/x/upgrade@v0.0.0-20230614103911-b3da8bb4e801 → golang/github.com/cosmos/cosmos-sdk@v0.50.6 → golang/github.com/hashicorp/go-getter@v1.7.4 → golang/cosmossdk.io/store@v1.1.1 → golang/cosmossdk.io/api@v0.7.6 → golang/cosmossdk.io/depinject@v1.1.0 → golang/github.com/cosmos/cosmos-db@v1.1.1 → golang/github.com/grpc-ecosystem/grpc-gateway@v1.16.0 → golang/golang.org/x/crypto@v0.26.0

ℹ Read more on: This package | This alert | What is a critical CVE?

Next steps: Take a moment to review the security alert above. Review the linked package source code to understand the potential risk. Ensure the package is not malicious before proceeding. If you're unsure how to proceed, reach out to your security team or ask the Socket team for help at support@socket.dev.

Suggestion: Remove or replace dependencies that include known critical CVEs. Consumers can use dependency overrides or npm audit fix --force to remove vulnerable dependencies.

Mark the package as acceptable risk. To ignore this alert only in this pull request, reply with the comment @SocketSecurity ignore golang/golang.org/x/crypto@v0.26.0. You can also ignore all packages with @SocketSecurity ignore-all. To ignore an alert for all future pull requests, use Socket's Dashboard to change the triage state of this alert.

View full report

@dbrajovic dbrajovic requested review from kakysha and maxim-inj March 31, 2026 13:00

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
x/circuit/go.mod (1)

22-22: ⚠️ Potential issue | 🔴 Critical

Block merge on vulnerable grpc and otel/sdk pins.

Line 22 (google.golang.org/grpc v1.75.0) and Line 159 (go.opentelemetry.io/otel/sdk v1.38.0) are both explicitly flagged in this PR’s dependency review as vulnerable (GHSA-p77j-4mvh-x3m3, GHSA-9h8m-3fm2-qjrq). Please bump to patched versions before merge.

🔧 Minimal version bump sketch
- google.golang.org/grpc v1.75.0
+ google.golang.org/grpc v1.79.3
...
- go.opentelemetry.io/otel/sdk v1.38.0 // indirect
+ go.opentelemetry.io/otel/sdk v1.40.0 // indirect

Also applies to: 159-159

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/circuit/go.mod` at line 22, The go.mod pins for google.golang.org/grpc and
go.opentelemetry.io/otel/sdk are vulnerable; update the grpc and otel/sdk module
lines to the patched versions cited in the security advisories (replace
"google.golang.org/grpc v1.75.0" and "go.opentelemetry.io/otel/sdk v1.38.0" with
their respective fixed versions), then run a module update (eg. go get
<module>@<patched-version>), run go mod tidy to refresh go.sum, and run the test
suite (go test ./...) to verify; commit the updated go.mod and go.sum.
x/feegrant/go.mod (1)

25-25: ⚠️ Potential issue | 🔴 Critical

Security blocker: upgrade vulnerable grpc and otel/sdk versions.

Line 25 (google.golang.org/grpc v1.75.0) and Line 163 (go.opentelemetry.io/otel/sdk v1.38.0) match the dependency-review failures in this PR. Please bump both to patched versions before merge.

Also applies to: 163-163

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/feegrant/go.mod` at line 25, Update the vulnerable module versions for
google.golang.org/grpc and go.opentelemetry.io/otel/sdk in the go.mod by bumping
the entries for "google.golang.org/grpc" and "go.opentelemetry.io/otel/sdk" to
their patched releases (use a non-vulnerable/minimum patched version), then run
go mod tidy to update go.sum and ensure the dependency-review passes;
specifically edit the module lines for google.golang.org/grpc and
go.opentelemetry.io/otel/sdk, set them to patched versions, run go mod tidy, and
re-run the dependency-check to verify the fix.
x/evidence/go.mod (1)

26-26: ⚠️ Potential issue | 🔴 Critical

Vulnerable dependency versions still present.

Line 26 (google.golang.org/grpc v1.75.0) and Line 161 (go.opentelemetry.io/otel/sdk v1.38.0) are still on vulnerable versions already reported by CI. This should be patched before merge to unblock dependency review.

Also applies to: 161-161

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/evidence/go.mod` at line 26, The go.mod currently pins vulnerable
dependency versions for google.golang.org/grpc and go.opentelemetry.io/otel/sdk;
update the module versions for the symbols "google.golang.org/grpc" and
"go.opentelemetry.io/otel/sdk" to the patched non-vulnerable releases
recommended by the security advisories, then regenerate modules (e.g., run
module tidy/update) and re-run CI to verify the vulnerability warnings are
resolved before merging.
🧹 Nitpick comments (2)
x/bank/module.go (1)

181-188: Type assertion silently skips instrumentation for non-BaseKeeper implementations.

If am.keeper is neither keeper.BaseKeeper nor *keeper.BaseKeeper (e.g., a mock or wrapped implementation), the timing instrumentation is silently skipped. While this may be acceptable for test scenarios, consider logging or adding an explicit default case if instrumentation is expected in production.

Additionally, note that the relevant snippet shows Keeper is an interface (line 22-62 of x/bank/keeper/keeper.go), and Meter() is only defined on BaseKeeper. This pattern is functional but creates an implicit contract that production keepers must be BaseKeeper.

💡 Optional: Add Meter to the Keeper interface for uniform instrumentation

If instrumentation should apply uniformly, consider extending the Keeper interface to include Meter(), or add a separate interface:

// In keeper.go
type MeterProvider interface {
    Meter(ctx context.Context) metrics.Meter
}

// In module.go EndBlock
if mp, ok := am.keeper.(keeper.MeterProvider); ok {
    sdkCtx := sdk.UnwrapSDKContext(ctx)
    defer mp.Meter(sdkCtx).FuncTiming(&sdkCtx, "EndBlocker")()
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/bank/module.go` around lines 181 - 188, The switch on am.keeper only
handles keeper.BaseKeeper and *keeper.BaseKeeper so instrumentation is silently
skipped for other implementations; update EndBlocker in module.go to explicitly
handle the default case (log a warning) or, better, detect a Meter-capable
keeper via an interface (e.g., add/recognize a MeterProvider interface with
Meter(ctx) or extend Keeper) and use a type assertion like `mp, ok :=
am.keeper.(keeper.MeterProvider)` to call mp.Meter(sdkCtx).FuncTiming(&sdkCtx,
"EndBlocker")(), ensuring you reference am.keeper, keeper.BaseKeeper, Keeper
(keeper.go) and Meter() when making the change so production keepers are
consistently instrumented.
x/feegrant/module/abci.go (1)

12-15: Use sdkCtx for the RemoveExpiredAllowances call.

EndBlocker already unwraps to sdkCtx on Line 12, but Line 15 passes ctx. Passing sdkCtx keeps this path consistent with x/feegrant/keeper/msg_server.go and avoids an extra unwrap in the callee.

♻️ Suggested change
- return k.RemoveExpiredAllowances(ctx, 200)
+ return k.RemoveExpiredAllowances(sdkCtx, 200)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/feegrant/module/abci.go` around lines 12 - 15, EndBlocker unwraps the
context into sdkCtx but calls k.RemoveExpiredAllowances with the original ctx,
causing an extra unwrap; change the call to pass sdkCtx instead of ctx (i.e.,
call k.RemoveExpiredAllowances(sdkCtx, 200)) so the code consistently uses the
unwrapped context used by EndBlocker and matches
x/feegrant/keeper/msg_server.go's pattern.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@x/nft/go.mod`:
- Around line 192-193: The replace directives in go.mod pointing to
github.com/InjectiveLabs/cometbft at revision
v1.0.2-0.20260406101044-d1e31b5e53f6 must not be merged until upstream PR#10 is
merged and tagged; revert or remove these replace lines (the two lines
referencing github.com/cometbft/cometbft and github.com/cometbft/cometbft/api)
from go.mod now, or change them back to the previous stable module versions, and
once upstream PR#10 is merged and a proper tag is created, update the replace
directives (or switch to the released tag) to the new stable version in go.mod
and vendor as needed.

---

Duplicate comments:
In `@x/circuit/go.mod`:
- Line 22: The go.mod pins for google.golang.org/grpc and
go.opentelemetry.io/otel/sdk are vulnerable; update the grpc and otel/sdk module
lines to the patched versions cited in the security advisories (replace
"google.golang.org/grpc v1.75.0" and "go.opentelemetry.io/otel/sdk v1.38.0" with
their respective fixed versions), then run a module update (eg. go get
<module>@<patched-version>), run go mod tidy to refresh go.sum, and run the test
suite (go test ./...) to verify; commit the updated go.mod and go.sum.

In `@x/evidence/go.mod`:
- Line 26: The go.mod currently pins vulnerable dependency versions for
google.golang.org/grpc and go.opentelemetry.io/otel/sdk; update the module
versions for the symbols "google.golang.org/grpc" and
"go.opentelemetry.io/otel/sdk" to the patched non-vulnerable releases
recommended by the security advisories, then regenerate modules (e.g., run
module tidy/update) and re-run CI to verify the vulnerability warnings are
resolved before merging.

In `@x/feegrant/go.mod`:
- Line 25: Update the vulnerable module versions for google.golang.org/grpc and
go.opentelemetry.io/otel/sdk in the go.mod by bumping the entries for
"google.golang.org/grpc" and "go.opentelemetry.io/otel/sdk" to their patched
releases (use a non-vulnerable/minimum patched version), then run go mod tidy to
update go.sum and ensure the dependency-review passes; specifically edit the
module lines for google.golang.org/grpc and go.opentelemetry.io/otel/sdk, set
them to patched versions, run go mod tidy, and re-run the dependency-check to
verify the fix.

---

Nitpick comments:
In `@x/bank/module.go`:
- Around line 181-188: The switch on am.keeper only handles keeper.BaseKeeper
and *keeper.BaseKeeper so instrumentation is silently skipped for other
implementations; update EndBlocker in module.go to explicitly handle the default
case (log a warning) or, better, detect a Meter-capable keeper via an interface
(e.g., add/recognize a MeterProvider interface with Meter(ctx) or extend Keeper)
and use a type assertion like `mp, ok := am.keeper.(keeper.MeterProvider)` to
call mp.Meter(sdkCtx).FuncTiming(&sdkCtx, "EndBlocker")(), ensuring you
reference am.keeper, keeper.BaseKeeper, Keeper (keeper.go) and Meter() when
making the change so production keepers are consistently instrumented.

In `@x/feegrant/module/abci.go`:
- Around line 12-15: EndBlocker unwraps the context into sdkCtx but calls
k.RemoveExpiredAllowances with the original ctx, causing an extra unwrap; change
the call to pass sdkCtx instead of ctx (i.e., call
k.RemoveExpiredAllowances(sdkCtx, 200)) so the code consistently uses the
unwrapped context used by EndBlocker and matches
x/feegrant/keeper/msg_server.go's pattern.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1418a7d9-a69e-4a4c-ad57-8f0c43294fd2

📥 Commits

Reviewing files that changed from the base of the PR and between f45aba8 and 52cd8ba.

⛔ Files ignored due to path filters (9)
  • go.sum is excluded by !**/*.sum
  • simapp/go.sum is excluded by !**/*.sum
  • store/go.sum is excluded by !**/*.sum
  • tests/go.sum is excluded by !**/*.sum
  • x/circuit/go.sum is excluded by !**/*.sum
  • x/evidence/go.sum is excluded by !**/*.sum
  • x/feegrant/go.sum is excluded by !**/*.sum
  • x/nft/go.sum is excluded by !**/*.sum
  • x/upgrade/go.sum is excluded by !**/*.sum
📒 Files selected for processing (18)
  • go.mod
  • simapp/go.mod
  • store/go.mod
  • tests/go.mod
  • x/authz/module/abci.go
  • x/bank/module.go
  • x/circuit/go.mod
  • x/crisis/abci.go
  • x/distribution/abci.go
  • x/evidence/go.mod
  • x/feegrant/go.mod
  • x/feegrant/module/abci.go
  • x/gov/abci.go
  • x/group/module/abci.go
  • x/mint/abci.go
  • x/nft/go.mod
  • x/slashing/abci.go
  • x/upgrade/go.mod
✅ Files skipped from review due to trivial changes (3)
  • x/distribution/abci.go
  • simapp/go.mod
  • go.mod
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/go.mod
  • store/go.mod
  • x/upgrade/go.mod

Comment thread x/nft/go.mod
Comment on lines +192 to +193
github.com/cometbft/cometbft => github.com/InjectiveLabs/cometbft v1.0.2-0.20260406101044-d1e31b5e53f6
github.com/cometbft/cometbft/api => github.com/InjectiveLabs/cometbft/api v1.0.1-0.20260406101044-d1e31b5e53f6

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the CometBFT fork commit exists and check its status

echo "Checking CometBFT fork commit existence..."
gh api repos/InjectiveLabs/cometbft/commits/d1e31b5e53f6 --jq '.sha, .commit.message' 2>/dev/null || echo "Commit not found or API error"

echo ""
echo "Checking for related PRs in InjectiveLabs/cometbft..."
gh pr list --repo InjectiveLabs/cometbft --state all --limit 5 --json number,title,state,mergedAt

Repository: InjectiveLabs/cosmos-sdk

Length of output: 423


🏁 Script executed:

# First, let's locate and inspect the go.mod file to confirm the stated versions
find . -name "go.mod" -path "*/x/nft/*" -type f

Repository: InjectiveLabs/cosmos-sdk

Length of output: 82


🌐 Web query:

InjectiveLabs cometbft fork PR 10 status

💡 Result:

InjectiveLabs cometbft PR #10 (“[c-930] use metrics v2 to trace relevant code paths”) is currently Open (opened Mar 30, 2026) by dbrajovic, proposing to merge 13 commits into the v1.x-inj base branch from c-930/trace-everything. [1][2]

As of the latest PR page state, it’s awaiting review (review requested Mar 31, 2026) and shows a pre-merge check warning for Docstring Coverage (73.33% < 80% required). [2]


🏁 Script executed:

# Check if we can find the specific go.mod file and view the relevant lines
fd "go.mod" | grep -E "nft/go.mod|x/nft" || fd "go.mod" | head -20

Repository: InjectiveLabs/cosmos-sdk

Length of output: 80


🏁 Script executed:

# Read the specific lines from the go.mod file to verify the versions
sed -n '190,195p' ./x/nft/go.mod

Repository: InjectiveLabs/cosmos-sdk

Length of output: 346


Block merge until upstream PR#10 is merged and tagged.

The replace directives correctly point to the InjectiveLabs CometBFT fork with the new revision v1.0.2-0.20260406101044-d1e31b5e53f6. However, the upstream PR (InjectiveLabs/cometbft#10) is currently open and awaiting review with a failing pre-merge check (Docstring Coverage: 73.33% < 80% required). This PR must be merged, tagged, and publicly available before this change can be merged.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/nft/go.mod` around lines 192 - 193, The replace directives in go.mod
pointing to github.com/InjectiveLabs/cometbft at revision
v1.0.2-0.20260406101044-d1e31b5e53f6 must not be merged until upstream PR#10 is
merged and tagged; revert or remove these replace lines (the two lines
referencing github.com/cometbft/cometbft and github.com/cometbft/cometbft/api)
from go.mod now, or change them back to the previous stable module versions, and
once upstream PR#10 is merged and a proper tag is created, update the replace
directives (or switch to the released tag) to the new stable version in go.mod
and vendor as needed.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
x/staking/keeper/validator.go (2)

459-477: ⚠️ Potential issue | 🟠 Major

IterateLastValidatorPowers also needs to close its iterator.

LastValidatorsIterator returns a closeable iterator, but this method never closes it before returning or on early errors.

Proposed fix
 	iter, err := k.LastValidatorsIterator(sdkCtx)
 	if err != nil {
 		return err
 	}
+	defer iter.Close()
 
 	for ; iter.Valid(); iter.Next() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/staking/keeper/validator.go` around lines 459 - 477, The iterator returned
by LastValidatorsIterator in IterateLastValidatorPowers is never closed; after
checking err from k.LastValidatorsIterator(sdkCtx) immediately call defer
iter.Close() (or otherwise ensure iter.Close() runs on all returns) so the
iterator is closed on normal exit and on early breaks/returns; update the block
that obtains iter (and any early-return paths inside the loop that use
iter.Valid(), iter.Next(), and k.cdc.Unmarshal) to rely on this deferred close
to release resources.

338-353: ⚠️ Potential issue | 🟠 Major

Close the iterator in GetValidators.

The iterator opened at Line 338 is never closed. This leaks a store iterator on every call.

Proposed fix
 	iterator, err := store.Iterator(types.ValidatorsKey, storetypes.PrefixEndBytes(types.ValidatorsKey))
 	if err != nil {
 		return nil, err
 	}
+	defer iterator.Close()
 
 	i := 0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/staking/keeper/validator.go` around lines 338 - 353, In GetValidators, the
store iterator returned by store.Iterator (variable iterator) is not closed;
after creating iterator, immediately add a defer iterator.Close() so the
iterator is always released (even on early returns/errors) — locate the
store.Iterator call in GetValidators and add the defer right after the err check
to ensure iterator.Close() is invoked before returning.
x/staking/keeper/delegation.go (1)

390-410: ⚠️ Potential issue | 🟠 Major

Close these iterators before returning.

Both functions open a store iterator and never call Close(). On hot paths this leaks iterator resources and can degrade store performance over time.

Suggested fix
 	iterator, err := store.Iterator(delegatorPrefixKey, storetypes.PrefixEndBytes(delegatorPrefixKey))
 	if err != nil {
 		return err
 	}
+	defer iterator.Close()

 	iterator, err := store.Iterator(delegatorPrefixKey, storetypes.PrefixEndBytes(delegatorPrefixKey))
 	if err != nil {
 		return nil, err
 	}
+	defer iterator.Close()

Also applies to: 625-648

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/staking/keeper/delegation.go` around lines 390 - 410, The iterator returned
by store.Iterator in IterateDelegatorRedelegations is never closed, leaking
resources; update the function (and the similar iteration function handling
redelegations by src/dst) to call iterator.Close() via a defer immediately after
successful iterator creation (i.e., after iterator, err := store.Iterator(...)),
and ensure any early returns (including error returns from types.UnmarshalRED or
cb-induced breaks) still allow the iterator to be closed by using the deferred
Close; reference the iterator variable from store.Iterator and the function
IterateDelegatorRedelegations (and the other redelegation iteration function)
when applying this fix.
🧹 Nitpick comments (2)
x/distribution/keeper/keeper.go (1)

75-87: This meter cache is lost on value receivers.

The newly instrumented methods in this file all use func (k Keeper) ..., but Meter caches onto *Keeper. Each call gets its own receiver copy, so k.meter = ... only updates the stack copy and the lazy cache never persists. If the intent is to cache one module meter, switch these methods to pointer receivers or initialize/store the meter once in NewKeeper.

Also applies to: 237-243
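The value-receiver pitfall can be shown in isolation — with func (k keeper) the assignment inside Meter lands on a copy, so the field on the original is never populated (the types below are simplified stand-ins, not the actual distribution Keeper):

```go
package main

import "fmt"

type meter struct{ name string }

// valueKeeper caches through a value receiver: the write to k.m
// mutates a copy that is discarded when Meter returns.
type valueKeeper struct{ m *meter }

func (k valueKeeper) Meter() *meter {
	if k.m == nil {
		k.m = &meter{name: "distribution"}
	}
	return k.m
}

// ptrKeeper caches through a pointer receiver: the first call
// populates the shared field and later calls reuse it.
type ptrKeeper struct{ m *meter }

func (k *ptrKeeper) Meter() *meter {
	if k.m == nil {
		k.m = &meter{name: "distribution"}
	}
	return k.m
}

func main() {
	vk := valueKeeper{}
	first, second := vk.Meter(), vk.Meter()
	fmt.Println(first == second, vk.m == nil) // false true: rebuilt each call, nothing cached

	pk := &ptrKeeper{}
	fmt.Println(pk.Meter() == pk.Meter()) // true: one shared instance
}
```

Initializing the meter once in NewKeeper sidesteps both this copy problem and the data race flagged on x/params.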

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/distribution/keeper/keeper.go` around lines 75 - 87, The Meter is being
cached on *Keeper but these methods use value receivers (func (k Keeper) ...),
so assignments to k.meter only affect the copy and the cached meter is lost;
change the receiver of instrumented methods (e.g., Logger, SetWithdrawAddr and
the other affected methods around the area referenced) from value to pointer
(func (k *Keeper) ...) so k.meter updates persist, or alternatively initialize
and store the module meter once in NewKeeper and have methods read the stored
meter; update all methods that call k.Meter to use the pointer receiver pattern
to fix the caching bug.
x/circuit/keeper/keeper.go (1)

18-19: The lazy meter cache does not persist across the common value-copy call path.

Meter mutates k.meter, but msgServer still embeds Keeper by value in x/circuit/keeper/msg_server.go. That means each RPC can initialize the meter on a copy and then discard it at return, so this is not really a cached module meter. If you want one shared meter instance, initialize it in NewKeeper or make the shared owners hold/use *Keeper consistently.

Also applies to: 84-89

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/circuit/keeper/keeper.go` around lines 18 - 19, The lazy meter caching is
ineffective because k.meter gets set on a Keeper value copy (msgServer embeds
Keeper by value), so the meter is discarded after each RPC; fix by initializing
and assigning the shared metrics.Meter in NewKeeper (set k.meter once on the
keeper instance returned by NewKeeper) and also update msg_server.go so
msgServer embeds/holds *Keeper instead of Keeper (or conversely ensure all
shared owners use *Keeper), and apply the same change for the other cached
fields referenced around lines 84-89 to ensure the meter/cache persists across
RPC calls.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@x/auth/keeper/keeper.go`:
- Around line 107-108: The cached meter on AccountKeeper is lost because many
methods (e.g., GetPubKey, GetSequence, GetModuleAccount) use value receivers
while Meter is a pointer receiver, causing writes to ak.Meter(ctx) to affect
only a copy; fix by making the instrumented path consistently pointer-based:
change the receiver of all AccountKeeper methods that access ak.Meter (including
GetPubKey, GetSequence, GetModuleAccount, any other methods using ak.Meter) from
value to pointer receivers (func (ak *AccountKeeper) ...) and update the query
server to store and use *AccountKeeper instead of AccountKeeper; alternatively,
if you prefer not to use shared state, remove the cached meter field and
compute/return the meter directly from Meter, but be consistent across
AccountKeeper methods and query server to ensure the meter is properly shared.

In `@x/authz/keeper/keeper.go`:
- Around line 593-599: The instrumented methods (e.g., getGrant, update,
DispatchActions, SaveGrant, DeleteGrant, GetAuthorizations, GetAuthorization,
IterateGrants) are defined with value receivers, so calls to k.Meter(ctx) mutate
a copy and the meter cache is never persisted; change those methods' receivers
to pointer receivers (e.g., func (k *Keeper) ...) so that k.Meter(ctx) updates
the actual Keeper instance's meter field and the lazy cache takes effect. Ensure
all methods that call k.Meter or otherwise mutate Keeper state use *Keeper
receivers consistently to avoid cache recreation.

In `@x/bank/keeper/view.go`:
- Around line 73-74: The cached meter on BaseViewKeeper is re-initialized per
call because instrumented methods use value receivers while Meter() is a pointer
receiver; fix by making the keeper shared: change NewBaseViewKeeper to return
*BaseViewKeeper (allocate and return a pointer) and convert all instrumented
methods that currently have receivers like (k BaseViewKeeper) to pointer
receivers (k *BaseViewKeeper) so they use the same instance and the meter field
is preserved, and ensure Meter() remains a pointer method so SubMeter(...) is
built once and reused.

In `@x/feegrant/keeper/keeper.go`:
- Around line 188-190: The expiry-queue deletion uses the wrong key order: when
enqueuing in GrantAllowance the code uses FeeAllowanceKey(granter, grantee)[1:]
but the deletion uses FeeAllowanceKey(grantee, granter)[1:], leaving stale queue
entries; update the deletion in the expiry handling to use
FeeAllowanceKey(granter, grantee)[1:] (i.e., match the same key ordering used by
GrantAllowance when calling FeeAllowancePrefixQueue/FeeAllowanceKey) so
RemoveExpiredAllowances removes the exact entry that was inserted.
- Around line 421-426: Meter() currently caches a meter on the Keeper pointer
but most other methods use value receivers so calls to k.Meter(ctx) operate on
copies and the cache is ineffective; fix by making the instrumented methods use
pointer receivers so they call Meter on the same Keeper instance — change the
receiver from (k Keeper) to (k *Keeper) for Logger, GrantAllowance,
UpdateAllowance, revokeAllowance, GetAllowance, getGrant,
IterateAllFeeAllowances, UseGrantedFees, InitGenesis, ExportGenesis,
addToFeeAllowanceQueue, RemoveExpiredAllowances, and CheckGrantedFee so the
cached k.meter is reused (alternatively, if you prefer stateless Meter, remove
caching inside Meter and return a fresh sub-meter each call).

In `@x/gov/keeper/grpc_query.go`:
- Around line 317-320: The legacy v1beta1 wrapper handlers (e.g.,
legacyQueryServer.Proposal, Proposals, Vote, Votes, Params, Deposit, Deposits,
TallyResult) are adding duplicate timings before delegating to the v1 handlers;
remove the defer q.k.Meter(ctx).FuncTiming(&sdkCtx, "<Name>")(&err) lines from
these wrapper methods so only the underlying v1 handlers record timing, keeping
the sdkCtx unwrap and the rest of the method intact; locate the timing defer in
each of the listed functions and delete that single defer call (do not change
the call sites that the wrappers delegate to).
- Around line 64-68: The Proposals handler dereferences req.Pagination before
checking req for nil; add a nil guard at the start of queryServer.Proposals
(e.g., if req == nil { return nil, status.Error(codes.InvalidArgument, "empty
request") }) so it returns codes.InvalidArgument instead of panicking, keeping
the rest of the function (sdkCtx unwrap, defer q.k.Meter.FuncTiming, and the
call to query.CollectionFilteredPaginate with q.k.Proposals and the pagination
handler) unchanged.

In `@x/gov/keeper/msg_server.go`:
- Around line 314-317: The legacy wrapper double-records RPC metrics by
deferring k.keeper.Meter(goCtx).FuncTiming(&sdkCtx, "SubmitProposal") and then
calling the already-instrumented v1 handler; fix this in
legacyMsgServer.SubmitProposal by either removing the outer defer timing or
renaming the metric key to a distinct name (e.g., "LegacySubmitProposal") so
samples aren't indistinguishable, and apply the same change to the other legacy
wrapper methods legacyMsgServer.Vote, legacyMsgServer.VoteWeighted, and
legacyMsgServer.Deposit to avoid duplicate metrics.

In `@x/nft/keeper/nft_batch.go`:
- Around line 96-97: The current error handling in the transfer branch masks the
real failure by always returning nft.ErrNFTNotExists; update the return to
preserve and wrap the actual error from k.transferWithNoCheck(sdkCtx, classID,
nftID, receiver) (e.g., wrap err with nftID or a clear message) instead of using
nft.ErrNFTNotExists so callers see the true cause; locate the call to
k.transferWithNoCheck and replace the errors.Wrap(nft.ErrNFTNotExists, nftID)
return with wrapping/returning the original err (referencing variables
transferWithNoCheck, err, and nftID).

In `@x/slashing/keeper/keeper.go`:
- Around line 148-153: The Keeper.Meter method currently calls
sdk.UnwrapSDKContext(ctx).Meter().SubMeter(...) and caches the result on
k.meter, which will panic if Context.Meter() is nil and incorrectly reuses a
context-scoped meter; change Meter to derive the sub-meter from the provided ctx
on every call instead of caching it: call sdk.UnwrapSDKContext(ctx).Meter(),
check for nil and fall back to the SDK noop meter (or equivalent no-op meter)
before calling SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName)),
and return that sub-meter directly; remove any use of the k.meter cache so
callers of Keeper.Meter (e.g., Logger, AddPubkey, GetPubkey, Slash,
SlashWithInfractionReason, Jail, deleteAddrPubkeyRelation) always get a safe
per-request meter.

In `@x/staking/keeper/alias_functions.go`:
- Around line 15-18: Change the receiver of the instrumented alias methods from
value to pointer so Meter() uses the same Keeper instance and can cache the
meter: update the method signatures for IterateValidators,
IterateBondedValidatorsByPower, IterateLastValidators, Validator,
ValidatorByConsAddr, Delegation, IterateDelegations, and GetAllSDKDelegations
from "func (k Keeper) ..." to "func (k *Keeper) ..."; ensure any internal uses
of k remain correct and no additional code changes are required besides the
receiver type so calls to k.Meter(ctx) will mutate the actual Keeper's meter
field rather than a copy.

In `@x/staking/keeper/delegation.go`:
- Around line 947-955: DequeueAllMatureRedelegationQueue currently ignores the
currTime parameter by calling RedelegationQueueIterator with
sdkCtx.HeaderInfo().Time; update the call to use the provided currTime instead
so the function dequeues up to the caller-specified cutoff. Locate the
DequeueAllMatureRedelegationQueue function and replace the argument to
RedelegationQueueIterator (and any related time usages in that function) from
sdkCtx.HeaderInfo().Time to currTime, preserving existing error handling and
defer timing logic.

In `@x/staking/keeper/keeper.go`:
- Line 28: The keeper's meter caching is broken because Meter() is defined with
a pointer receiver while many staking methods (Logger, GetLastTotalPower,
SetLastTotalPower, SetValidatorUpdates, GetValidatorUpdates and the other listed
functions across pool.go and delegation_receiver.go) are defined with value
receivers, so assignments to k.meter mutate only the copy; fix by making these
methods use pointer receivers (change their receiver from (k Keeper) to (k
*Keeper)) so k.Meter(ctx) and any caching (k.meter = ...) update the real
struct, or alternatively remove the caching logic entirely and always call
k.Meter(ctx) without storing to k.meter — apply consistently to the listed
methods (Logger, GetLastTotalPower, SetLastTotalPower, SetValidatorUpdates,
GetValidatorUpdates, GetBondedPool, GetNotBondedPool, bondedTokensToNotBonded,
notBondedTokensToBonded, burnBondedTokens, burnNotBondedTokens,
TotalBondedTokens, StakingTokenSupply, BondedRatio,
SetDelegationTransferReceiver, DeleteDelegationTransferReceiver,
IsAllowedDelegationTransferReceiver, GetAllAllowedDelegationTransferReceivers,
delegationTransferReceiversStore).

---

Outside diff comments:
In `@x/staking/keeper/delegation.go`:
- Around line 390-410: The iterator returned by store.Iterator in
IterateDelegatorRedelegations is never closed, leaking resources; update the
function (and the similar iteration function handling redelegations by src/dst)
to call iterator.Close() via a defer immediately after successful iterator
creation (i.e., after iterator, err := store.Iterator(...)), and ensure any
early returns (including error returns from types.UnmarshalRED or cb-induced
breaks) still allow the iterator to be closed by using the deferred Close;
reference the iterator variable from store.Iterator and the function
IterateDelegatorRedelegations (and the other redelegation iteration function)
when applying this fix.

In `@x/staking/keeper/validator.go`:
- Around line 459-477: The iterator returned by LastValidatorsIterator in
IterateLastValidatorPowers is never closed; after checking err from
k.LastValidatorsIterator(sdkCtx) immediately call defer iter.Close() (or
otherwise ensure iter.Close() runs on all returns) so the iterator is closed on
normal exit and on early breaks/returns; update the block that obtains iter (and
any early-return paths inside the loop that use iter.Valid(), iter.Next(), and
k.cdc.Unmarshal) to rely on this deferred close to release resources.
- Around line 338-353: In GetValidators, the store iterator returned by
store.Iterator (variable iterator) is not closed; after creating iterator,
immediately add a defer iterator.Close() so the iterator is always released
(even on early returns/errors) — locate the store.Iterator call in GetValidators
and add the defer right after the err check to ensure iterator.Close() is
invoked before returning.

---

Nitpick comments:
In `@x/circuit/keeper/keeper.go`:
- Around line 18-19: The lazy meter caching is ineffective because k.meter gets
set on a Keeper value copy (msgServer embeds Keeper by value), so the meter is
discarded after each RPC; fix by initializing and assigning the shared
metrics.Meter in NewKeeper (set k.meter once on the keeper instance returned by
NewKeeper) and also update msg_server.go so msgServer embeds/holds *Keeper
instead of Keeper (or conversely ensure all shared owners use *Keeper), and
apply the same change for the other cached fields referenced around lines 84-89
to ensure the meter/cache persists across RPC calls.

In `@x/distribution/keeper/keeper.go`:
- Around line 75-87: The Meter is being cached on *Keeper but these methods use
value receivers (func (k Keeper) ...), so assignments to k.meter only affect the
copy and the cached meter is lost; change the receiver of instrumented methods
(e.g., Logger, SetWithdrawAddr and the other affected methods around the area
referenced) from value to pointer (func (k *Keeper) ...) so k.meter updates
persist, or alternatively initialize and store the module meter once in
NewKeeper and have methods read the stored meter; update all methods that call
k.Meter to use the pointer receiver pattern to fix the caching bug.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5567c8ca-e5cb-489c-bb2b-d0938e9b243d

📥 Commits

Reviewing files that changed from the base of the PR and between 52cd8ba and e80adca.

⛔ Files ignored due to path filters (3)
  • go.sum is excluded by !**/*.sum
  • simapp/go.sum is excluded by !**/*.sum
  • tests/go.sum is excluded by !**/*.sum
📒 Files selected for processing (112)
  • baseapp/abci.go
  • baseapp/baseapp.go
  • go.mod
  • simapp/go.mod
  • tests/go.mod
  • x/auth/keeper/account.go
  • x/auth/keeper/genesis.go
  • x/auth/keeper/grpc_query.go
  • x/auth/keeper/keeper.go
  • x/auth/keeper/msg_server.go
  • x/authz/keeper/genesis.go
  • x/authz/keeper/grpc_query.go
  • x/authz/keeper/keeper.go
  • x/authz/keeper/msg_server.go
  • x/authz/module/abci.go
  • x/bank/keeper/genesis.go
  • x/bank/keeper/grpc_query.go
  • x/bank/keeper/keeper.go
  • x/bank/keeper/msg_server.go
  • x/bank/keeper/send.go
  • x/bank/keeper/transient_balances.go
  • x/bank/keeper/view.go
  • x/bank/keeper/virtual_balances.go
  • x/bank/module.go
  • x/circuit/keeper/genesis.go
  • x/circuit/keeper/keeper.go
  • x/circuit/keeper/msg_server.go
  • x/circuit/keeper/query.go
  • x/consensus/keeper/keeper.go
  • x/crisis/abci.go
  • x/crisis/keeper/genesis.go
  • x/crisis/keeper/keeper.go
  • x/crisis/keeper/msg_server.go
  • x/distribution/abci.go
  • x/distribution/keeper/alias_functions.go
  • x/distribution/keeper/allocation.go
  • x/distribution/keeper/delegation.go
  • x/distribution/keeper/fee_pool.go
  • x/distribution/keeper/genesis.go
  • x/distribution/keeper/grpc_query.go
  • x/distribution/keeper/hooks.go
  • x/distribution/keeper/keeper.go
  • x/distribution/keeper/msg_server.go
  • x/distribution/keeper/params.go
  • x/distribution/keeper/store.go
  • x/distribution/keeper/validator.go
  • x/evidence/keeper/abci.go
  • x/evidence/keeper/grpc_query.go
  • x/evidence/keeper/infraction.go
  • x/evidence/keeper/keeper.go
  • x/evidence/keeper/msg_server.go
  • x/feegrant/keeper/grpc_query.go
  • x/feegrant/keeper/keeper.go
  • x/feegrant/keeper/msg_server.go
  • x/feegrant/module/abci.go
  • x/gov/abci.go
  • x/gov/keeper/deposit.go
  • x/gov/keeper/grpc_query.go
  • x/gov/keeper/keeper.go
  • x/gov/keeper/msg_server.go
  • x/gov/keeper/proposal.go
  • x/gov/keeper/tally.go
  • x/gov/keeper/vote.go
  • x/group/keeper/genesis.go
  • x/group/keeper/grpc_query.go
  • x/group/keeper/keeper.go
  • x/group/keeper/msg_server.go
  • x/group/keeper/proposal_executor.go
  • x/group/keeper/tally.go
  • x/group/module/abci.go
  • x/mint/abci.go
  • x/mint/keeper/genesis.go
  • x/mint/keeper/grpc_query.go
  • x/mint/keeper/keeper.go
  • x/mint/keeper/msg_server.go
  • x/nft/keeper/class.go
  • x/nft/keeper/genesis.go
  • x/nft/keeper/grpc_query.go
  • x/nft/keeper/msg_server.go
  • x/nft/keeper/nft.go
  • x/nft/keeper/nft_batch.go
  • x/params/keeper/grpc_query.go
  • x/slashing/abci.go
  • x/slashing/keeper/genesis.go
  • x/slashing/keeper/grpc_query.go
  • x/slashing/keeper/hooks.go
  • x/slashing/keeper/infractions.go
  • x/slashing/keeper/keeper.go
  • x/slashing/keeper/msg_server.go
  • x/slashing/keeper/params.go
  • x/slashing/keeper/signing_info.go
  • x/slashing/keeper/unjail.go
  • x/staking/keeper/abci.go
  • x/staking/keeper/alias_functions.go
  • x/staking/keeper/delegation.go
  • x/staking/keeper/delegation_receiver.go
  • x/staking/keeper/genesis.go
  • x/staking/keeper/grpc_query.go
  • x/staking/keeper/historical_info.go
  • x/staking/keeper/keeper.go
  • x/staking/keeper/msg_server.go
  • x/staking/keeper/params.go
  • x/staking/keeper/pool.go
  • x/staking/keeper/power_reduction.go
  • x/staking/keeper/query_utils.go
  • x/staking/keeper/slash.go
  • x/staking/keeper/unbonding.go
  • x/staking/keeper/val_state_change.go
  • x/staking/keeper/validator.go
  • x/upgrade/keeper/grpc_query.go
  • x/upgrade/keeper/keeper.go
  • x/upgrade/keeper/msg_server.go
✅ Files skipped from review due to trivial changes (7)
  • x/group/keeper/genesis.go
  • baseapp/baseapp.go
  • simapp/go.mod
  • go.mod
  • x/bank/keeper/msg_server.go
  • x/slashing/keeper/signing_info.go
  • x/bank/keeper/grpc_query.go
🚧 Files skipped from review as they are similar to previous changes (71)
  • x/auth/keeper/genesis.go
  • x/slashing/keeper/genesis.go
  • x/nft/keeper/genesis.go
  • x/authz/module/abci.go
  • x/crisis/abci.go
  • x/gov/keeper/vote.go
  • x/mint/abci.go
  • x/staking/keeper/power_reduction.go
  • x/slashing/abci.go
  • x/mint/keeper/msg_server.go
  • x/distribution/keeper/genesis.go
  • x/mint/keeper/genesis.go
  • x/group/module/abci.go
  • x/crisis/keeper/genesis.go
  • x/auth/keeper/account.go
  • x/bank/keeper/genesis.go
  • x/feegrant/module/abci.go
  • x/gov/abci.go
  • x/evidence/keeper/infraction.go
  • x/bank/module.go
  • x/distribution/keeper/alias_functions.go
  • x/evidence/keeper/abci.go
  • x/nft/keeper/msg_server.go
  • x/distribution/keeper/params.go
  • x/distribution/keeper/fee_pool.go
  • x/slashing/keeper/unjail.go
  • x/circuit/keeper/genesis.go
  • baseapp/abci.go
  • x/gov/keeper/keeper.go
  • x/crisis/keeper/keeper.go
  • tests/go.mod
  • x/circuit/keeper/query.go
  • x/slashing/keeper/hooks.go
  • x/params/keeper/grpc_query.go
  • x/evidence/keeper/grpc_query.go
  • x/auth/keeper/msg_server.go
  • x/group/keeper/tally.go
  • x/nft/keeper/class.go
  • x/gov/keeper/proposal.go
  • x/slashing/keeper/grpc_query.go
  • x/authz/keeper/grpc_query.go
  • x/distribution/keeper/validator.go
  • x/mint/keeper/grpc_query.go
  • x/mint/keeper/keeper.go
  • x/feegrant/keeper/grpc_query.go
  • x/distribution/keeper/msg_server.go
  • x/crisis/keeper/msg_server.go
  • x/feegrant/keeper/msg_server.go
  • x/authz/keeper/msg_server.go
  • x/distribution/keeper/allocation.go
  • x/distribution/keeper/delegation.go
  • x/bank/keeper/send.go
  • x/slashing/keeper/params.go
  • x/group/keeper/keeper.go
  • x/distribution/keeper/hooks.go
  • x/staking/keeper/params.go
  • x/bank/keeper/virtual_balances.go
  • x/staking/keeper/query_utils.go
  • x/staking/keeper/slash.go
  • x/bank/keeper/keeper.go
  • x/upgrade/keeper/grpc_query.go
  • x/staking/keeper/unbonding.go
  • x/consensus/keeper/keeper.go
  • x/staking/keeper/val_state_change.go
  • x/staking/keeper/historical_info.go
  • x/gov/keeper/deposit.go
  • x/nft/keeper/nft.go
  • x/distribution/keeper/store.go
  • x/staking/keeper/grpc_query.go
  • x/staking/keeper/msg_server.go
  • x/evidence/keeper/keeper.go

Comment thread x/auth/keeper/keeper.go
Comment on lines +107 to 108
meter metrics.Meter
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify that Meter is a pointer receiver but the instrumented call sites use copied keepers.
rg -n 'func \(ak AccountKeeper\) (GetPubKey|GetSequence|NextAccountNumber|GetModuleAccountAndPermissions|GetModuleAccount|SetModuleAccount|GetParams)\(' x/auth/keeper/keeper.go
rg -n 'func \(ak \*AccountKeeper\) Meter\(' x/auth/keeper/keeper.go
rg -n 'type queryServer struct\{ k AccountKeeper \}' x/auth/keeper/grpc_query.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 917


Cached meter is lost due to value receiver semantics.

Methods like GetPubKey, GetSequence, GetModuleAccount, etc. use value receivers func (ak AccountKeeper), while the Meter method uses a pointer receiver func (ak *AccountKeeper). When these value-receiver methods call ak.Meter(ctx), the meter field is written to a copy of the keeper, not the original. The same issue occurs in the query server, which stores AccountKeeper by value.

Make the instrumented path consistently pointer-based or remove the cached field to ensure the meter is properly shared across call sites.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/auth/keeper/keeper.go` around lines 107 - 108, The cached meter on
AccountKeeper is lost because many methods (e.g., GetPubKey, GetSequence,
GetModuleAccount) use value receivers while Meter is a pointer receiver, causing
writes to ak.Meter(ctx) to affect only a copy; fix by making the instrumented
path consistently pointer-based: change the receiver of all AccountKeeper
methods that access ak.Meter (including GetPubKey, GetSequence,
GetModuleAccount, any other methods using ak.Meter) from value to pointer
receivers (func (ak *AccountKeeper) ...) and update the query server to store
and use *AccountKeeper instead of AccountKeeper; alternatively, if you prefer
not to use shared state, remove the cached meter field and compute/return the
meter directly from Meter, but be consistent across AccountKeeper methods and
query server to ensure the meter is properly shared.

Comment thread x/authz/keeper/keeper.go
Comment on lines +593 to +599
func (k *Keeper) Meter(ctx context.Context) metrics.Meter {
	if k.meter == nil {
		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(authz.ModuleName, metrics.Tag("svc", authz.ModuleName))
	}

	return k.meter
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd x/authz/keeper && wc -l keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 81


🏁 Script executed:

cd x/authz/keeper && sed -n '593,599p' keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 281


🏁 Script executed:

cd x/authz/keeper && rg -n "func \(k" keeper.go | head -30

Repository: InjectiveLabs/cosmos-sdk

Length of output: 2303


🏁 Script executed:

cd x/authz/keeper && rg "\.Meter\(" keeper.go -B 3 -A 3

Repository: InjectiveLabs/cosmos-sdk

Length of output: 5535


Convert instrumented methods to pointer receivers to make the meter cache effective.

The Meter() method is a pointer receiver, but all instrumented methods (getGrant, update, DispatchActions, SaveGrant, DeleteGrant, GetAuthorizations, GetAuthorization, IterateGrants, and others) are value receivers. When value-receiver methods call k.Meter(ctx), Go automatically passes a pointer to the local copy. This means k.meter = ... mutates the copy's field, not the keeper's, so the cache is recreated on every call. Converting these methods to pointer receivers would enable the lazy cache to work as intended.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/authz/keeper/keeper.go` around lines 593 - 599, The instrumented methods
(e.g., getGrant, update, DispatchActions, SaveGrant, DeleteGrant,
GetAuthorizations, GetAuthorization, IterateGrants) are defined with value
receivers, so calls to k.Meter(ctx) mutate a copy and the meter cache is never
persisted; change those methods' receivers to pointer receivers (e.g., func (k
*Keeper) ...) so that k.Meter(ctx) updates the actual Keeper instance's meter
field and the lazy cache takes effect. Ensure all methods that call k.Meter or
otherwise mutate Keeper state use *Keeper receivers consistently to avoid cache
recreation.

Comment thread x/bank/keeper/view.go
Comment on lines +73 to 74
meter metrics.Meter
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify that the instrumented methods use value receivers while Meter uses a pointer receiver.
rg -n 'func \(k BaseViewKeeper\) (HasBalance|GetAllBalances|GetAccountsBalances|GetBalance|IterateAccountBalances|IterateAllBalances|LockedCoins|SpendableCoins|SpendableCoin|spendableCoins|ValidateBalance)\(' x/bank/keeper/view.go
rg -n 'func \(k \*BaseViewKeeper\) Meter\(' x/bank/keeper/view.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 1276


🏁 Script executed:

# Check the meter field definition and NewBaseViewKeeper signature
sed -n '65,85p' x/bank/keeper/view.go

# Check the full Meter method implementation
sed -n '293,305p' x/bank/keeper/view.go

# Check if any value-receiver methods call Meter()
rg -n '\.Meter\(' x/bank/keeper/view.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 2032


The lazy meter cache never survives these call sites.

BaseViewKeeper is constructed by value in NewBaseViewKeeper() and all instrumented methods in this file use value receivers (func (k BaseViewKeeper)). Each instrumented method calls the pointer-receiver Meter() method, which initializes the meter on a per-call copy of the keeper, not the shared instance. As a result, SubMeter(...) is rebuilt on every invocation and the meter field remains effectively unused. Either switch the instrumented methods to pointer receivers or remove the cached meter field.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/bank/keeper/view.go` around lines 73 - 74, The cached meter on
BaseViewKeeper is re-initialized per call because instrumented methods use value
receivers while Meter() is a pointer receiver; fix by making the keeper shared:
change NewBaseViewKeeper to return *BaseViewKeeper (allocate and return a
pointer) and convert all instrumented methods that currently have receivers like
(k BaseViewKeeper) to pointer receivers (k *BaseViewKeeper) so they use the same
instance and the meter field is preserved, and ensure Meter() remains a pointer
method so SubMeter(...) is built once and reused.

Comment on lines 188 to 190
 	if exp != nil {
-		if err := store.Delete(feegrant.FeeAllowancePrefixQueue(exp, feegrant.FeeAllowanceKey(grantee, granter)[1:])); err != nil {
+		if err = store.Delete(feegrant.FeeAllowancePrefixQueue(exp, feegrant.FeeAllowanceKey(grantee, granter)[1:])); err != nil {
 			return err

⚠️ Potential issue | 🔴 Critical

Delete the same expiry-queue key that GrantAllowance inserted.

GrantAllowance enqueues FeeAllowanceKey(granter, grantee)[1:], but Line 189 deletes FeeAllowanceKey(grantee, granter)[1:]. That leaves stale queue entries behind, and RemoveExpiredAllowances can later delete a newly recreated grant when the old expiry fires.

Proposed fix
-		if err = store.Delete(feegrant.FeeAllowancePrefixQueue(exp, feegrant.FeeAllowanceKey(grantee, granter)[1:])); err != nil {
+		if err = store.Delete(feegrant.FeeAllowancePrefixQueue(exp, feegrant.FeeAllowanceKey(granter, grantee)[1:])); err != nil {
 			return err
 		}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 	if exp != nil {
-		if err = store.Delete(feegrant.FeeAllowancePrefixQueue(exp, feegrant.FeeAllowanceKey(grantee, granter)[1:])); err != nil {
+		if err = store.Delete(feegrant.FeeAllowancePrefixQueue(exp, feegrant.FeeAllowanceKey(granter, grantee)[1:])); err != nil {
 			return err
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/feegrant/keeper/keeper.go` around lines 188 - 190, The expiry-queue
deletion uses the wrong key order: when enqueuing in GrantAllowance the code
uses FeeAllowanceKey(granter, grantee)[1:] but the deletion uses
FeeAllowanceKey(grantee, granter)[1:], leaving stale queue entries; update the
deletion in the expiry handling to use FeeAllowanceKey(granter, grantee)[1:]
(i.e., match the same key ordering used by GrantAllowance when calling
FeeAllowancePrefixQueue/FeeAllowanceKey) so RemoveExpiredAllowances removes the
exact entry that was inserted.
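Why the argument order matters can be shown with a toy key builder; `feeAllowanceKey` here is a simplified stand-in for `feegrant.FeeAllowanceKey`, which encodes both addresses into the byte key:

```go
package main

import "fmt"

// feeAllowanceKey is a simplified stand-in: the resulting key depends on
// argument order, so (granter, grantee) and (grantee, granter) address
// two different store entries.
func feeAllowanceKey(a, b string) string { return "0x12/" + a + "/" + b }

func main() {
	granter, grantee := "inj1granter", "inj1grantee"
	enqueued := feeAllowanceKey(granter, grantee) // order used by GrantAllowance
	deleted := feeAllowanceKey(grantee, granter)  // order used by the buggy delete
	fmt.Println("same entry:", enqueued == deleted) // same entry: false
}
```

Since the deleted key never matches the enqueued one, the stale queue entry survives, and the delayed cleanup can later fire against a recreated grant.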

Comment on lines +421 to +426
func (k *Keeper) Meter(ctx context.Context) metrics.Meter {
	if k.meter == nil {
		k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(feegrant.ModuleName, metrics.Tag("svc", feegrant.ModuleName))
	}

	return k.meter

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -path "*feegrant/keeper/keeper.go" -type f

Repository: InjectiveLabs/cosmos-sdk

Length of output: 97


🏁 Script executed:

git ls-files | grep -E "feegrant/keeper.*keeper\.go"

Repository: InjectiveLabs/cosmos-sdk

Length of output: 95


🏁 Script executed:

wc -l x/feegrant/keeper/keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 99


🏁 Script executed:

cat -n x/feegrant/keeper/keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 15102


Most instrumented methods use value receivers, making meter caching ineffective.

Meter() with pointer receiver func (k *Keeper) Meter(...) caches the meter in k.meter. However, all other methods in this file use value receivers: Logger, GrantAllowance, UpdateAllowance, revokeAllowance, GetAllowance, getGrant, IterateAllFeeAllowances, UseGrantedFees, InitGenesis, ExportGenesis, addToFeeAllowanceQueue, RemoveExpiredAllowances, and CheckGrantedFee. When these value-receiver methods call k.Meter(ctx), they pass a pointer to their local copy, so the cached meter mutation is lost. Each call rebuilds the sub-meter instead of reusing it.

Either convert these methods to pointer receivers or make Meter() stateless.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/feegrant/keeper/keeper.go` around lines 421 - 426, Meter() currently caches
a meter on the Keeper pointer but most other methods use value receivers so
calls to k.Meter(ctx) operate on copies and the cache is ineffective; fix by
making the instrumented methods use pointer receivers so they call Meter on the
same Keeper instance — change the receiver from (k Keeper) to (k *Keeper) for
Logger, GrantAllowance, UpdateAllowance, revokeAllowance, GetAllowance,
getGrant, IterateAllFeeAllowances, UseGrantedFees, InitGenesis, ExportGenesis,
addToFeeAllowanceQueue, RemoveExpiredAllowances, and CheckGrantedFee so the
cached k.meter is reused (alternatively, if you prefer stateless Meter, remove
caching inside Meter and return a fresh sub-meter each call).
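The stateless alternative mentioned above — derive the sub-meter from the context on every call instead of caching it on the keeper — can be sketched as follows. The `Meter` interface, `noopMeter`, and `ctx` types are simplified stand-ins for the metrics and SDK context types:

```go
package main

import "fmt"

// Meter is a minimal stand-in for the metrics meter interface.
type Meter interface{ SubMeter(name string) Meter }

type noopMeter struct{}

func (noopMeter) SubMeter(string) Meter { return noopMeter{} }

type namedMeter struct{ name string }

func (m namedMeter) SubMeter(name string) Meter {
	return namedMeter{name: m.name + "/" + name}
}

// ctx stands in for the SDK context, which may or may not carry a meter.
type ctx struct{ meter Meter }

// moduleMeter derives a fresh sub-meter per call and falls back to a no-op
// meter when the context has none. No keeper field is mutated, so value
// receivers on the instrumented methods become harmless.
func moduleMeter(c ctx, module string) Meter {
	m := c.meter
	if m == nil {
		m = noopMeter{}
	}
	return m.SubMeter(module)
}

func main() {
	withMeter := ctx{meter: namedMeter{name: "root"}}
	fmt.Printf("%v\n", moduleMeter(withMeter, "feegrant")) // {root/feegrant}
	fmt.Printf("%T\n", moduleMeter(ctx{}, "feegrant"))     // main.noopMeter
}
```

This trades a tiny per-call allocation for correctness under copied keepers, which is the tradeoff the reviewer's "stateless Meter" option implies.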

Comment thread x/nft/keeper/nft_batch.go
Comment on lines +96 to 97
if err = k.transferWithNoCheck(sdkCtx, classID, nftID, receiver); err != nil {
return errors.Wrap(nft.ErrNFTNotExists, nftID)

⚠️ Potential issue | 🟠 Major

Preserve underlying transfer error instead of rewriting it as ErrNFTNotExists.

At Line 96-97, a failed k.transferWithNoCheck(...) currently always returns nft.ErrNFTNotExists, which can hide the real failure source and misclassify errors.

💡 Proposed fix
-		if err = k.transferWithNoCheck(sdkCtx, classID, nftID, receiver); err != nil {
-			return errors.Wrap(nft.ErrNFTNotExists, nftID)
-		}
+		if err = k.transferWithNoCheck(sdkCtx, classID, nftID, receiver); err != nil {
+			return err
+		}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-		if err = k.transferWithNoCheck(sdkCtx, classID, nftID, receiver); err != nil {
-			return errors.Wrap(nft.ErrNFTNotExists, nftID)
-		}
+		if err = k.transferWithNoCheck(sdkCtx, classID, nftID, receiver); err != nil {
+			return err
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/nft/keeper/nft_batch.go` around lines 96 - 97, The current error handling
in the transfer branch masks the real failure by always returning
nft.ErrNFTNotExists; update the return to preserve and wrap the actual error
from k.transferWithNoCheck(sdkCtx, classID, nftID, receiver) (e.g., wrap err
with nftID or a clear message) instead of using nft.ErrNFTNotExists so callers
see the true cause; locate the call to k.transferWithNoCheck and replace the
errors.Wrap(nft.ErrNFTNotExists, nftID) return with wrapping/returning the
original err (referencing variables transferWithNoCheck, err, and nftID).
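As a standalone illustration of why the sentinel masks the cause, here is a sketch using stdlib error wrapping; errNFTNotExists and transfer are hypothetical stand-ins for nft.ErrNFTNotExists and transferWithNoCheck, and the real code uses cosmossdk.io/errors rather than fmt.Errorf:

```go
package main

import (
	"errors"
	"fmt"
)

// errNFTNotExists stands in for nft.ErrNFTNotExists.
var errNFTNotExists = errors.New("nft does not exist")

// transfer fails for a reason unrelated to the NFT existing.
func transfer() error { return errors.New("class frozen") }

// maskedTransfer mirrors the original code: any failure is rewritten
// as the "not exists" sentinel, hiding the real cause.
func maskedTransfer() error {
	if err := transfer(); err != nil {
		return fmt.Errorf("nft1: %w", errNFTNotExists)
	}
	return nil
}

// preservedTransfer mirrors the suggested fix: the cause survives.
func preservedTransfer() error {
	if err := transfer(); err != nil {
		return fmt.Errorf("nft1: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(maskedTransfer())    // → nft1: nft does not exist (real cause lost)
	fmt.Println(preservedTransfer()) // → nft1: class frozen
}
```

Callers inspecting the masked error (via errors.Is or the message) would misdiagnose a frozen class as a missing NFT; the preserved form keeps the true failure visible.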

Comment on lines +148 to +153
func (k *Keeper) Meter(ctx context.Context) metrics.Meter {
if k.meter == nil {
k.meter = sdk.UnwrapSDKContext(ctx).Meter().SubMeter(types.ModuleName, metrics.Tag("svc", types.ModuleName))
}

return k.meter

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, find the keeper.go file in x/slashing/keeper/
find . -path "*/x/slashing/keeper/keeper.go" -type f

Repository: InjectiveLabs/cosmos-sdk

Length of output: 97


🏁 Script executed:

# Also check for types/context.go
find . -path "*/types/context.go" -type f | head -20

Repository: InjectiveLabs/cosmos-sdk

Length of output: 111


🏁 Script executed:

wc -l ./x/slashing/keeper/keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 101


🏁 Script executed:

# Read keeper.go around the Meter method (lines 140-160)
sed -n '140,160p' ./x/slashing/keeper/keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 599


🏁 Script executed:

# Read types/context.go around lines 363-370 to check Meter() behavior
sed -n '360,375p' ./types/context.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 427


🏁 Script executed:

# Read the beginning of keeper.go to see the Keeper struct definition
head -30 ./x/slashing/keeper/keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 857


🏁 Script executed:

# Find all method definitions in keeper.go to identify receiver types
grep -n "^func (k" ./x/slashing/keeper/keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 986


🏁 Script executed:

# Let me check if these value-receiver methods actually call k.Meter()
grep -n "k\.Meter(" ./x/slashing/keeper/keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 518


🏁 Script executed:

# Read the actual implementation of a few key methods to see usage pattern
sed -n '49,55p' ./x/slashing/keeper/keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 322


🏁 Script executed:

# Check if types/context.go initializes meter by default or shows it can be nil
sed -n '350,380p' ./types/context.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 847


🏁 Script executed:

# Verify that line 150 would panic by checking if there's any noop meter fallback
grep -r "noop.*[Mm]eter\|[Mm]eter.*noop" ./x/slashing/ 2>/dev/null || echo "No noop meter fallback found in slashing module"

Repository: InjectiveLabs/cosmos-sdk

Length of output: 115


🏁 Script executed:

# Check how Context is typically initialized - is Meter ever set by default?
grep -n "NewContext\|NewSDKContext" ./types/context.go | head -5

Repository: InjectiveLabs/cosmos-sdk

Length of output: 182


🏁 Script executed:

# Verify that sdk.UnwrapSDKContext returns the Context and check if it validates meter
grep -A 5 "func UnwrapSDKContext" ./types/context.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 222


Line 150 will panic when sdk.Context.Meter() is uninitialized, and caching a context-scoped meter on the keeper is unsafe.

sdk.UnwrapSDKContext(ctx).Meter().SubMeter(...) will panic if Meter() returns nil. Since Context.Meter() returns the meter field with no initialization guarantee, any non-instrumented request path will crash before business logic executes.

Additionally, caching that SubMeter on the keeper causes two problems: (1) it binds later requests to a stale meter derived from an earlier context, and (2) the value-receiver methods (Logger, AddPubkey, GetPubkey, Slash, SlashWithInfractionReason, Jail, deleteAddrPubkeyRelation) that call this pointer-receiver method cannot persist mutations, so the cache is never actually used consistently.

Derive the sub-meter from the current context on each call with a safe fallback to the noop meter when none is installed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/slashing/keeper/keeper.go` around lines 148 - 153, The Keeper.Meter method
currently calls sdk.UnwrapSDKContext(ctx).Meter().SubMeter(...) and caches the
result on k.meter, which will panic if Context.Meter() is nil and incorrectly
reuses a context-scoped meter; change Meter to derive the sub-meter from the
provided ctx on every call instead of caching it: call
sdk.UnwrapSDKContext(ctx).Meter(), check for nil and fall back to the SDK noop
meter (or equivalent no-op meter) before calling SubMeter(types.ModuleName,
metrics.Tag("svc", types.ModuleName)), and return that sub-meter directly;
remove any use of the k.meter cache so callers of Keeper.Meter (e.g., Logger,
AddPubkey, GetPubkey, Slash, SlashWithInfractionReason, Jail,
deleteAddrPubkeyRelation) always get a safe per-request meter.
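The stateless per-call pattern with a noop fallback can be sketched as follows; the Meter interface and the concrete types here are simplified assumptions for illustration, not the actual metrics API:

```go
package main

import "fmt"

// Meter is a stand-in for the metrics.Meter interface.
type Meter interface{ Name() string }

type realMeter struct{ name string }

func (m realMeter) Name() string { return m.name }

// noopMeter is the safe fallback when the context carries no meter.
type noopMeter struct{}

func (noopMeter) Name() string { return "noop" }

// subMeter derives a module-scoped meter from the context's meter on
// every call, never caching and never panicking on a nil meter —
// the pattern the review recommends for Keeper.Meter.
func subMeter(ctxMeter Meter, module string) Meter {
	if ctxMeter == nil {
		return noopMeter{}
	}
	return realMeter{name: ctxMeter.Name() + "/" + module}
}

func main() {
	fmt.Println(subMeter(nil, "slashing").Name())                     // → noop
	fmt.Println(subMeter(realMeter{name: "root"}, "slashing").Name()) // → root/slashing
}
```

Because the sub-meter is derived per call, non-instrumented request paths degrade to the noop meter instead of crashing, and no context-scoped state ever leaks into the long-lived keeper.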

Comment on lines +15 to +18
func (k Keeper) IterateValidators(ctx context.Context, fn func(index int64, validator types.ValidatorI) (stop bool)) (err error) {
sdkCtx := sdk.UnwrapSDKContext(ctx)
defer k.Meter(ctx).FuncTiming(&sdkCtx, "IterateValidators")(&err)


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, locate the files mentioned
find . -type f -name "alias_functions.go" | grep staking
find . -type f -name "keeper.go" | grep staking

Repository: InjectiveLabs/cosmos-sdk

Length of output: 134


🏁 Script executed:

# Read the alias_functions.go file to see the actual receiver types
cat -n x/staking/keeper/alias_functions.go | head -50

Repository: InjectiveLabs/cosmos-sdk

Length of output: 1926


🏁 Script executed:

# Read the keeper.go file around line 191-197 to see the Meter method
sed -n '180,210p' x/staking/keeper/keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 438


🏁 Script executed:

# Also check for the meter field definition in the Keeper struct
grep -n "meter" x/staking/keeper/keeper.go | head -20

Repository: InjectiveLabs/cosmos-sdk

Length of output: 473


🏁 Script executed:

# Check how many instrumented methods exist in alias_functions.go
grep -n "k.Meter(ctx)" x/staking/keeper/alias_functions.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 640


🏁 Script executed:

# Get a better view of the entire alias_functions.go file to see the scope
wc -l x/staking/keeper/alias_functions.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 107


🏁 Script executed:

# Read the full alias_functions.go file to see all methods
cat -n x/staking/keeper/alias_functions.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 7302


Change value receivers to pointer receivers in these alias methods to enable meter caching.

All 8 instrumented methods in this file use value receivers func (k Keeper), but Meter() is defined on *Keeper with a pointer receiver (line 191 in keeper.go). When value receiver methods call k.Meter(ctx), Go passes a copy of the Keeper to Meter(), so the assignment k.meter = ... only modifies the copy. This means the meter is recreated on every call instead of being cached, defeating the performance optimization.

Change these method signatures to pointer receivers: func (k *Keeper) for IterateValidators, IterateBondedValidatorsByPower, IterateLastValidators, Validator, ValidatorByConsAddr, Delegation, IterateDelegations, and GetAllSDKDelegations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/staking/keeper/alias_functions.go` around lines 15 - 18, Change the
receiver of the instrumented alias methods from value to pointer so Meter() uses
the same Keeper instance and can cache the meter: update the method signatures
for IterateValidators, IterateBondedValidatorsByPower, IterateLastValidators,
Validator, ValidatorByConsAddr, Delegation, IterateDelegations, and
GetAllSDKDelegations from "func (k Keeper) ..." to "func (k *Keeper) ...";
ensure any internal uses of k remain correct and no additional code changes are
required besides the receiver type so calls to k.Meter(ctx) will mutate the
actual Keeper's meter field rather than a copy.
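For context, the defer k.Meter(ctx).FuncTiming(...)(&err) idiom used throughout these methods relies on Go evaluating the deferred call's function value immediately (starting the timer) while running the returned closure at return time, when the named err result is settled. A minimal sketch, with funcTiming as a hypothetical stand-in for the metrics API:

```go
package main

import (
	"fmt"
	"time"
)

// funcTiming starts a timer now and returns a closure that records
// the elapsed time and final error status when the function returns.
func funcTiming(name string) func(err *error) {
	start := time.Now()
	return func(err *error) {
		status := "ok"
		if err != nil && *err != nil {
			status = "err"
		}
		fmt.Printf("%s took %s status=%s\n",
			name, time.Since(start).Round(time.Millisecond), status)
	}
}

func iterateValidators() (err error) {
	// funcTiming("IterateValidators") runs immediately; the closure it
	// returns runs at return time and sees the final value of err.
	defer funcTiming("IterateValidators")(&err)
	return nil
}

func main() {
	_ = iterateValidators()
}
```

Passing &err (a pointer to the named result) is what lets the timing closure observe the error the method actually returned, including errors assigned after the defer statement.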

Comment on lines 947 to 955
 func (k Keeper) DequeueAllMatureRedelegationQueue(ctx context.Context, currTime time.Time) (matureRedelegations []types.DVVTriplet, err error) {
-	store := k.storeService.OpenKVStore(ctx)
+	sdkCtx := sdk.UnwrapSDKContext(ctx)
+	defer k.Meter(ctx).FuncTiming(&sdkCtx, "DequeueAllMatureRedelegationQueue")(&err)
+
+	store := k.storeService.OpenKVStore(sdkCtx)
+
 	// gets an iterator for all timeslices from time 0 until the current Blockheader time
-	sdkCtx := sdk.UnwrapSDKContext(ctx)
-	redelegationTimesliceIterator, err := k.RedelegationQueueIterator(ctx, sdkCtx.HeaderInfo().Time)
+	redelegationTimesliceIterator, err := k.RedelegationQueueIterator(sdkCtx, sdkCtx.HeaderInfo().Time)
 	if err != nil {

⚠️ Potential issue | 🟠 Major

Use the currTime parameter instead of the block header time.

This call now ignores the method argument and always dequeues up to sdkCtx.HeaderInfo().Time. That changes the API behavior for callers that intentionally pass a different cutoff time, including tests/simulations.

Suggested fix
-	redelegationTimesliceIterator, err := k.RedelegationQueueIterator(sdkCtx, sdkCtx.HeaderInfo().Time)
+	redelegationTimesliceIterator, err := k.RedelegationQueueIterator(sdkCtx, currTime)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 func (k Keeper) DequeueAllMatureRedelegationQueue(ctx context.Context, currTime time.Time) (matureRedelegations []types.DVVTriplet, err error) {
 	sdkCtx := sdk.UnwrapSDKContext(ctx)
 	defer k.Meter(ctx).FuncTiming(&sdkCtx, "DequeueAllMatureRedelegationQueue")(&err)
 	store := k.storeService.OpenKVStore(sdkCtx)
 	// gets an iterator for all timeslices from time 0 until the current Blockheader time
-	redelegationTimesliceIterator, err := k.RedelegationQueueIterator(sdkCtx, sdkCtx.HeaderInfo().Time)
+	redelegationTimesliceIterator, err := k.RedelegationQueueIterator(sdkCtx, currTime)
 	if err != nil {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/staking/keeper/delegation.go` around lines 947 - 955,
DequeueAllMatureRedelegationQueue currently ignores the currTime parameter by
calling RedelegationQueueIterator with sdkCtx.HeaderInfo().Time; update the call
to use the provided currTime instead so the function dequeues up to the
caller-specified cutoff. Locate the DequeueAllMatureRedelegationQueue function
and replace the argument to RedelegationQueueIterator (and any related time
usages in that function) from sdkCtx.HeaderInfo().Time to currTime, preserving
existing error handling and defer timing logic.


// Keeper of the x/staking store
type Keeper struct {
meter metrics.Meter

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify that newly instrumented staking methods are value receivers while Meter is a pointer receiver.
rg -n 'func \(k Keeper\) (Logger|GetLastTotalPower|SetLastTotalPower|SetValidatorUpdates|GetValidatorUpdates|GetBondedPool|GetNotBondedPool|bondedTokensToNotBonded|notBondedTokensToBonded|burnBondedTokens|burnNotBondedTokens|TotalBondedTokens|StakingTokenSupply|BondedRatio|SetDelegationTransferReceiver|DeleteDelegationTransferReceiver|IsAllowedDelegationTransferReceiver|GetAllAllowedDelegationTransferReceivers|delegationTransferReceiversStore)\(' x/staking/keeper/keeper.go x/staking/keeper/pool.go x/staking/keeper/delegation_receiver.go
rg -n 'func \(k \*Keeper\) Meter\(' x/staking/keeper/keeper.go

Repository: InjectiveLabs/cosmos-sdk

Length of output: 2483


Meter caching is ineffective due to receiver type mismatch.

All newly instrumented staking methods (Logger, GetLastTotalPower, SetLastTotalPower, SetValidatorUpdates, GetValidatorUpdates in keeper.go; GetBondedPool, GetNotBondedPool, bondedTokensToNotBonded, notBondedTokensToBonded, burnBondedTokens, burnNotBondedTokens, TotalBondedTokens, StakingTokenSupply, BondedRatio in pool.go; SetDelegationTransferReceiver, DeleteDelegationTransferReceiver, IsAllowedDelegationTransferReceiver, GetAllAllowedDelegationTransferReceivers, delegationTransferReceiversStore in delegation_receiver.go) use value receivers while Meter() is a pointer receiver. When value-receiver methods call k.Meter(ctx), the mutations to meter only affect the value copy and are lost upon return. Either convert these methods to pointer receivers or remove the caching logic.

Also applies to: 191-197

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/staking/keeper/keeper.go` at line 28, The keeper's meter caching is broken
because Meter() is defined with a pointer receiver while many staking methods
(Logger, GetLastTotalPower, SetLastTotalPower, SetValidatorUpdates,
GetValidatorUpdates and the other listed functions across pool.go and
delegation_receiver.go) are defined with value receivers, so assignments to
k.meter mutate only the copy; fix by making these methods use pointer receivers
(change their receiver from (k Keeper) to (k *Keeper)) so k.Meter(ctx) and any
caching (k.meter = ...) update the real struct, or alternatively remove the
caching logic entirely and always call k.Meter(ctx) without storing to k.meter —
apply consistently to the listed methods (Logger, GetLastTotalPower,
SetLastTotalPower, SetValidatorUpdates, GetValidatorUpdates, GetBondedPool,
GetNotBondedPool, bondedTokensToNotBonded, notBondedTokensToBonded,
burnBondedTokens, burnNotBondedTokens, TotalBondedTokens, StakingTokenSupply,
BondedRatio, SetDelegationTransferReceiver, DeleteDelegationTransferReceiver,
IsAllowedDelegationTransferReceiver, GetAllAllowedDelegationTransferReceivers,
delegationTransferReceiversStore).

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
x/feegrant/go.mod (1)

25-25: ⚠️ Potential issue | 🔴 Critical

Block merge: vulnerable grpc and otel/sdk versions are still pinned.

Line 25 keeps google.golang.org/grpc v1.75.0 and Line 162 keeps go.opentelemetry.io/otel/sdk v1.38.0; both are flagged by Dependency Review in this PR and should be bumped to patched releases before merge.

🔧 Suggested dependency bump
-	google.golang.org/grpc v1.75.0
+	google.golang.org/grpc v1.79.3
...
-	go.opentelemetry.io/otel/sdk v1.38.0 // indirect
+	go.opentelemetry.io/otel/sdk v1.40.0 // indirect

Also keep all go.opentelemetry.io/otel* modules on the same version line to avoid mixed OTel API/SDK graph resolution.

Also applies to: 162-162

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/feegrant/go.mod` at line 25, Update the pinned vulnerable dependencies in
go.mod: bump google.golang.org/grpc (currently v1.75.0) to the patched release
and update go.opentelemetry.io/otel/sdk (currently v1.38.0) to a patched
release; ensure all go.opentelemetry.io/otel* module lines in go.mod are set to
the same version line to avoid mixed OTel API/SDK resolution (search for the
grpc and otel/sdk entries in go.mod to locate and update them).
x/evidence/go.mod (1)

26-26: ⚠️ Potential issue | 🔴 Critical

Block merge: dependency review failures remain in this module.

Line 26 (grpc v1.75.0) and Line 160 (otel/sdk v1.38.0) are still on vulnerable versions reported by the pipeline; upgrade to patched releases before merging.

Also applies to: 160-160

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/evidence/go.mod` at line 26, The go.mod currently pins vulnerable versions
of google.golang.org/grpc and go.opentelemetry.io/otel/sdk; update those module
versions to the patched releases reported by the pipeline (replace the entries
for google.golang.org/grpc and go.opentelemetry.io/otel/sdk), then run `go get
google.golang.org/grpc@<patched-version>` and `go get
go.opentelemetry.io/otel/sdk@<patched-version>` (or edit go.mod and run `go mod
tidy`) to update the lockfile and verify the build/CI passes before merging.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@x/upgrade/go.mod`:
- Line 29: The go.mod currently pins vulnerable versions of gRPC and
OpenTelemetry: update the module versions so google.golang.org/grpc is bumped to
v1.79.3 or later and go.opentelemetry.io/otel/sdk is bumped to v1.40.0 or later;
locate the entries for google.golang.org/grpc and go.opentelemetry.io/otel/sdk
in go.mod (the lines containing those module paths) and replace their versions
accordingly, then run dependency tidy (e.g., go get/update and go mod tidy) and
run tests to ensure compatibility.

---

Duplicate comments:
In `@x/evidence/go.mod`:
- Line 26: The go.mod currently pins vulnerable versions of
google.golang.org/grpc and go.opentelemetry.io/otel/sdk; update those module
versions to the patched releases reported by the pipeline (replace the entries
for google.golang.org/grpc and go.opentelemetry.io/otel/sdk), then run `go get
google.golang.org/grpc@<patched-version>` and `go get
go.opentelemetry.io/otel/sdk@<patched-version>` (or edit go.mod and run `go mod
tidy`) to update the lockfile and verify the build/CI passes before merging.

In `@x/feegrant/go.mod`:
- Line 25: Update the pinned vulnerable dependencies in go.mod: bump
google.golang.org/grpc (currently v1.75.0) to the patched release and update
go.opentelemetry.io/otel/sdk (currently v1.38.0) to a patched release; ensure
all go.opentelemetry.io/otel* module lines in go.mod are set to the same version
line to avoid mixed OTel API/SDK resolution (search for the grpc and otel/sdk
entries in go.mod to locate and update them).
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0ce4431f-e545-4cf1-8fa3-959c5dafcb0d

📥 Commits

Reviewing files that changed from the base of the PR and between e80adca and b3cf28d.

⛔ Files ignored due to path filters (5)
  • x/circuit/go.sum is excluded by !**/*.sum
  • x/evidence/go.sum is excluded by !**/*.sum
  • x/feegrant/go.sum is excluded by !**/*.sum
  • x/nft/go.sum is excluded by !**/*.sum
  • x/upgrade/go.sum is excluded by !**/*.sum
📒 Files selected for processing (5)
  • x/circuit/go.mod
  • x/evidence/go.mod
  • x/feegrant/go.mod
  • x/nft/go.mod
  • x/upgrade/go.mod
✅ Files skipped from review due to trivial changes (1)
  • x/nft/go.mod
🚧 Files skipped from review as they are similar to previous changes (1)
  • x/circuit/go.mod

Comment thread x/upgrade/go.mod
google.golang.org/protobuf v1.36.5
github.com/stretchr/testify v1.11.1
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5
google.golang.org/grpc v1.75.0

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, verify the file exists and check its size
wc -l x/upgrade/go.mod

Repository: InjectiveLabs/cosmos-sdk

Length of output: 88


🏁 Script executed:

# Read the specific lines mentioned in the review
sed -n '29p;177p' x/upgrade/go.mod

Repository: InjectiveLabs/cosmos-sdk

Length of output: 149


🏁 Script executed:

# Get broader context around those lines to verify the dependencies
sed -n '25,35p' x/upgrade/go.mod
sed -n '173,180p' x/upgrade/go.mod

Repository: InjectiveLabs/cosmos-sdk

Length of output: 1042


🌐 Web query:

google.golang.org/grpc v1.75.0 security vulnerability CVE

💡 Result:

Yes, google.golang.org/grpc v1.75.0 is affected by a critical security vulnerability identified as CVE-2026-33186 (GHSA-p77j-4mvh-x3m3). This is an authorization bypass vulnerability due to improper input validation of the HTTP/2 ':path' pseudo-header in the gRPC-Go server. Key details: - Affected versions: All versions prior to v1.79.3, explicitly including v1.75.0 (released August 2025). - Severity: Critical, CVSS 3.1 score 9.1 (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N). - Description: The server accepts requests with ':path' omitting the leading slash (e.g., 'Service/Method' instead of '/Service/Method'). This bypasses path-based authorization interceptors (e.g., grpc/authz) that expect canonical paths starting with '/', allowing unauthorized access if deny rules target canonical paths with a fallback allow. - Impact: Affects gRPC-Go servers using path-based auth interceptors with specific deny rules and fallback allow. - Exploitation: Network-accessible; attacker sends malformed HTTP/2 frames. - Patch: Upgrade to v1.79.3 or later (released around March 2026), which rejects non-slash-starting ':path' with codes.Unimplemented. - Mitigation (pre-upgrade): Custom interceptor to validate/normalize ':path'; reverse proxy to enforce leading slash; avoid fallback allows in auth policies. - Official sources: gRPC-Go GitHub security advisory, NVD entry. No other CVEs specifically tied to exactly v1.75.0 were found; older issues like HTTP/2 rapid reset (patched earlier) do not affect this version in the same way.

Citations:


🌐 Web query:

go.opentelemetry.io/otel/sdk v1.38.0 security vulnerability CVE

💡 Result:

Yes, go.opentelemetry.io/otel/sdk v1.38.0 has a known security vulnerability identified as GO-2026-4394 (also CVE-2026-24051, GHSA-9h8m-3fm2-qjrq). It is an arbitrary code execution vulnerability due to PATH hijacking (untrusted search path) in the resource detection code, specifically on macOS/Darwin systems where the ioreg command is executed without a secure PATH. The vulnerability affects versions from v1.20.0 to before v1.40.0. It was published around February 2026 and fixed in v1.40.0. Multiple projects (e.g., Helm, etcd, Loki) upgraded from v1.38.0 to v1.40.0 to remediate it. Upgrade to v1.40.0 or later to fix.

Citations:


Upgrade vulnerable gRPC and OpenTelemetry SDK dependencies.

File x/upgrade/go.mod contains two critical vulnerabilities:

  • Line 29: google.golang.org/grpc v1.75.0 is affected by CVE-2026-33186 (CVSS 9.1), an authorization bypass due to improper HTTP/2 ':path' validation. Upgrade to v1.79.3 or later.
  • Line 177: go.opentelemetry.io/otel/sdk v1.38.0 is affected by CVE-2026-24051, an arbitrary code execution vulnerability via PATH hijacking. Upgrade to v1.40.0 or later.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@x/upgrade/go.mod` at line 29, The go.mod currently pins vulnerable versions
of gRPC and OpenTelemetry: update the module versions so google.golang.org/grpc
is bumped to v1.79.3 or later and go.opentelemetry.io/otel/sdk is bumped to
v1.40.0 or later; locate the entries for google.golang.org/grpc and
go.opentelemetry.io/otel/sdk in go.mod (the lines containing those module paths)
and replace their versions accordingly, then run dependency tidy (e.g., go
get/update and go mod tidy) and run tests to ensure compatibility.

@maxim-inj maxim-inj merged commit 74203f1 into v0.50.x-inj Apr 11, 2026
70 of 81 checks passed
@maxim-inj maxim-inj deleted the c-930/trace-everything branch April 11, 2026 03:18

3 participants