Summary
vmcp (and potentially other components) log audit events at `"level":"INFO+2"` — a Go `log/slog` artifact from using `slog.LevelInfo + 2` as the verbosity level rather than a standard named level. This breaks level detection in all common log aggregation systems.
Observed behaviour
Audit event log lines contain `"level":"INFO+2"`:
```json
{"time":"2026-03-20T16:33:03.481912959Z","level":"INFO+2","msg":"audit_event","audit_id":"...","type":"mcp_tools_list",...}
```
Standard log levels recognised by log aggregators (Loki `detected_level`, Elasticsearch, Splunk, etc.) are: `trace`, `debug`, `info`, `warn`, `warning`, `error`, `fatal`, `critical`. The value `INFO+2` matches none of these, so audit events appear as `unknown` level in Grafana's Explore Logs view.
This is especially problematic because audit events are the primary log type users want to filter and alert on.
Impact
- Audit events show as `detected_level=unknown` in Loki/Grafana
- Users cannot write level-based LogQL filters like `{service="vmcp"} | level = "audit"` or use level facets in Explore Logs
- Any log pipeline with a level-based filter (e.g. "only ship WARN and above") will silently drop audit events
Suggested fix
Use a standard level or a properly named custom level for audit events. Options:
- Log at `INFO` — simplest, audit events are informational by nature
- Define a named custom level, e.g. `AUDIT` at a fixed numeric value, and rename it in the handler via `slog.HandlerOptions.ReplaceAttr`. (Note `slog.Level` is a type defined in the `slog` package, so its `String()` method cannot be overridden from outside; `ReplaceAttr` is the supported hook for renaming levels.)
Related: the plain-text startup line issue is tracked in #4295.