Operator version: v0.26.1
Related but not the same: #3111
The CRD and documentation suggest that `bearerToken` in an `MCPExternalAuthConfig` should be supported on Kubernetes: it uses the same `name` and `key` that a secret mounted on an `MCPServer` would use.
However I get the following logs from the proxyRunner sitting in front of the MCP server:
```
{"time":"2026-05-06T12:17:41Z","level":"INFO","msg":"Successfully loaded configuration from /etc/runconfig/runconfig.json"}
{"time":"2026-05-06T12:17:41Z","level":"INFO","msg":"auto-discovered and loaded configuration from runconfig.json file"}
Error: error determining secrets provider type: secrets provider not configured. Please run 'thv secret setup' to configure a secrets provider first
Usage:
  thv-proxyrunner run [flags] SERVER_OR_IMAGE_OR_PROTOCOL [-- ARGS...]
Flags:
  -h, --help   help for run
Global Flags:
      --debug   Enable debug mode
{"time":"2026-05-06T12:17:41Z","level":"ERROR","msg":"error executing command","error":"error determining secrets provider type: secrets provider not configured. Please run 'thv secret setup' to configure a secrets provider first"}
```
Bearer-token support is required in our environment: every request without a valid bearer token emits a log line when the proxyRunner performs its health check (unless there is a way to configure that as well?). This makes telemetry from this server difficult to use, because these errors pollute the metrics:
```
mcp WARN MCP Server - Missing or empty token for URI '/'
mcp WARN MCP Server - Missing or empty token for URI '/mcp'
```
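As a possible workaround for the health-check noise, if the probe hitting the proxy is a standard Kubernetes `httpGet` probe, it can be configured to send the token via `httpHeaders`. This is only a sketch: the path, port, and the fact that the probe is defined on the proxy pod are assumptions on my part, not taken from the operator docs.

```yaml
# Hypothetical probe config: sends a bearer token with each health check.
# Path and port are assumptions; adjust to match the proxyRunner's actual endpoint.
livenessProbe:
  httpGet:
    path: /health
    port: 8080
    httpHeaders:
      - name: Authorization
        value: "Bearer <token>"  # must be a static value; probes cannot read Secrets directly
```

Note the limitation: Kubernetes probe headers cannot reference a Secret, so the token would have to be embedded statically, which may be unacceptable.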
MCPExternalAuthConfig:
```yaml
apiVersion: toolhive.stacklok.dev/v1beta1
kind: MCPExternalAuthConfig
metadata:
  name: external-auth-bearerToken
  namespace: <namespace>
spec:
  type: bearerToken
  bearerToken:
    tokenSecretRef:
      name: <secret_name>
      key: token
```
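For reference, the Secret that `tokenSecretRef` points at would look roughly like the following. This is a sketch, not taken from the docs; the name, namespace, and token value are placeholders matching the config above.

```yaml
# Hypothetical Secret backing the tokenSecretRef above.
apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: <namespace>
type: Opaque
stringData:
  token: <bearer_token_value>  # stored under the key "token", as referenced by the CR
```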
Update:
I changed the type to `headerInjection`, but I am unsure whether it made any difference: the log lines still appear, even though the configuration appears to have applied correctly.