Add monitoring in podman deployment #213
base: master
Conversation
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
…chart. Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
…s in Helm chart. Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
📝 Walkthrough
Adds a monitoring Helm subchart and chart dependency, implements a modular OpenTelemetry bootstrap (setupOTelSDK) with tracer/meter/logger providers and a metrics handler, injects conditional OTEL env vars and metrics ports into the Helm charts, adds a slog MultiHandler, and updates SPDX years and metadata across files.
Sequence Diagram(s)
sequenceDiagram
participant App as main
participant SDK as setupOTelSDK
participant Resource as newResource
participant Tracer as newTracerProvider
participant Meter as newMeterProvider
participant Logger as newLoggerProvider
participant Exporters as Exporter builders
participant MetricsHTTP as metrics http.Handler
participant Shutdown as ShutdownFunc
App->>SDK: setupOTelSDK(ctx)
SDK->>Resource: newResource(ctx)
Resource-->>SDK: resource
SDK->>Tracer: newTracerProvider(ctx, resource)
Tracer->>Exporters: build trace exporter(s)
Exporters-->>Tracer: span exporter(s)
SDK->>Meter: newMeterProvider(ctx, resource)
Meter->>Exporters: build metric exporter/reader
Exporters-->>Meter: metric handler / http.Handler
SDK->>Logger: newLoggerProvider(ctx, resource)
Logger->>Exporters: build log exporter(s)/bridge
Exporters-->>Logger: log exporter(s)
SDK-->>App: (shutdownFunc, MetricsHTTP, nil)
App->>MetricsHTTP: serveMetrics (goroutine)
App->>Shutdown: defer shutdownFunc
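The Go side of this flow is only summarized above; as a rough illustration, a minimal sketch of how such a setupOTelSDK could be structured is shown below. It assumes an OTLP gRPC trace exporter and a Prometheus reader backing the metrics handler; the names, the semconv version, and the omission of the logger provider are simplifications, not the PR's actual code.

```go
package main

import (
	"context"
	"errors"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	otelprom "go.opentelemetry.io/otel/exporters/prometheus"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)

// setupOTelSDK builds the resource, tracer and meter providers and returns a
// combined shutdown function plus an http.Handler that serves the metrics.
func setupOTelSDK(ctx context.Context) (func(context.Context) error, http.Handler, error) {
	res, err := resource.New(ctx, resource.WithAttributes(semconv.ServiceName("sonicweb")))
	if err != nil {
		return nil, nil, err
	}

	// Traces: OTLP over gRPC; the endpoint comes from OTEL_EXPORTER_OTLP_ENDPOINT,
	// which the Helm chart injects when monitoring is enabled.
	traceExporter, err := otlptracegrpc.New(ctx)
	if err != nil {
		return nil, nil, err
	}
	tracerProvider := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(traceExporter),
		sdktrace.WithResource(res),
	)
	otel.SetTracerProvider(tracerProvider)

	// Metrics: a Prometheus reader whose registry backs the returned handler.
	registry := prometheus.NewRegistry()
	promExporter, err := otelprom.New(otelprom.WithRegisterer(registry))
	if err != nil {
		return nil, nil, err
	}
	meterProvider := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(promExporter),
		sdkmetric.WithResource(res),
	)
	otel.SetMeterProvider(meterProvider)

	// Flush and stop both providers when the caller shuts down.
	shutdown := func(ctx context.Context) error {
		return errors.Join(tracerProvider.Shutdown(ctx), meterProvider.Shutdown(ctx))
	}

	return shutdown, promhttp.HandlerFor(registry, promhttp.HandlerOpts{}), nil
}
```

A possible wiring in main, mirroring the diagram; the :8081 listener matches the chart's metricsPort default and is an assumption about the deployment, not confirmed code:

```go
func main() {
	ctx := context.Background()

	shutdown, metricsHandler, err := setupOTelSDK(ctx)
	if err != nil {
		panic(err)
	}
	defer func() { _ = shutdown(context.Background()) }()

	// Serve metrics on the dedicated port in a goroutine, as in the diagram.
	mux := http.NewServeMux()
	mux.Handle("/metrics", metricsHandler)
	go func() { _ = http.ListenAndServe(":8081", mux) }()

	// ... start the regular application server on the main port ...
	select {}
}
```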
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
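The walkthrough above also mentions a slog MultiHandler, which is likewise not shown in this summary. A minimal sketch of such a fan-out handler, under the assumption that it simply forwards records to several wrapped slog.Handler instances (for example a console handler plus an OpenTelemetry log bridge handler); type and method details are illustrative:

```go
package logging

import (
	"context"
	"errors"
	"log/slog"
)

// MultiHandler fans every log record out to all wrapped slog handlers.
type MultiHandler struct {
	handlers []slog.Handler
}

// NewMultiHandler wraps the given handlers into a single slog.Handler.
func NewMultiHandler(handlers ...slog.Handler) *MultiHandler {
	return &MultiHandler{handlers: handlers}
}

// Enabled reports true if at least one wrapped handler accepts the level.
func (m *MultiHandler) Enabled(ctx context.Context, level slog.Level) bool {
	for _, h := range m.handlers {
		if h.Enabled(ctx, level) {
			return true
		}
	}
	return false
}

// Handle forwards the record to every handler that accepts its level,
// collecting errors instead of stopping at the first failure.
func (m *MultiHandler) Handle(ctx context.Context, record slog.Record) error {
	var errs []error
	for _, h := range m.handlers {
		if h.Enabled(ctx, record.Level) {
			errs = append(errs, h.Handle(ctx, record.Clone()))
		}
	}
	return errors.Join(errs...)
}

// WithAttrs derives per-branch handlers so attribute state stays independent.
func (m *MultiHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	next := make([]slog.Handler, len(m.handlers))
	for i, h := range m.handlers {
		next[i] = h.WithAttrs(attrs)
	}
	return &MultiHandler{handlers: next}
}

// WithGroup derives per-branch handlers carrying the group name.
func (m *MultiHandler) WithGroup(name string) slog.Handler {
	next := make([]slog.Handler, len(m.handlers))
	for i, h := range m.handlers {
		next[i] = h.WithGroup(name)
	}
	return &MultiHandler{handlers: next}
}
```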
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Actionable comments posted: 5
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (15)
- CHANGELOG.md (1 hunks)
- go.mod (2 hunks)
- helm/.helmignore (1 hunks)
- helm/.helmignore.license (0 hunks)
- helm/templates/NOTES.txt (1 hunks)
- helm/templates/NOTES.txt.license (0 hunks)
- helm/templates/_helpers.tpl (3 hunks)
- helm/templates/deployment.yaml (2 hunks)
- helm/templates/hpa.yaml (1 hunks)
- helm/templates/ingress.yaml (1 hunks)
- helm/templates/monitoring.yaml (1 hunks)
- helm/templates/service.yaml (2 hunks)
- helm/templates/serviceaccount.yaml (1 hunks)
- helm/templates/tests/test-connection.yaml (1 hunks)
- helm/values.yaml (4 hunks)
💤 Files with no reviewable changes (2)
- helm/templates/NOTES.txt.license
- helm/.helmignore.license
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-21T14:10:51.227Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 193
File: governance.md:1-4
Timestamp: 2025-09-21T14:10:51.227Z
Learning: In the SonicWeb repository, AlphaOne1 prefers to maintain original author attribution in SPDX headers even when it differs from the standardized format used elsewhere in the repo, particularly for governance and foundational documents.
Applied to files:
helm/.helmignore
🪛 YAMLlint (1.37.1)
helm/templates/service.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/templates/monitoring.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/templates/ingress.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/templates/tests/test-connection.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/templates/hpa.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/templates/serviceaccount.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🔇 Additional comments (18)
CHANGELOG.md (1)
4-8: Changelog entries align with PR scope. The new Release 1.6.2 entries accurately reflect the monitoring feature addition and dependency updates made in this PR. Format and structure are consistent with prior releases.
helm/.helmignore (1)
1-2: SPDX headers correctly added. Licensing metadata is properly formatted and positioned. No functional changes to ignore patterns.
helm/templates/NOTES.txt (1)
1-4: Template-block header conversion is idiomatic Helm syntax. The SPDX header conversion from plain comments to {{- /* ... */ -}} is the correct approach for Helm template files. This syntax is valid and will render properly. The YAMLlint error about line 1 is a false positive due to pre-template-processing YAML validation.
helm/templates/service.yaml (2)
1-4: Template-block header conversion is valid. Same idiomatic pattern as other Helm templates. YAMLlint error is a false positive.
19-22: No issues found - metricsPort is properly defined and consistently configured. The metricsPort value is defined in helm/values.yaml (line 72: 8081) and correctly referenced across all templates:
- Service exposes it as a port (service.yaml:19)
- Deployment exposes it as a containerPort (deployment.yaml:49)
- Monitoring scrapes the same port (monitoring.yaml:73)
All configurations are consistent and properly aligned.
helm/templates/tests/test-connection.yaml (1)
1-4: Template header conversion is consistent across files. Valid Helm template block syntax. YAMLlint error is a false positive.
helm/templates/ingress.yaml (1)
1-4: Header conversion is consistent and valid. Template block syntax is idiomatic. YAMLlint false positive at line 1 should be ignored.
helm/templates/deployment.yaml (2)
1-4: Template header conversion is valid. Idiomatic Helm template block syntax. YAMLlint false positive should be ignored.
48-50: Metrics port configuration is consistent with service template. Port name, value reference, and configuration align with the metrics port added to service.yaml. Enables dedicated metrics port for monitoring stack integration.
go.mod (1)
46-46: Updated indirect dependencies verified as secure. Security verification complete via public vulnerability databases:
- prometheus/common v0.67.2: No public CVE or security advisory found; release includes panic fix and linting cleanup
- prometheus/procfs v0.19.2: No publicly reported CVEs or known vulnerabilities
- google.golang.org/genproto v0.0.0-20251029180050-ab9386a59fda: No public CVE or vendor security advisory for this pseudo-version
The indirect dependency updates are safe to merge.
helm/values.yaml (3)
11-15: LGTM! Well-structured feature flags. The addition of podmanKube and includeMonitoring flags with opt-in defaults (false) is a good approach for introducing optional functionality.
91-101: LGTM! Reasonable resource requests. The resource requests (100m CPU, 20Mi memory) are appropriate for a lightweight web service, and the detailed comments provide good guidance for users.
69-72: Document breaking port change in release notes. The change from service.port: 80 to service.port: 8080 is indeed a breaking change for existing Helm deployments. Verify that this change is documented in the CHANGELOG or a migration guide, and ensure users upgrading from prior versions are notified of the port modification so they can update their ingress configurations, load balancers, or networking rules accordingly.
helm/templates/hpa.yaml (1)
1-4: LGTM! Consistent header format. The SPDX header has been correctly converted to Helm template comment syntax, consistent with other templates in this PR. This prevents the header from appearing in rendered manifests.
Note: The YAMLlint syntax error is a false positive; this is valid Helm template syntax.
helm/templates/_helpers.tpl (2)
22-30: LGTM! Well-implemented podmanKube support. The conditional logic correctly overrides the release name to "SonicWeb" when podmanKube is enabled, ensuring consistent naming for Podman deployments.
58-63: LGTM! Consistent instance labeling. The selector labels correctly apply the same conditional logic, ensuring that labels match the naming convention when podmanKube is enabled.
helm/templates/monitoring.yaml (2)
39-48: LGTM! Prometheus configuration follows best practices. The Prometheus deployment uses a pinned version (v2.55.0), appropriate retention settings (7d), and proper volume mounts for configuration and storage.
66-73: LGTM! Proper Prometheus scrape configuration. The Prometheus configuration correctly references the SonicWeb service using the template helper for the fullname and the metrics port from values.
helm/templates/monitoring.yaml
Outdated
image: busybox:latest
command:
  - sh
  - -c
  - |
    set -e
    # Define IDs and revisions
    # Example: Go dashboard
    wget -qO - https://grafana.com/api/dashboards/13240/revisions/2/download \
      | sed 's/"${DS_PROMETHEUS}"/"prometheus"/g' \
      > /dashboards/promhttp-go-stats.json
volumeMounts:
  - name: grafana-dashboards
    mountPath: /dashboards
Use specific busybox version and consider alternative for dashboard provisioning.
Two concerns:
- Using busybox:latest should be replaced with a specific version.
- Downloading dashboards from grafana.com during init can fail due to network issues or API changes, causing pod startup failures.
Apply this diff to fix the image version:
- image: busybox:latest
+ image: busybox:1.37.0

Consider these alternatives for dashboard provisioning:
- Pre-package the dashboard JSON in the ConfigMap
- Make dashboard provisioning optional or best-effort (don't fail on download errors)
- Use a more robust init container with curl and error handling
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
image: busybox:1.37.0
command:
  - sh
  - -c
  - |
    set -e
    # Define IDs and revisions
    # Example: Go dashboard
    wget -qO - https://grafana.com/api/dashboards/13240/revisions/2/download \
      | sed 's/"${DS_PROMETHEUS}"/"prometheus"/g' \
      > /dashboards/promhttp-go-stats.json
volumeMounts:
  - name: grafana-dashboards
    mountPath: /dashboards
🤖 Prompt for AI Agents
In helm/templates/monitoring.yaml around lines 120-133, the init container uses
image: busybox:latest and directly downloads dashboards which can cause startup
failures; change the image to a specific BusyBox tag (e.g., busybox:1.36.1 or
another vetted version) and avoid relying on a remote download as a blocking
step—either embed the dashboard JSON into a ConfigMap and mount it into
/dashboards, or make the download best-effort (wrap the wget in error-tolerant
logic so the container exits successfully on failure), or replace the init
container with a small image that has curl and retry/backoff/error handling and
exits cleanly if the download fails; implement one of these options and update
the image tag accordingly.
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
…ring.enabled` field. Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.43.0 to 0.44.0. - [Commits](golang/crypto@v0.43.0...v0.44.0) --- updated-dependencies: - dependency-name: golang.org/x/crypto dependency-version: 0.44.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com>
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 4.31.2 to 4.31.3. - [Release notes](https://github.com/github/codeql-action/releases) - [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md) - [Commits](github/codeql-action@0499de3...014f16e) --- updated-dependencies: - dependency-name: github/codeql-action dependency-version: 4.31.3 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com>
Bumps [actions/dependency-review-action](https://github.com/actions/dependency-review-action) from 4.8.1 to 4.8.2. - [Release notes](https://github.com/actions/dependency-review-action/releases) - [Commits](actions/dependency-review-action@40c09b7...3c4e3dc) --- updated-dependencies: - dependency-name: actions/dependency-review-action dependency-version: 4.8.2 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com>
Bumps [actions/checkout](https://github.com/actions/checkout) from 5.0.0 to 5.0.1. - [Release notes](https://github.com/actions/checkout/releases) - [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md) - [Commits](actions/checkout@08c6903...93cb6ef) --- updated-dependencies: - dependency-name: actions/checkout dependency-version: 5.0.1 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com>
Bumps ubuntu from `66460d5` to `c35e29c`. --- updated-dependencies: - dependency-name: ubuntu dependency-version: latest dependency-type: direct:production ... Signed-off-by: dependabot[bot] <support@github.com>
Bumps the go_modules group with 1 update in the / directory: [golang.org/x/crypto](https://github.com/golang/crypto). Updates `golang.org/x/crypto` from 0.44.0 to 0.45.0 - [Commits](golang/crypto@v0.44.0...v0.45.0) --- updated-dependencies: - dependency-name: golang.org/x/crypto dependency-version: 0.45.0 dependency-type: direct:production dependency-group: go_modules ... Signed-off-by: dependabot[bot] <support@github.com>
1ef09fc to f554caf
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 6.0.0 to 6.1.0. - [Release notes](https://github.com/actions/setup-go/releases) - [Commits](actions/setup-go@4469467...4dc6199) --- updated-dependencies: - dependency-name: actions/setup-go dependency-version: 6.1.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com>
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 4.31.3 to 4.31.5. - [Release notes](https://github.com/github/codeql-action/releases) - [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md) - [Commits](github/codeql-action@014f16e...fdbfb4d) --- updated-dependencies: - dependency-name: github/codeql-action dependency-version: 4.31.5 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com>
Bumps [actions/checkout](https://github.com/actions/checkout) from 5.0.1 to 6.0.0. - [Release notes](https://github.com/actions/checkout/releases) - [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md) - [Commits](actions/checkout@93cb6ef...1af3b93) --- updated-dependencies: - dependency-name: actions/checkout dependency-version: 6.0.0 dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
…xternalize ConfigMap, refine Grafana and VictoriaMetrics configurations, and add NetworkPolicy for enhanced access control. Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Actionable comments posted: 9
🤖 Fix all issues with AI agents
In @helm/charts/monitoring/templates/alloy_configmap.yaml:
- Around line 1-5: The manifest for the ConfigMap uses an incorrect capitalized
key "Kind:" which makes the object invalid; update the YAML to use the lowercase
key "kind:" (e.g., ensure the resource for "alloy-otel-collector-config" reads
"kind: ConfigMap") and verify indentation and spacing remain valid under the
existing apiVersion and metadata entries.
In @helm/charts/monitoring/templates/grafana_datasources.yaml:
- Around line 1-25: The VictoriaTraces Jaeger datasource in the Grafana
ConfigMap is using an incomplete URL; update the datasources.yaml entry for the
datasource with name "VictoriaTraces" (type: jaeger) to append the required path
by changing the url value to include "/select/jaeger" (i.e., point the url for
VictoriaTraces to http://{{ .Values.traces.fullnameOverride
}}-server:10428/select/jaeger) so Grafana can reach the Jaeger Query Service.
In @helm/charts/monitoring/templates/networkpolicy.yaml:
- Around line 46-65: The NetworkPolicy created by the grafana ingress template
currently uses a hardcoded ipBlock cidr: 0.0.0.0/0 which opens port 3000 to the
world; change the template to use a configurable value (e.g.,
.Values.grafana.ingress.cidr) and only render the ipBlock when that value is
set, otherwise prefer a safer selector-based rule (e.g., namespaceSelector or
podSelector for your ingress controller); update the template references around
the NetworkPolicy named {{ include "monitoring.fullname" . }}-grafana-ingress
and the grafana label matchLabels.app.kubernetes.io/instance: {{
.Values.grafana.fullnameOverride }} to conditionally emit either the ipBlock
with the configured CIDR or a namespaceSelector/podSelector targeting
ingress-controller pods, and add the new values entry (grafana.ingress.cidr) to
values.yaml with a secure default (unset or loopback) and documentation.
- Around line 8-44: Update the NetworkPolicy pod selectors to match the Helm
release name instead of per-component fullnameOverride: replace the
podSelector.matchLabels value for the Alloy-alloy-ingress policy (currently
using {{ .Values.alloy.fullnameOverride }}) with {{ .Release.Name }}, and in the
Victoria-access policy replace the matchExpressions/values list (currently
containing {{ .Values.metrics.fullnameOverride }}, {{
.Values.logs.fullnameOverride }}, {{ .Values.traces.fullnameOverride }}) with a
single value {{ .Release.Name }}, and also replace the two ingress from
podSelector.matchLabels entries (currently using {{
.Values.alloy.fullnameOverride }} and {{ .Values.grafana.fullnameOverride }})
with {{ .Release.Name }} so selectors match the standard
app.kubernetes.io/instance label.
In @helm/templates/deployment.yaml:
- Around line 1-4: The Helm template header uses Helm comment delimiters "{{- /*
... */ -}}" which are not valid YAML and break yamllint; replace that header
with plain YAML comments prefixed by "#" (e.g., use "# SPDX-FileCopyrightText:
..." and "# SPDX-License-Identifier: ...") in helm/templates/deployment.yaml or
adjust CI yamllint configuration to exclude helm/templates/** so lint runs only
on rendered YAML.
- Around line 44-63: OTEL_EXPORTER_OTLP_ENDPOINT is using the wrong Helm value
key (.Values.monitoring.alloy.fullnameOverride) which doesn't match the
values.yaml key (.Values.monitoring.alloy.fullNameOverride), causing the
endpoint to render empty; update the template in deployment.yaml to use the
exact key name used in values.yaml (change fullnameOverride to fullNameOverride)
or alternatively rename the key in values.yaml to fullnameOverride so both
match, ensuring OTEL_EXPORTER_OTLP_ENDPOINT references the correct
.Values.monitoring.alloy.fullNameOverride.
- Around line 51-63: The OTLP endpoint value is missing a URL scheme; update the
OTEL_EXPORTER_OTLP_ENDPOINT env var to include a scheme (e.g. set value to
"http://{{ .Values.monitoring.alloy.fullnameOverride }}:4317" or "https://{{
.Values.monitoring.alloy.fullnameOverride }}:4317" when TLS is required) while
keeping OTEL_EXPORTER_OTLP_PROTOCOL=grpc and the other OTEL_* vars unchanged.
In @helm/values.yaml:
- Around line 18-27: The value key for the alloy chart is incorrectly cased as
fullNameOverride and will be ignored by templates expecting fullnameOverride;
rename the key monitoring.alloy.fullNameOverride to
monitoring.alloy.fullnameOverride so the templates that reference
fullnameOverride pick up the override, and verify any other occurrences of
fullNameOverride in values.yaml are changed to fullnameOverride to keep naming
consistent with chart templates.
- Around line 14-17: Update the monitoring blurb for clarity and typos: change
the header "monitoring settings" to "Monitoring settings", keep the "not
production-ready" note, and reword the paragraph to a single clear sentence such
as: "The monitoring subchart is intended only for development and testing and is
not production-ready; use it to verify SonicWeb monitoring functionality or as a
starting point for a more robust production monitoring solution." Ensure
punctuation and capitalization are corrected and remove redundant phrasing.
🧹 Nitpick comments (1)
helm/charts/monitoring/templates/alloy_configmap.yaml (1)
3-6: ConfigMap name should be release-scoped to avoid collisions.
Proposed change:
  metadata:
-   name: alloy-otel-collector-config
+   name: {{ include "monitoring.fullname" . }}-alloy-otel-collector-config
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- helm/charts/monitoring/templates/alloy_configmap.yaml
- helm/charts/monitoring/templates/grafana_datasources.yaml
- helm/charts/monitoring/templates/networkpolicy.yaml
- helm/charts/monitoring/values.yaml
- helm/templates/deployment.yaml
- helm/values.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
- helm/charts/monitoring/values.yaml
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-10T17:00:17.035Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:17.035Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
Applied to files:
helm/values.yaml
📚 Learning: 2025-09-21T14:10:51.227Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 193
File: governance.md:1-4
Timestamp: 2025-09-21T14:10:51.227Z
Learning: In the SonicWeb repository, AlphaOne1 prefers to maintain original author attribution in SPDX headers even when it differs from the standardized format used elsewhere in the repo, particularly for governance and foundational documents.
Applied to files:
helm/values.yaml
🪛 YAMLlint (1.37.1)
helm/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/charts/monitoring/templates/networkpolicy.yaml
[warning] 6-6: too many spaces after colon
(colons)
[warning] 16-16: too many spaces after hyphen
(hyphens)
[warning] 17-17: too many spaces after hyphen
(hyphens)
[warning] 22-22: too many spaces after colon
(colons)
[warning] 28-28: too many spaces after hyphen
(hyphens)
[warning] 28-28: too many spaces after colon
(colons)
[warning] 37-37: too many spaces after hyphen
(hyphens)
[warning] 38-38: too many spaces after hyphen
(hyphens)
[warning] 41-41: too many spaces after hyphen
(hyphens)
[warning] 48-48: too many spaces after colon
(colons)
[warning] 58-58: too many spaces after hyphen
(hyphens)
[warning] 61-61: too many spaces after hyphen
(hyphens)
[warning] 64-64: too many spaces after hyphen
(hyphens)
[warning] 65-65: too many spaces after colon
(colons)
[error] 8-8: syntax error: expected , but found ''
(syntax)
helm/charts/monitoring/templates/grafana_datasources.yaml
[error] 4-4: syntax error: expected , but found ''
(syntax)
🔇 Additional comments (1)
helm/charts/monitoring/templates/alloy_configmap.yaml (1)
6-43: All Alloy/OTel component syntax is valid for Alloy 1.5.1. The otelcol.receiver.otlp, otelcol.exporter.prometheus, and otelcol.exporter.otlp blocks are correctly configured with valid forward_to wiring and tls { insecure = true } settings. The prometheus.remote_write endpoint configuration is also correct. No syntax corrections needed.
  name: {{ include "monitoring.fullname" . }}-alloy-ingress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: {{ .Values.alloy.fullnameOverride }}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: { } # Allows access from all pods in the same namespace
---
# Victoria Instances: Access restricted to Alloy (Write) and Grafana (Read)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ include "monitoring.fullname" . }}-victoria-access
spec:
  podSelector:
    matchExpressions:
      - key: app.kubernetes.io/instance
        operator: In
        values:
          - {{ .Values.metrics.fullnameOverride }}
          - {{ .Values.logs.fullnameOverride }}
          - {{ .Values.traces.fullnameOverride }}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/instance: {{ .Values.alloy.fullnameOverride }}
        - podSelector:
            matchLabels:
              app.kubernetes.io/instance: {{ .Values.grafana.fullnameOverride }}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, explore the monitoring chart structure
find helm/charts/monitoring -type f -name "*.yaml" -o -name "*.yml" | head -20
Repository: AlphaOne1/sonicweb
Length of output: 294
🏁 Script executed:
#!/bin/bash
# Look for deployment/statefulset templates in the monitoring chart
fd -t f '\.(yaml|yml)$' helm/charts/monitoring/templates | grep -E '(deployment|statefulset)'
Repository: AlphaOne1/sonicweb
Length of output: 44
🏁 Script executed:
#!/bin/bash
# Check the values.yaml to understand the structure
cat helm/charts/monitoring/values.yaml 2>/dev/null | head -100
Repository: AlphaOne1/sonicweb
Length of output: 1368
🏁 Script executed:
#!/bin/bash
# Check Chart.yaml to see dependencies
cat helm/charts/monitoring/Chart.yaml
Repository: AlphaOne1/sonicweb
Length of output: 1699
🏁 Script executed:
#!/bin/bash
# Look for subchart directories or check if they're defined elsewhere
ls -la helm/charts/monitoring/
Repository: AlphaOne1/sonicweb
Length of output: 352
🌐 Web query:
Helm chart app.kubernetes.io/instance label standard behavior fullnameOverride
💡 Result:
Short answer: by convention Helm sets app.kubernetes.io/instance to the Helm release name ({{ .Release.Name }}). fullnameOverride/nameOverride change how resource metadata.name is generated (the chart's fullname helper) but do NOT automatically change app.kubernetes.io/instance unless the chart's templates/helpers explicitly use fullnameOverride for that label. See Helm best-practices and Kubernetes recommended-labels. [1][2][3]
Sources
- Helm chart best-practices — Labels (app.kubernetes.io/instance should be {{ .Release.Name }}). [1]
- Kubernetes recommended labels (app.kubernetes.io/instance = unique instance name). [2]
- Common Helm helper examples showing fullnameOverride affects object names while selector/labels use release name unless overridden in chart helpers. [3]
🌐 Web query:
Grafana Helm chart app.kubernetes.io/instance label what value
💡 Result:
The Grafana Helm chart sets app.kubernetes.io/instance to the Helm release name ({{ .Release.Name }}). [1][2]
References:
- Helm chart label best-practices (app.kubernetes.io/instance -> {{ .Release.Name }}). [1]
- Grafana-related Helm chart templates/values showing app.kubernetes.io/instance = "{{ .Release.Name }}". [2]
🌐 Web query:
Alloy Helm chart app.kubernetes.io/instance label pod selector
💡 Result:
Short answer: The Alloy Helm chart labels pods with the standard Kubernetes app labels (including app.kubernetes.io/instance set to the Helm release name) and uses those labels in selectors (spec.selector.matchLabels and affinity labelSelectors). That means services, Deployments/StatefulSets and affinity/selectors expect pods to have app.kubernetes.io/instance = <Release.Name>; changing the release/name or trying to change those selector labels later will break upgrades (selector.matchLabels is immutable) or leave selectors not matching pods. [1][2]
What to do:
- Ensure your pods/template.metadata.labels include app.kubernetes.io/instance: {{ .Release.Name }} so selectors match. [2]
- Don’t put chart-version labels (helm.sh/chart) into selector.matchLabels (immutable). Use annotations or common labels for non-selector metadata. [3]
- If you need different instance labeling, set it consistently via values/commonLabels or release name; update selectors only by recreating resources if necessary. [1][2]
References:
- Alloy docs (Helm chart/kubernetes guidance showing labels usage). [2]
- Grafana Alloy GitHub issues demonstrating app.kubernetes.io/instance used in selectors and related conflicts. [1]
Sources:
[1] Grafana Alloy GitHub issues (instance label conflicts / selector examples).
[2] Grafana Alloy docs / Helm chart values showing app.kubernetes.io/instance set to {{ .Release.Name }} in selectors.
[3] Kubernetes / Helm discussion on immutable spec.selector.matchLabels (StackOverflow / Kubernetes docs).
🌐 Web query:
VictoriaMetrics Helm chart app.kubernetes.io/instance label pods
💡 Result:
Short answer: Yes — VictoriaMetrics charts/operator add the app.kubernetes.io/instance label to managed resources (including pod templates), and the charts expose podLabels/podLabels (or podLabels in values.yaml) to add extra pod-only labels.
Evidence:
- Operator documentation: managed resources include immutable labels app.kubernetes.io/instance (and others). [1]
- Helm chart values show podLabels / podLabels (extra labels for pods) and name/instance handling in values.yaml. [2][3]
If you want, I can show the exact values.yaml snippet and an example kubectl output for your chart version.
🌐 Web query:
Helm dependency subchart release name how determined parent release name
💡 Result:
Short answer: a subchart (dependency) inherits the parent release's name — Helm sets .Release.Name to the top-level release name for all templates, including subcharts. So the name you pass to helm install (or the generated name) is what both parent and subchart see. Use an alias only to change the dependency's chart name in values/conditions, not the release name. [1][2][3]
References:
- Helm chart template guide — Release object (.Release.Name). [1]
- Helm dependencies / subcharts docs (aliases, conditions). [2]
- Community explanation / example showing subcharts see parent release name. [3]
🌐 Web query:
Helm subchart effective release name with alias Chart.yaml
💡 Result:
Short answer: alias only changes the key under .Subcharts and dependency lookup; it does not change the chart's Chart.yaml name or .Release.Name (the release name is the umbrella release name for all subcharts). To get the subchart's actual Chart.Name inside the umbrella templates (when you used alias in Chart.yaml), call the template with the subchart object looked up by the real chart name, e.g.:
- Example dependency in Chart.yaml:
  dependencies:
    - name: my-test-subchart
      alias: myTestSubchart
      ...
- Use this in the umbrella templates to obtain the subchart's own name (from its Chart.yaml):
  {{ include "my-test-subchart.name" (get .Subcharts "my-test-subchart") }}
Notes:
- .Release.Name is the parent release name for all subcharts.
- alias affects how the chart appears under .Subcharts (the key), not the chart's Chart.yaml name.
Sources: Helm docs (dependencies/alias). [1] StackOverflow example showing include + get to retrieve the real chart name. [2]
NetworkPolicy selectors will not match pods—fix label selectors to use release name.
The app.kubernetes.io/instance label on all pods will be set to the Helm release name (e.g., monitoring), not the per-component fullnameOverride values. Standard Helm practice sets this label to {{ .Release.Name }} across all subcharts, and fullnameOverride only affects resource names, not labels.
The selectors on lines 12 and 32-36 will fail to match any pods:
podSelector:
  matchLabels:
    app.kubernetes.io/instance: {{ .Values.alloy.fullnameOverride }} # matches "monitoring-alloy"

Should be:

podSelector:
  matchLabels:
    app.kubernetes.io/instance: {{ .Release.Name }} # matches "monitoring"

Apply this correction to all pod selectors in both NetworkPolicies (lines 12, 34-36).
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 16-16: too many spaces after hyphen
(hyphens)
[warning] 17-17: too many spaces after hyphen
(hyphens)
[warning] 22-22: too many spaces after colon
(colons)
[warning] 28-28: too many spaces after hyphen
(hyphens)
[warning] 28-28: too many spaces after colon
(colons)
[warning] 37-37: too many spaces after hyphen
(hyphens)
[warning] 38-38: too many spaces after hyphen
(hyphens)
[warning] 41-41: too many spaces after hyphen
(hyphens)
[error] 8-8: syntax error: expected , but found ''
(syntax)
🤖 Prompt for AI Agents
In @helm/charts/monitoring/templates/networkpolicy.yaml around lines 8 - 44,
Update the NetworkPolicy pod selectors to match the Helm release name instead of
per-component fullnameOverride: replace the podSelector.matchLabels value for
the Alloy-alloy-ingress policy (currently using {{
.Values.alloy.fullnameOverride }}) with {{ .Release.Name }}, and in the
Victoria-access policy replace the matchExpressions/values list (currently
containing {{ .Values.metrics.fullnameOverride }}, {{
.Values.logs.fullnameOverride }}, {{ .Values.traces.fullnameOverride }}) with a
single value {{ .Release.Name }}, and also replace the two ingress from
podSelector.matchLabels entries (currently using {{
.Values.alloy.fullnameOverride }} and {{ .Values.grafana.fullnameOverride }})
with {{ .Release.Name }} so selectors match the standard
app.kubernetes.io/instance label.
# Grafana: Allow access via Ingress (from outside the namespace)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ include "monitoring.fullname" . }}-grafana-ingress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: {{ .Values.grafana.fullnameOverride }}
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Allows access from Ingress Controllers (often in a different namespace)
        # or externally via LoadBalancer
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 3000
Grafana ingress is effectively open to all sources (0.0.0.0/0).
Given this monitoring stack is dev/test (based on learnings), this might be acceptable, but it’s a sharp edge if someone enables it in a less trusted cluster; consider restricting to ingress-controller namespaces/pods or making the CIDR configurable.
Possible tightening (make CIDR configurable)
- - ipBlock:
-     cidr: 0.0.0.0/0
+ - ipBlock:
+     cidr: {{ .Values.grafana.ingressCIDR | default "0.0.0.0/0" | quote }}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# Grafana: Allow access via Ingress (from outside the namespace)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ include "monitoring.fullname" . }}-grafana-ingress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: {{ .Values.grafana.fullnameOverride }}
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Allows access from Ingress Controllers (often in a different namespace)
        # or externally via LoadBalancer
        - ipBlock:
            cidr: {{ .Values.grafana.ingressCIDR | default "0.0.0.0/0" | quote }}
      ports:
        - protocol: TCP
          port: 3000
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 48-48: too many spaces after colon
(colons)
[warning] 58-58: too many spaces after hyphen
(hyphens)
[warning] 61-61: too many spaces after hyphen
(hyphens)
[warning] 64-64: too many spaces after hyphen
(hyphens)
[warning] 65-65: too many spaces after colon
(colons)
🤖 Prompt for AI Agents
In @helm/charts/monitoring/templates/networkpolicy.yaml around lines 46 - 65,
The NetworkPolicy created by the grafana ingress template currently uses a
hardcoded ipBlock cidr: 0.0.0.0/0 which opens port 3000 to the world; change the
template to use a configurable value (e.g., .Values.grafana.ingress.cidr) and
only render the ipBlock when that value is set, otherwise prefer a safer
selector-based rule (e.g., namespaceSelector or podSelector for your ingress
controller); update the template references around the NetworkPolicy named {{
include "monitoring.fullname" . }}-grafana-ingress and the grafana label
matchLabels.app.kubernetes.io/instance: {{ .Values.grafana.fullnameOverride }}
to conditionally emit either the ipBlock with the configured CIDR or a
namespaceSelector/podSelector targeting ingress-controller pods, and add the new
values entry (grafana.ingress.cidr) to values.yaml with a secure default (unset
or loopback) and documentation.
{{- /*
SPDX-FileCopyrightText: 2026 The SonicWeb contributors.
SPDX-License-Identifier: MPL-2.0
*/ -}}
YAMLlint will choke on Helm template comment delimiters ({{- ... -}}).
If YAMLlint runs in CI over helm/templates/**, either exclude Helm templates (recommended) or switch the header to YAML # comments so it remains valid YAML pre-render.
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🤖 Prompt for AI Agents
In @helm/templates/deployment.yaml around lines 1 - 4, The Helm template header
uses Helm comment delimiters "{{- /* ... */ -}}" which are not valid YAML and
break yamllint; replace that header with plain YAML comments prefixed by "#"
(e.g., use "# SPDX-FileCopyrightText: ..." and "# SPDX-License-Identifier: ...")
in helm/templates/deployment.yaml or adjust CI yamllint configuration to exclude
helm/templates/** so lint runs only on rendered YAML.
ports:
  - name: http
    containerPort: {{ .Values.service.port }}
    protocol: TCP
  - name: metrics
    containerPort: {{ .Values.service.metricsPort }}
    protocol: TCP
{{- if .Values.monitoring.enabled }}
env:
  - name: OTEL_LOGS_EXPORTER
    value: otlp
  - name: OTEL_METRICS_EXPORTER
    value: otlp
  - name: OTEL_TRACES_EXPORTER
    value: otlp
  - name: OTEL_EXPORTER_OTLP_PROTOCOL
    value: grpc
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: {{ .Values.monitoring.alloy.fullnameOverride }}:4317
{{- end }}
OTEL exporter endpoint likely renders empty due to values key mismatch (fullnameOverride vs fullNameOverride).
OTEL_EXPORTER_OTLP_ENDPOINT uses .Values.monitoring.alloy.fullnameOverride, but the provided helm/values.yaml defines monitoring.alloy.fullNameOverride (different key). This will likely render as :4317 and break telemetry export.
Proposed fix (make keys consistent)
- - name: OTEL_EXPORTER_OTLP_ENDPOINT
- value: {{ .Values.monitoring.alloy.fullnameOverride }}:4317
+ - name: OTEL_EXPORTER_OTLP_ENDPOINT
+ value: {{ .Values.monitoring.alloy.fullnameOverride }}:4317

- alloy:
- fullNameOverride: monitoring-alloy
+ alloy:
+ fullnameOverride: monitoring-alloy

🤖 Prompt for AI Agents
In @helm/templates/deployment.yaml around lines 44 - 63,
OTEL_EXPORTER_OTLP_ENDPOINT is using the wrong Helm value key
(.Values.monitoring.alloy.fullnameOverride) which doesn't match the values.yaml
key (.Values.monitoring.alloy.fullNameOverride), causing the endpoint to render
empty; update the template in deployment.yaml to use the exact key name used in
values.yaml (change fullnameOverride to fullNameOverride) or alternatively
rename the key in values.yaml to fullnameOverride so both match, ensuring
OTEL_EXPORTER_OTLP_ENDPOINT references the correct
.Values.monitoring.alloy.fullNameOverride.
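As a side note on how these values are consumed: the Go OTLP gRPC exporters read OTEL_EXPORTER_OTLP_ENDPOINT themselves, but the endpoint can also be pinned in code, which sidesteps the key-mismatch and scheme questions entirely. A hedged sketch; the collector service name mirrors the chart values discussed above and is an assumption, and the options shown are the standard otlptracegrpc ones rather than necessarily what this PR uses:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
)

// newTraceExporter pins the OTLP gRPC endpoint in code instead of relying on
// OTEL_EXPORTER_OTLP_ENDPOINT. WithEndpoint takes host:port without a scheme;
// WithInsecure matches a plain in-cluster connection to the Alloy collector.
func newTraceExporter(ctx context.Context) (*otlptrace.Exporter, error) {
	return otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("monitoring-alloy:4317"), // assumed collector service name
		otlptracegrpc.WithInsecure(),
	)
}
```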
helm/values.yaml
Outdated
# monitoring settings
# The monitoring subchart is not intended to replace a robust monitoring solution. Its primary use is to
# verify the monitoring functionality of SonicWeb. It can be used as a starting point for a more robust
# productive monitoring solution.
Minor wording/typos in chart docs (improves readability, especially since this is explicitly dev/test). Based on learnings, the “not production-ready” note is helpful—keep it.
Proposed edits
-# productive monitoring solution.
+# production monitoring solution.
@@
-#This section builds out the service account more information can be found here: https://kubernetes.io/docs/concepts/security/service-accounts/
+# This section builds out the service account more information can be found here: https://kubernetes.io/docs/concepts/security/service-accounts/
@@
-# This is for the secretes for pulling an image from a private repository more information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+# This is for the secrets for pulling an image from a private repository more information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

🤖 Prompt for AI Agents
In @helm/values.yaml around lines 14 - 17, Update the monitoring blurb for
clarity and typos: change the header "monitoring settings" to "Monitoring
settings", keep the "not production-ready" note, and reword the paragraph to a
single clear sentence such as: "The monitoring subchart is intended only for
development and testing and is not production-ready; use it to verify SonicWeb
monitoring functionality or as a starting point for a more robust production
monitoring solution." Ensure punctuation and capitalization are corrected and
remove redundant phrasing.
helm/values.yaml
Outdated
monitoring:
  # Controls if monitoring setup should be included.
  enabled: false
  grafana:
    adminUser: admin
    adminPassword: "" # empty, _must_ be provided with --set monitoring.grafana.adminPassword=<...>

  alloy:
    fullNameOverride: monitoring-alloy
Critical: fullNameOverride key casing won’t be read by templates expecting fullnameOverride.
Given other templates in this PR reference ...fullnameOverride, monitoring.alloy.fullNameOverride will be ignored and break rendered names/endpoints.
Proposed fix
monitoring:
@@
alloy:
- fullNameOverride: monitoring-alloy
+ fullnameOverride: monitoring-alloy

🤖 Prompt for AI Agents
In @helm/values.yaml around lines 18 - 27, The value key for the alloy chart is
incorrectly cased as fullNameOverride and will be ignored by templates expecting
fullnameOverride; rename the key monitoring.alloy.fullNameOverride to
monitoring.alloy.fullnameOverride so the templates that reference
fullnameOverride pick up the override, and verify any other occurrences of
fullNameOverride in values.yaml are changed to fullnameOverride to keep naming
consistent with chart templates.
…place fullnameOverride values with dynamic templates, update dependencies, and enhance OpenTelemetry ConfigMaps and NetworkPolicies. Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @helm/charts/monitoring/templates/_helpers.tpl:
- Around line 23-25: The OTEL endpoint must use the monitoring subchart's
overridden name when podmanKube is enabled; update the helper logic in
templates/_helpers.tpl (the block that sets $relName via .Values.podmanKube) to
expose a deterministic alloy service name (e.g., a helper/template or variable
that yields "monitoring-alloy" when .Values.podmanKube is true and "{{
.Release.Name }}-alloy" otherwise), then change the parent chart's OTEL endpoint
reference in helm/templates/deployment.yaml (where it currently uses "{{
.Release.Name }}-alloy") to use that helper/variable instead so the OTEL
collector points to the correct service name in both modes.
In @helm/templates/deployment.yaml:
- Around line 51-63: The OTEL exporter endpoint is pointing to the wrong service
name: OTEL_EXPORTER_OTLP_ENDPOINT is set to http://{{ .Release.Name
}}-alloy:4317 but the Alloy collector is deployed under the monitoring subchart
as {{ .Release.Name }}-monitoring-alloy; update the OTEL_EXPORTER_OTLP_ENDPOINT
environment value in the deployment template to use http://{{ .Release.Name
}}-monitoring-alloy:4317 so the app targets the correct monitoring-alloy
service.
🧹 Nitpick comments (1)
helm/charts/monitoring/templates/alloy_configmap.yaml (1)
9-9: Consider using Helm templating for the ConfigMap name. The ConfigMap name is hardcoded, which may cause naming collisions when deploying multiple releases. Consider using a templated name for consistency with other monitoring resources.
♻️ Suggested refactor
- name: alloy-collector-config
+ name: {{ include "monitoring.fullname" . }}-alloy-collector-config
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (8)
- go.mod
- helm/charts/monitoring/templates/_helpers.tpl
- helm/charts/monitoring/templates/alloy_configmap.yaml
- helm/charts/monitoring/templates/grafana_datasources.yaml
- helm/charts/monitoring/templates/networkpolicy.yaml
- helm/charts/monitoring/values.yaml
- helm/templates/deployment.yaml
- helm/values.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
- helm/charts/monitoring/values.yaml
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2026-01-10T17:00:17.035Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:17.035Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
Applied to files:
helm/charts/monitoring/templates/_helpers.tpl
helm/values.yaml
helm/charts/monitoring/templates/grafana_datasources.yaml
📚 Learning: 2025-09-21T14:10:51.227Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 193
File: governance.md:1-4
Timestamp: 2025-09-21T14:10:51.227Z
Learning: In the SonicWeb repository, AlphaOne1 prefers to maintain original author attribution in SPDX headers even when it differs from the standardized format used elsewhere in the repo, particularly for governance and foundational documents.
Applied to files:
go.mod
helm/values.yaml
📚 Learning: 2025-09-21T14:11:56.773Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 193
File: governance.md:38-44
Timestamp: 2025-09-21T14:11:56.773Z
Learning: In the SonicWeb repository governance process, AlphaOne1 prefers to rely on GitHub's native issue timestamp handling for time-based waiting periods rather than specifying explicit timezones, as the issue creation time provides a clear reference point regardless of viewer timezone representation.
Applied to files:
go.mod
📚 Learning: 2025-09-14T02:05:57.237Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 176
File: go.mod:46-46
Timestamp: 2025-09-14T02:05:57.237Z
Learning: The go.opentelemetry.io/auto/sdk v1.2.0 dependency causes build errors on 386 architecture. Version v1.1.0 should be used instead to maintain compatibility with 386 builds in multi-arch Docker configurations.
Applied to files:
go.mod
🪛 YAMLlint (1.37.1)
helm/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/charts/monitoring/templates/alloy_configmap.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/charts/monitoring/templates/grafana_datasources.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/charts/monitoring/templates/networkpolicy.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 18-18: too many spaces after hyphen
(hyphens)
[warning] 19-19: too many spaces after hyphen
(hyphens)
[warning] 24-24: too many spaces after colon
(colons)
[warning] 30-30: too many spaces after hyphen
(hyphens)
[warning] 30-30: too many spaces after colon
(colons)
[warning] 39-39: too many spaces after hyphen
(hyphens)
[warning] 40-40: too many spaces after hyphen
(hyphens)
[warning] 43-43: too many spaces after hyphen
(hyphens)
[warning] 49-49: too many spaces after colon
(colons)
[warning] 59-59: too many spaces after hyphen
(hyphens)
[warning] 60-60: too many spaces after hyphen
(hyphens)
[warning] 63-63: too many spaces after hyphen
(hyphens)
[warning] 64-64: too many spaces after colon
(colons)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (14)
helm/charts/monitoring/templates/_helpers.tpl (1)
1-74: LGTM! Well-structured Helm helpers with podmanKube support. The helper templates follow standard Helm patterns and provide good abstraction for naming, labeling, and service account management. The podmanKube mode introduces custom behavior that overrides the release name to "monitoring" for both fullname construction and instance labeling, which appears intentional for the Podman Kube deployment scenario.
go.mod (2)
14-30: LGTM! OpenTelemetry dependencies properly integrated. The added OTLP exporters (logs, metrics, traces) and stdout exporters align well with the modular OTEL setup described in the PR summary. The dependencies support both gRPC and HTTP protocols, providing flexibility for different deployment scenarios.
64-64: The upgrade to v1.2.1 correctly addresses the 386 architecture compatibility issues.v1.2.1 includes a specific fix for 32-bit architectures ("Fix uint32 bounding on 32 bit architectures"), resolving the 386 build errors that were present in v1.2.0. This upgrade is appropriate for multi-arch Docker configurations.
helm/charts/monitoring/templates/grafana_datasources.yaml (1)
1-30: LGTM! Grafana datasources properly configured. The ConfigMap correctly provisions three datasources (VictoriaMetrics, VictoriaLogs, VictoriaTraces) with appropriate types and endpoints. The dynamic fullname templating ensures correct service discovery, and the grafana_datasource: "1" label enables automatic loading by the Grafana sidecar.
Based on learnings, this monitoring chart is designed for development/testing convenience and is not production-ready without adjustments.
helm/charts/monitoring/templates/networkpolicy.yaml (1)
6-46: LGTM! Good network segmentation for monitoring components. The NetworkPolicies properly restrict access:
- Alloy accepts from any pod in the namespace (appropriate for metrics collection)
- Victoria components (metrics, logs, traces) are restricted to Alloy for writes and Grafana for reads
This provides a good security baseline for the monitoring stack.
helm/templates/deployment.yaml (1)
48-50: LGTM! Metrics port properly exposed. The new metrics port correctly exposes the application's metrics endpoint alongside the HTTP port, enabling Prometheus-style metrics collection by the monitoring stack.
helm/charts/monitoring/templates/alloy_configmap.yaml (2)
1-4: YAMLlint syntax error is a false positive. The static analysis tool reports a syntax error, but the Helm template comment syntax {{- /* ... */ -}} is valid and will be processed by Helm before YAML parsing.
13-32: LGTM! The OTLP receiver and Prometheus exporter configuration is correct. The use of Helm templating for the VictoriaMetrics endpoint ensures proper service discovery.
helm/values.yaml (6)
1-1: LGTM! The SPDX copyright year has been appropriately updated to 2026.
11-12: LGTM! The podmanKube flag provides useful flexibility for Podman deployments while maintaining Kubernetes compatibility by default.
14-26: LGTM! The monitoring configuration is well-documented with clear guidance that:
- It's designed for development/testing, not production
- The Grafana admin password must be explicitly provided via --set
- The empty default password prevents accidental deployments with weak credentials
The commented-out alloy.fullnameOverride section provides a clear extension point for customization.
42-52: LGTM! The serviceAccount configuration follows Kubernetes best practices with sensible defaults. The create: false default avoids unnecessary service account creation while allowing opt-in when needed.
80-83: Service ports correctly configured for non-root security context. The port configuration aligns with the non-root security context (runAsUser: 65532), since ports below 1024 require root privileges. All Helm templates correctly reference the values—no hardcoded ports remain. The new metricsPort is properly integrated.
Verify that the application is actually configured to listen on ports 8080 and 8081.
102-112: The values.yaml explicitly states that the 20Mi memory request "should be fine for most deployments." SonicWeb is a lightweight static file server, not a data-intensive application. The architecture supports horizontal scaling via HPA rather than requiring increased per-pod memory, and commented limits at 256Mi provide headroom if needed. The 20Mi request aligns with the application's documented design.
Likely an incorrect or invalid review comment.
| otelcol.exporter.otlp "vlogs" { | ||
| client { | ||
| endpoint = "{{ include "monitoring.fullname" . }}-logs-server:9428" | ||
| tls { insecure = true } | ||
| } | ||
| } | ||
| // forwarding to VictoriaTraces via OTLP | ||
| otelcol.exporter.otlp "vtraces" { | ||
| client { | ||
| endpoint = "{{ include "monitoring.fullname" . }}-traces-server:10428" | ||
| tls { insecure = true } | ||
| } | ||
| } |
🧩 Analysis chain
🏁 Script executed:
# First, locate the file in question
find . -name "alloy_configmap.yaml" -type f
Repository: AlphaOne1/sonicweb
Length of output: 117
🏁 Script executed:
# Check the structure of the monitoring chart
fd . helm/charts/monitoring --type f | head -20
Repository: AlphaOne1/sonicweb
Length of output: 340
🏁 Script executed:
# Read the specific file to verify the code snippet
cat -n helm/charts/monitoring/templates/alloy_configmap.yaml | sed -n '30,55p'
Repository: AlphaOne1/sonicweb
Length of output: 841
🏁 Script executed:
# Check the chart's values.yaml for any documentation about production readiness
cat -n helm/charts/monitoring/values.yaml | head -100
Repository: AlphaOne1/sonicweb
Length of output: 1866
🏁 Script executed:
# Look for any README or documentation in the monitoring chart
find helm/charts/monitoring -name "README*" -o -name "*.md" | xargs cat -n 2>/dev/null
Repository: AlphaOne1/sonicweb
Length of output: 44
Configure proper TLS for production deployments.
Both VictoriaLogs and VictoriaTraces exporters have insecure = true, disabling TLS certificate verification. While acceptable for this development/testing chart, production deployments must enable proper TLS configuration and remove the insecure = true setting.
| # Grafana: Allow access via Ingress | ||
| apiVersion: networking.k8s.io/v1 | ||
| kind: NetworkPolicy | ||
| metadata: | ||
| name: {{ include "monitoring.fullname" . }}-grafana-ingress | ||
| spec: | ||
| podSelector: | ||
| matchLabels: | ||
| app.kubernetes.io/name: grafana | ||
| policyTypes: | ||
| - Ingress | ||
| ingress: | ||
| - from: | ||
| - ipBlock: | ||
| cidr: 0.0.0.0/0 | ||
| ports: | ||
| - protocol: TCP | ||
| port: 3000 No newline at end of file |
Consider restricting Grafana ingress CIDR for production deployments.
The NetworkPolicy allows Grafana access from 0.0.0.0/0 (line 61), which permits unrestricted external access. While acceptable for development/testing environments, production deployments should restrict this to specific IP ranges (e.g., corporate network, VPN, or ingress controller IPs).
🔒 Suggested restriction pattern for production
For production use, replace the open CIDR with a restricted range:
- from:
- ipBlock:
- cidr: 0.0.0.0/0
+ cidr: 10.0.0.0/8 # Example: restrict to internal network
Or rely on ingress controller pods instead of IP-based filtering:
- from:
- - ipBlock:
- cidr: 0.0.0.0/0
+ - namespaceSelector:
+ matchLabels:
+ name: ingress-nginx
+ podSelector:
+ matchLabels:
+ app.kubernetes.io/name: ingress-nginx
ports:
Based on learnings, this monitoring chart is designed for development/testing, but security posture should still be considered for production adaptation.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # Grafana: Allow access via Ingress | |
| apiVersion: networking.k8s.io/v1 | |
| kind: NetworkPolicy | |
| metadata: | |
| name: {{ include "monitoring.fullname" . }}-grafana-ingress | |
| spec: | |
| podSelector: | |
| matchLabels: | |
| app.kubernetes.io/name: grafana | |
| policyTypes: | |
| - Ingress | |
| ingress: | |
| - from: | |
| - ipBlock: | |
| cidr: 0.0.0.0/0 | |
| ports: | |
| - protocol: TCP | |
| port: 3000 | |
| # Grafana: Allow access via Ingress | |
| apiVersion: networking.k8s.io/v1 | |
| kind: NetworkPolicy | |
| metadata: | |
| name: {{ include "monitoring.fullname" . }}-grafana-ingress | |
| spec: | |
| podSelector: | |
| matchLabels: | |
| app.kubernetes.io/name: grafana | |
| policyTypes: | |
| - Ingress | |
| ingress: | |
| - from: | |
| - ipBlock: | |
| cidr: 10.0.0.0/8 | |
| ports: | |
| - protocol: TCP | |
| port: 3000 |
| # Grafana: Allow access via Ingress | |
| apiVersion: networking.k8s.io/v1 | |
| kind: NetworkPolicy | |
| metadata: | |
| name: {{ include "monitoring.fullname" . }}-grafana-ingress | |
| spec: | |
| podSelector: | |
| matchLabels: | |
| app.kubernetes.io/name: grafana | |
| policyTypes: | |
| - Ingress | |
| ingress: | |
| - from: | |
| - ipBlock: | |
| cidr: 0.0.0.0/0 | |
| ports: | |
| - protocol: TCP | |
| port: 3000 | |
| # Grafana: Allow access via Ingress | |
| apiVersion: networking.k8s.io/v1 | |
| kind: NetworkPolicy | |
| metadata: | |
| name: {{ include "monitoring.fullname" . }}-grafana-ingress | |
| spec: | |
| podSelector: | |
| matchLabels: | |
| app.kubernetes.io/name: grafana | |
| policyTypes: | |
| - Ingress | |
| ingress: | |
| - from: | |
| - namespaceSelector: | |
| matchLabels: | |
| name: ingress-nginx | |
| podSelector: | |
| matchLabels: | |
| app.kubernetes.io/name: ingress-nginx | |
| ports: | |
| - protocol: TCP | |
| port: 3000 |
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 49-49: too many spaces after colon
(colons)
[warning] 59-59: too many spaces after hyphen
(hyphens)
[warning] 60-60: too many spaces after hyphen
(hyphens)
[warning] 63-63: too many spaces after hyphen
(hyphens)
[warning] 64-64: too many spaces after colon
(colons)
…dynamic naming, and update Alloy endpoint configuration. Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @helm/templates/deployment.yaml:
- Around line 51-64: The template directly accesses
.Values.monitoring.alloy.fullnameOverride which can nil-panic; change the
OTEL_EXPORTER_OTLP_ENDPOINT value expression (the line building http://...:4317
for OTEL_EXPORTER_OTLP_ENDPOINT) to use a safe lookup that only reads
fullnameOverride when alloy exists, e.g. wrap the nested access with an and/
default pattern: default (printf "%s-alloy" .Release.Name) (and
.Values.monitoring.alloy .Values.monitoring.alloy.fullnameOverride) so the
default name is used when .Values.monitoring.alloy is missing.
In @helm/values.yaml:
- Around line 14-27: The values file enables monitoring (monitoring.enabled) but
does not define monitoring.alloy which template code expects; update values.yaml
to add an explicit empty map for monitoring.alloy (e.g., set monitoring.alloy to
an empty map/object) so the chart templates that reference monitoring.alloy
won’t fail when monitoring.enabled is true.
🧹 Nitpick comments (1)
helm/charts/monitoring/templates/_helpers.tpl (1)
52-59: podmanKube sets a constant app.kubernetes.io/instance, which can cause label collisions.
When .Values.podmanKube is true (Line 54-58), app.kubernetes.io/instance becomes monitoring. If multiple rendered manifests are applied in the same namespace, selectors/labels may overlap unexpectedly. Consider deriving a stable-but-unique instance value (e.g., from monitoring.fullname, a user-provided override, or .Release.Name if still meaningful in your podman flow).
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
helm/charts/monitoring/templates/_helpers.tpl
helm/templates/deployment.yaml
helm/values.yaml
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:17.035Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
📚 Learning: 2026-01-10T17:00:17.035Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:17.035Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
Applied to files:
helm/charts/monitoring/templates/_helpers.tpl
helm/values.yaml
📚 Learning: 2025-09-21T14:10:51.227Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 193
File: governance.md:1-4
Timestamp: 2025-09-21T14:10:51.227Z
Learning: In the SonicWeb repository, AlphaOne1 prefers to maintain original author attribution in SPDX headers even when it differs from the standardized format used elsewhere in the repo, particularly for governance and foundational documents.
Applied to files:
helm/values.yaml
🪛 YAMLlint (1.37.1)
helm/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (3)
helm/templates/deployment.yaml (1)
1-4: If yamllint runs on Helm templates, it needs an exclusion/config tweak.
The YAMLlint error at Line 1 is typically because {{- ... -}} isn't valid YAML. If this is part of CI, consider excluding helm/templates/** (or configuring a Helm-aware lint step instead).
helm/values.yaml (2)
18-24: Please verify "default install" behavior with monitoring.enabled: true + empty Grafana password.
If the monitoring subchart deploys Grafana and requires a non-empty admin password, the current defaults (Line 23) may either fail installs or create an insecure setup—depending on chart logic.
76-83: Ports update looks consistent (adds metricsPort).
service.port: 8080 + service.metricsPort: 8081 aligns with the deployment exposing both ports; just ensure helm/templates/service.yaml selects/exports the metrics port as well.
helm/values.yaml
Outdated
| # monitoring settings | ||
| # The monitoring subchart is not intended to replace a robust monitoring solution. Its primary use is to | ||
| # verify the monitoring functionality of SonicWeb. It can be used as a starting point for a more robust | ||
| # production monitoring solution. | ||
| monitoring: | ||
| # Controls if monitoring setup should be included. | ||
| enabled: true | ||
| grafana: | ||
| adminUser: admin | ||
| adminPassword: "" # empty, _must_ be provided with --set monitoring.grafana.adminPassword=<...> | ||
|
|
||
| # alloy: | ||
| # fullnameOverride: monitoring-alloy | ||
|
|
Defaults don’t define monitoring.alloy, but templates assume it when monitoring is enabled.
With enabled: true (Line 20) and alloy: commented out (Line 25-27), this pairs badly with the deployment template’s nested access and can break rendering.
Proposed fix (define an empty override map so defaults render)
monitoring:
# Controls if monitoring setup should be included.
enabled: true
grafana:
adminUser: admin
adminPassword: "" # empty, _must_ be provided with --set monitoring.grafana.adminPassword=<...>
-# alloy:
-# fullnameOverride: monitoring-alloy
+ alloy:
+ # Optional override for the Alloy service name used by SonicWeb OTEL exporter config.
+ fullnameOverride: ""🤖 Prompt for AI Agents
In @helm/values.yaml around lines 14 - 27, The values file enables monitoring
(monitoring.enabled) but does not define monitoring.alloy which template code
expects; update values.yaml to add an explicit empty map for monitoring.alloy
(e.g., set monitoring.alloy to an empty map/object) so the chart templates that
reference monitoring.alloy won’t fail when monitoring.enabled is true.
…lifying label definitions and instance naming logic. Signed-off-by: Alexander Adam <alphaone23@gmail.com>
…ion, remove unused `fullnameOverride` values. Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Actionable comments posted: 0
🧹 Nitpick comments (2)
helm/templates/_helpers.tpl (1)
17-29: Lowercase enforcement improves Kubernetes compatibility, but may be a breaking change.
Adding | lower ensures all generated names comply with Kubernetes DNS naming requirements (lowercase only). This is a good practice. However, this is potentially a breaking change for existing deployments: if users have existing releases with uppercase characters in .Release.Name or .Values.fullnameOverride, the generated resource names will change, causing Helm to create new resources instead of updating existing ones. Consider documenting this change in release notes or a migration guide if there are existing users.
helm/charts/monitoring/templates/_helpers.tpl (1)
1-10: Minor: Comment placement issue in SPDX block.
The comment "Expand the name of the chart." on line 5 appears to belong to the monitoring.name template below but is placed inside the SPDX license header block. Consider moving it to a separate comment block above line 8. The monitoring.name template logic itself follows standard Helm patterns.
Suggested fix
 {{- /*
 SPDX-FileCopyrightText: 2026 The monitoring contributors.
 SPDX-License-Identifier: MPL-2.0
-
-Expand the name of the chart.
 */ -}}
+{{/*
+Expand the name of the chart.
+*/}}
 {{- define "monitoring.name" -}}
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
helm/charts/monitoring/templates/_helpers.tpl
helm/templates/_helpers.tpl
helm/templates/deployment.yaml
helm/values.yaml
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-10T17:00:17.035Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:17.035Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
Applied to files:
helm/values.yaml
helm/charts/monitoring/templates/_helpers.tpl
📚 Learning: 2025-09-21T14:10:51.227Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 193
File: governance.md:1-4
Timestamp: 2025-09-21T14:10:51.227Z
Learning: In the SonicWeb repository, AlphaOne1 prefers to maintain original author attribution in SPDX headers even when it differs from the standardized format used elsewhere in the repo, particularly for governance and foundational documents.
Applied to files:
helm/values.yaml
🪛 YAMLlint (1.37.1)
helm/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🔇 Additional comments (13)
helm/values.yaml (4)
11-20: Well-documented monitoring configuration with appropriate defaults. The monitoring block is properly documented as a dev/testing convenience rather than a production solution. The enabled: false default is appropriate, and requiring --set for the admin password prevents accidental deployment with weak credentials. Based on learnings, this aligns with the stated purpose of the monitoring subchart as a development/testing convenience feature.
36-46: ServiceAccount configuration looks good. The service account configuration follows Kubernetes best practices with sensible defaults (create: false, automount: true).
74-77: Port configuration is clear and well-documented.The separation of operational port (8080) and metrics port (8081) follows the common pattern of exposing application metrics on a dedicated port.
96-106: Reasonable resource requests with helpful documentation.The resource requests (100m CPU, 20Mi memory) are documented as lower bounds. The commented-out limits section with explanation about namespace-level control is a thoughtful pattern.
helm/templates/deployment.yaml (3)
1-4: SPDX header format is correct for Helm templates.The static analysis hint about a syntax error on line 1 is a false positive. YAMLlint does not understand Helm template syntax;
{{- /* ... */ -}}is the correct way to include comments that won't appear in the rendered output.
44-50: Container ports correctly expose both http and metrics endpoints.The dual-port configuration properly separates application traffic from metrics, enabling independent service mesh policies and security controls for each.
51-65: OTEL configuration is conditionally applied with sensible defaults.The use of
digto safely access nested values forfullnameOverridewith a fallback to the release-based naming convention is a good defensive pattern. The hardcoded OTLP/gRPC configuration is appropriate for the dev/testing purpose of this monitoring setup.One consideration: if users later need flexibility (e.g., HTTP protocol instead of gRPC), these values could be made configurable in
values.yaml. However, for the current dev/testing scope, the hardcoded approach keeps configuration simple.helm/templates/_helpers.tpl (1)
53-56: Selector labels remain stable.The selector labels correctly use the chart name and release name, which is the standard pattern. Note that
app.kubernetes.io/instancestill uses the raw.Release.Name(not lowercased), which is intentional for label identification purposes while the actual resource names are lowercased.helm/charts/monitoring/templates/_helpers.tpl (5)
12-28: LGTM!The
monitoring.fullnametemplate correctly applieslowerto ensure DNS-compatible resource names. The three-branch logic (fullnameOverride, release name contains chart name, combined name) follows standard Helm conventions with proper truncation at 63 characters.
30-35: LGTM!Standard chart label pattern with proper handling of semver
+characters and 63-character truncation.
37-47: LGTM!Standard Kubernetes recommended labels pattern with proper composition and quoting of AppVersion.
49-55: LGTM!Standard selector labels following Kubernetes recommended label conventions.
57-66: LGTM!Standard service account name helper with proper fallback logic.
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Actionable comments posted: 0
🧹 Nitpick comments (1)
helm/templates/deployment.yaml (1)
52-54: Consider simplifying the$alloyHostcomputation.The
toYaml | fromYamlround-trip is typically unnecessary when usingdigon.Values. This works correctly but could be simplified.♻️ Suggested simplification
- {{- $alloyHost := default (printf "%s-alloy" .Release.Name) - (dig "monitoring" "alloy" "fullnameOverride" "" - ( .Values | toYaml | fromYaml )) }} + {{- $alloyHost := default (printf "%s-alloy" .Release.Name) + (dig "monitoring" "alloy" "fullnameOverride" "" .Values) }}
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
helm/templates/deployment.yaml
🧰 Additional context used
🪛 YAMLlint (1.37.1)
helm/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (4)
helm/templates/deployment.yaml (4)
1-4: SPDX header format and year update looks correct.The static analysis hint from YAMLlint reporting a syntax error at line 1 is a false positive—YAMLlint doesn't understand Helm/Go template syntax. The
{{- /* ... */ -}}block comment is valid and correctly prevents the SPDX text from appearing in rendered output.
39-50: Container and ports configuration looks good.The addition of a dedicated
metricsport alongside the existinghttpport follows best practices for exposing application metrics separately from the main service traffic.
55-66: OTEL configuration follows standard conventions.The environment variables correctly configure all three telemetry signals (logs, metrics, traces) to use OTLP export over gRPC to the Alloy collector on the standard port 4317.
67-76: Probes and volume mounts configuration is correct.The indentation is consistent, and the conditional volumeMounts block is properly structured within the container definition.
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Actionable comments posted: 0
🧹 Nitpick comments (1)
helm/charts/monitoring/templates/_helpers.tpl (1)
9-11: Minor inconsistency:monitoring.namedoesn't applylowerwhilemonitoring.fullnamedoes.The
monitoring.fullnametemplate applies| lowerfor DNS compliance, butmonitoring.namedoesn't. Ifmonitoring.nameis used in contexts requiring lowercase (e.g., DNS labels, resource names), this could cause validation failures.Consider adding
| lowerfor consistency:♻️ Suggested fix
{{- define "monitoring.name" -}} -{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" | lower }} {{- end }}
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
helm/charts/monitoring/templates/_helpers.tplhelm/templates/_helpers.tpl
🚧 Files skipped from review as they are similar to previous changes (1)
- helm/templates/_helpers.tpl
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-01-10T17:00:17.035Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:17.035Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
Applied to files:
helm/charts/monitoring/templates/_helpers.tpl
🔇 Additional comments (6)
helm/charts/monitoring/templates/_helpers.tpl (6)
1-4: LGTM!Standard SPDX license header with appropriate formatting.
18-29: LGTM!The
monitoring.fullnamehelper correctly handles all override scenarios with proper DNS compliance (lowercase, 63 char truncation, suffix trimming).
34-36: LGTM!Standard chart label helper with proper handling of semver
+characters.
41-48: LGTM!Standard Kubernetes recommended labels following Helm best practices.
53-56: LGTM!Minimal and appropriate selector labels for pod selection consistency.
61-67: LGTM!Standard service account name helper with proper fallback logic.
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
|
@coderabbitai full review |
✅ Actions performed
Full review triggered.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @helm/charts/monitoring/Chart.yaml:
- Around line 20-42: The declared dependency versions in Chart.yaml are invalid;
update the version fields for grafana, alloy, victoria-metrics-single,
victoria-logs-single, and victoria-traces-single to actual published versions
from their respective repositories (use helm repo update and helm search repo or
consult the repo index.yaml to pick matching releases), then run helm dependency
update and test the chart to ensure the new versions are compatible with your
chart values and templates.
In @README.md:
- Line 116: Update the README deprecation text for the -trace-endpoint flag to
name the exact environment variable; replace the vague "OTEL_*_ENDPOINT" wording
with "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT" so it matches the variable used in
setupTraceEnvVars (referenced in main.go) and removes ambiguity for users.
🧹 Nitpick comments (9)
helm/templates/hpa.yaml (1)
22-27: Verify unusual list item indentation.The
- type:syntax uses 3 spaces after the hyphen, which is unconventional. While valid YAML, most Helm charts use a single space after hyphens (- type:). This inconsistency may cause maintenance friction.♻️ Suggested conventional indentation
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
-   - type: Resource
-     resource:
-       name: cpu
-       target:
-         type: Utilization
-         averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
helm/values.yaml (1)
36-46: Consider defaultingautomounttofalsefor security.When
serviceAccount.createis false butautomountdefaults totrue, this could lead to unintended API credential mounting if the account is created later. Settingautomount: falseby default follows the principle of least privilege.♻️ Suggested change
 serviceAccount:
   # Specifies whether a service account should be created
   create: false
   # Automatically mount a ServiceAccount's API credentials?
-  automount: true
+  automount: false
logging.go (1)
76-87: Consider pre-allocating the errors slice with expected capacity.Pre-allocating with
make([]error, 0)provides no benefit overvar errs []error. If you want to pre-allocate, use the handler count as capacity.♻️ Suggested optimization
 func (t *MultiHandler) Handle(ctx context.Context, r slog.Record) error {
-	errs := make([]error, 0)
+	var errs []error
 	for _, h := range t.handlers {
 		c := r.Clone()
 		if err := h.Handle(ctx, c); err != nil {
 			errs = append(errs, err)
 		}
 	}
 	return errors.Join(errs...)
 }
helm/charts/monitoring/templates/grafana_datasources.yaml (1)
6-30: LGTM!The ConfigMap correctly provisions Grafana datasources for VictoriaMetrics, VictoriaLogs, and VictoriaTraces with appropriate types and service URLs. The YAMLlint error is a false positive due to Helm template syntax.
Consider adding a trailing newline at end of file for POSIX compliance.
helm/charts/monitoring/templates/networkpolicy.yaml (1)
59-64: Consider restricting external access to Grafana.The
0.0.0.0/0CIDR allows unrestricted external access to Grafana. While this is acceptable for a dev/testing convenience chart (based on learnings), for any production-adjacent use, consider:
- Using a more restrictive CIDR range
- Adding a configurable value for the allowed CIDR
- Relying on Ingress with authentication instead
♻️ Optional: Make CIDR configurable
- from: - ipBlock: - cidr: 0.0.0.0/0 + cidr: {{ .Values.grafana.networkPolicy.allowedCidr | default "0.0.0.0/0" }}helm/charts/monitoring/templates/alloy_configmap.yaml (1)
9-9: Consider templating the ConfigMap name.The ConfigMap name is hardcoded as
alloy-collector-config. For consistency with other resources in the chart that usemonitoring.fullname, consider templating this name to avoid potential conflicts when multiple releases are deployed.♻️ Suggested change
metadata: - name: alloy-collector-config + name: {{ include "monitoring.fullname" . }}-alloy-confighelm/templates/_helpers.tpl (1)
9-11: Consider adding| lowertoSonicWeb.namefor consistency.The
SonicWeb.fullnametemplate now applies| lower, butSonicWeb.namedoes not. In the monitoring subchart,monitoring.nameincludes| lower. This inconsistency could lead to mixed-case label values inSonicWeb.selectorLabels(line 55) while resource names fromSonicWeb.fullnameare lowercase.♻️ Suggested change
{{- define "SonicWeb.name" -}} -{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" | lower }} {{- end }}helm/charts/monitoring/templates/_helpers.tpl (1)
1-4: SPDX attribution differs from parent chart.The copyright holder is "The monitoring contributors" but this chart is part of the SonicWeb project. Consider using "The SonicWeb contributors" for consistency with the parent chart (
helm/templates/_helpers.tpluses "The SonicWeb contributors").♻️ Suggested change
{{- /* -SPDX-FileCopyrightText: 2026 The monitoring contributors. +SPDX-FileCopyrightText: 2026 The SonicWeb contributors. SPDX-License-Identifier: MPL-2.0 */ -}}main.go (1)
294-301: Redundant tracing check:len(config.TraceEndpoint) > 0duplicates telemetry intent.The
enableTracingparameter (line 296) checks ifTraceEndpointis set, butsetupTraceEnvVarsalready setsOTEL_EXPORTER_OTLP_TRACES_ENDPOINT. With the new OTEL SDK flow, tracing is controlled via environment variables when telemetry is enabled. Consider whether this separate flag is still needed or if it could be consolidated.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
docker_root/favicon.svg is excluded by !**/*.svg
go.sum is excluded by !**/*.sum
sonicweb_logo.svg is excluded by !**/*.svg
📒 Files selected for processing (62)
.dockerignore.github/CODEOWNERS.github/FUNDING.yml.github/dependabot.yml.github/workflows/compliance.yml.github/workflows/dependency-review.yml.github/workflows/release.yml.github/workflows/scorecard.yml.github/workflows/security.yml.github/workflows/test.yml.gitignore.golangci.yaml.markdownlint.json.license.vscode/settings.json.licenseAUTHORS.mdCHANGELOG.mdCODE_OF_CONDUCT.mdCONTRIBUTING.mdDockerfileGOVERNANCE.mdMakefileREADME.mdSECURITY.mddocker_root/apple-touch-icon.png.licensedocker_root/favicon.ico.licensedocker_root/index.htmlgo.modgo.sum.licensehelm/.helmignorehelm/.helmignore.licensehelm/Chart.yamlhelm/charts/monitoring/Chart.yamlhelm/charts/monitoring/templates/_helpers.tplhelm/charts/monitoring/templates/alloy_configmap.yamlhelm/charts/monitoring/templates/grafana_datasources.yamlhelm/charts/monitoring/templates/networkpolicy.yamlhelm/charts/monitoring/values.yamlhelm/templates/NOTES.txthelm/templates/NOTES.txt.licensehelm/templates/_helpers.tplhelm/templates/deployment.yamlhelm/templates/hpa.yamlhelm/templates/ingress.yamlhelm/templates/service.yamlhelm/templates/serviceaccount.yamlhelm/templates/tests/test-connection.yamlhelm/values.yamlinstrumentation.gologging.gologo.tmpl.licensemain.gomain_test.goman/sonicweb.1.tmplman/sonicweb_de.1.tmplman/sonicweb_es.1.tmplmiddlewares.gonfpm.yaml.tmplserver.gotestroot/index.htmltestroot/spa.htmltestroot/testHeaders.conf.licensetls.go
💤 Files with no reviewable changes (2)
- helm/templates/NOTES.txt.license
- helm/.helmignore.license
🧰 Additional context used
🧠 Learnings (9)
📓 Common learnings
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:17.035Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
📚 Learning: 2025-09-21T14:10:51.227Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 193
File: governance.md:1-4
Timestamp: 2025-09-21T14:10:51.227Z
Learning: In the SonicWeb repository, AlphaOne1 prefers to maintain original author attribution in SPDX headers even when it differs from the standardized format used elsewhere in the repo, particularly for governance and foundational documents.
Applied to files:
docker_root/index.htmlhelm/.helmignoredocker_root/apple-touch-icon.png.licensetestroot/spa.htmlSECURITY.md.markdownlint.json.licenseAUTHORS.md.vscode/settings.json.licensenfpm.yaml.tmplman/sonicweb_es.1.tmplREADME.mdGOVERNANCE.md.github/CODEOWNERSCONTRIBUTING.mdtestroot/testHeaders.conf.licenseDockerfilehelm/values.yaml.github/workflows/dependency-review.ymllogo.tmpl.licensedocker_root/favicon.ico.license.dockerignoreCODE_OF_CONDUCT.md.gitignorego.sum.licenseman/sonicweb_de.1.tmplhelm/templates/_helpers.tplman/sonicweb.1.tmpltestroot/index.html.github/dependabot.ymlhelm/templates/NOTES.txtgo.modhelm/Chart.yaml
📚 Learning: 2025-08-21T22:41:16.515Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 146
File: .github/workflows/release.yml:64-66
Timestamp: 2025-08-21T22:41:16.515Z
Learning: In Go, when environment variables GOOS and GOARCH are set, `go env GOOS` and `go env GOARCH` will return those environment values rather than the host defaults. This means Makefiles using `$(shell go env GOOS)` and `$(shell go env GOARCH)` will correctly pick up CI-exported values.
Applied to files:
Makefile
📚 Learning: 2025-09-17T17:34:47.959Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 181
File: docker_root/index.html:79-85
Timestamp: 2025-09-17T17:34:47.959Z
Learning: In the sonicweb project, AlphaOne1 controls all Markdown content at build time (like README.md in Docker images), so DOMPurify sanitization is not required for their controlled markdown rendering scenarios. They acknowledge DOMPurify should be used with unknown/untrusted Markdown content.
Applied to files:
README.md
📚 Learning: 2025-09-21T12:04:15.890Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 192
File: .github/workflows/compliance.yml:71-71
Timestamp: 2025-09-21T12:04:15.890Z
Learning: In the SonicWeb project's compliance workflow (.github/workflows/compliance.yml), AlphaOne1 prefers to check ALL commits for DCO compliance without limiting to 200 commits, even on large initial pushes, prioritizing thoroughness over performance.
Applied to files:
.github/workflows/compliance.yml.github/CODEOWNERS.github/workflows/dependency-review.yml.github/workflows/scorecard.yml.github/dependabot.yml
📚 Learning: 2026-01-10T17:00:17.035Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:17.035Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
Applied to files:
helm/charts/monitoring/templates/alloy_configmap.yamlhelm/values.yamlhelm/charts/monitoring/templates/networkpolicy.yamlhelm/charts/monitoring/Chart.yamlhelm/charts/monitoring/templates/_helpers.tpl
📚 Learning: 2025-09-21T14:11:56.773Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 193
File: governance.md:38-44
Timestamp: 2025-09-21T14:11:56.773Z
Learning: In the SonicWeb repository governance process, AlphaOne1 prefers to rely on GitHub's native issue timestamp handling for time-based waiting periods rather than specifying explicit timezones, as the issue creation time provides a clear reference point regardless of viewer timezone representation.
Applied to files:
.github/CODEOWNERSgo.mod
📚 Learning: 2026-01-10T17:00:17.035Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:17.035Z
Learning: In helm/charts/monitoring/values.yaml, treat the monitoring chart as a development/testing convenience. For production, enable persistence (e.g., set persistence.enabled: true and configure persistence.size) and configure appropriate retention periods (e.g., retention or alerting/logging retention settings) to ensure data durability and operability in production environments.
Applied to files:
helm/charts/monitoring/values.yaml
📚 Learning: 2025-09-14T02:05:57.237Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 176
File: go.mod:46-46
Timestamp: 2025-09-14T02:05:57.237Z
Learning: The go.opentelemetry.io/auto/sdk v1.2.0 dependency causes build errors on 386 architecture. Version v1.1.0 should be used instead to maintain compatibility with 386 builds in multi-arch Docker configurations.
Applied to files:
go.modinstrumentation.go
🧬 Code graph analysis (1)
instrumentation.go (2)
logging.go (1)
NewMultiHandler(58-64)main.go (1)
ServerName(32-32)
🪛 YAMLlint (1.37.1)
helm/charts/monitoring/templates/alloy_configmap.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/templates/service.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 19-19: too many spaces after hyphen
(hyphens)
helm/templates/tests/test-connection.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/charts/monitoring/templates/networkpolicy.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 18-18: too many spaces after hyphen
(hyphens)
[warning] 19-19: too many spaces after hyphen
(hyphens)
[warning] 24-24: too many spaces after colon
(colons)
[warning] 30-30: too many spaces after hyphen
(hyphens)
[warning] 30-30: too many spaces after colon
(colons)
[warning] 39-39: too many spaces after hyphen
(hyphens)
[warning] 40-40: too many spaces after hyphen
(hyphens)
[warning] 43-43: too many spaces after hyphen
(hyphens)
[warning] 49-49: too many spaces after colon
(colons)
[warning] 59-59: too many spaces after hyphen
(hyphens)
[warning] 60-60: too many spaces after hyphen
(hyphens)
[warning] 63-63: too many spaces after hyphen
(hyphens)
[warning] 64-64: too many spaces after colon
(colons)
helm/templates/serviceaccount.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/charts/monitoring/templates/grafana_datasources.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/templates/hpa.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/templates/ingress.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🔇 Additional comments (79)
.github/FUNDING.yml (1)
1-1: LGTM!The SPDX copyright year update to 2026 is correct and consistent with the repository-wide metadata updates.
.github/workflows/compliance.yml (1)
1-1: LGTM!The SPDX copyright year update to 2026 is correct and consistent with the repository-wide metadata updates.
CODE_OF_CONDUCT.md (1)
1-1: LGTM!The SPDX copyright year update to 2026 is correct and consistent with the repository-wide metadata updates.
testroot/spa.html (1)
1-1: LGTM!Standard copyright year update to 2026.
testroot/testHeaders.conf.license (1)
1-1: LGTM!Standard copyright year update to 2026.
man/sonicweb_de.1.tmpl (1)
1-1: LGTM!Standard copyright year update to 2026.
.vscode/settings.json.license (1)
1-1: LGTM!Standard copyright year update to 2026.
AUTHORS.md (1)
1-1: LGTM!Standard copyright year update to 2026.
go.sum.license (1)
1-2: LGTM!Standard copyright year update, consistent with the repository-wide year bump to 2026.
docker_root/apple-touch-icon.png.license (1)
1-3: LGTM!Copyright year updated to 2026, consistent with repository-wide maintenance.
nfpm.yaml.tmpl (1)
1-2: LGTM!Copyright year updated to 2026. The nfpm package configuration remains correctly structured.
.github/CODEOWNERS (1)
1-4: LGTM!Copyright year updated to 2026. CODEOWNERS configuration remains correct.
.github/workflows/test.yml (1)
1-2: LGTM!Copyright year updated to 2026. The workflow configuration is well-structured with proper security hardening (pinned action versions, harden-runner steps) and remains unchanged.
CONTRIBUTING.md (1)
1-1: LGTM - Routine copyright year update.The SPDX copyright year update to 2026 is appropriate and consistent with the current date.
.github/workflows/dependency-review.yml (1)
1-1: LGTM - Routine copyright year update.The SPDX copyright year update to 2026 is appropriate and consistent with the current date.
SECURITY.md (1)
1-1: LGTM - Routine copyright year update.The SPDX copyright year update to 2026 is appropriate and consistent with the current date.
tls.go (1)
1-1: LGTM!Copyright year update to 2026 is appropriate. No functional changes to the TLS configuration logic.
.github/workflows/release.yml (1)
1-1: LGTM!Copyright year update to 2026 is appropriate. The release workflow configuration remains unchanged.
man/sonicweb.1.tmpl (1)
1-1: LGTM!Copyright year update to 2026 is appropriate. The man page template content remains unchanged.
Dockerfile (1)
1-1: LGTM!Copyright year update to 2026 is appropriate. The Dockerfile configuration, including security practices and multi-stage build setup, remains unchanged.
.markdownlint.json.license (1)
1-1: LGTM!Copyright year update to 2026 is appropriate.
docker_root/index.html (1)
1-1: SPDX copyright year update looks good.Standard annual copyright year bump to 2026 with no functional changes.
middlewares.go (2)
1-1: SPDX copyright year update looks good.Standard annual copyright year bump to 2026.
156-156: Grammar fix is correct.The plural "file handlers" appropriately reflects that multiple handlers could cause the redirect loop.
Makefile (1)
3-3: SPDX copyright year update looks good.Standard annual copyright year bump to 2026 with no functional changes to build targets.
GOVERNANCE.md (1)
1-2: Dual attribution correctly maintained.The original author attribution (Greg Wilson) is preserved while the SonicWeb contributors year is appropriately updated to 2026. Based on learnings, this aligns with the project's preference for maintaining original author attribution in governance documents.
.github/dependabot.yml (2)
1-1: SPDX copyright year update looks good.Standard annual copyright year bump to 2026.
26-29: Good addition of Helm ecosystem for Dependabot.The new Helm package ecosystem entry correctly targets the
/helmdirectory where the chart and new monitoring subchart reside. The weekly schedule is consistent with the other non-critical ecosystems (github-actions, docker).helm/.helmignore (1)
1-2: LGTM!Adding the SPDX license header to this file improves license compliance consistency across the repository.
testroot/index.html (1)
1-1: LGTM!Copyright year update is appropriate for the new calendar year.
docker_root/favicon.ico.license (1)
1-1: LGTM!Copyright year update is consistent with the repository-wide SPDX header updates.
.github/workflows/scorecard.yml (1)
1-1: LGTM!Copyright year update with no changes to workflow logic. The workflow maintains good security practices with pinned action versions.
.dockerignore (1)
1-1: LGTM!Copyright year update is consistent with the repository-wide SPDX header updates for 2026.
.gitignore (2)
1-1: LGTM: Copyright year updated for 2026.The year update is appropriate for the upcoming release cycle.
10-12: LGTM: Appropriate ignore patterns for Helm artifacts.The new patterns correctly exclude generated Helm artifacts (packaged charts and dependency lock files) from version control, which aligns with the monitoring subchart additions in this PR.
README.md (1)
2-2: LGTM: Copyright year updated for 2026.CHANGELOG.md (2)
1-1: LGTM: Copyright year updated for 2026.
4-10: LGTM: Clear and comprehensive release notes.The Release 1.7.0 entry accurately summarizes the key changes in this PR: monitoring infrastructure additions, OpenTelemetry integration improvements, and dependency updates. The items are appropriately scoped for a changelog.
helm/templates/hpa.yaml (1)
1-4: LGTM!The SPDX header wrapped in a Helm template block comment is a valid approach that prevents the header from appearing in rendered manifests. The YAMLlint error is a false positive since
{{- /*is valid Helm template syntax for block comments.helm/values.yaml (2)
11-20: Good documentation and security defaults.The monitoring block is well-documented as a dev/testing feature (aligned with the project's intent). The empty
adminPasswordwith a comment requiring explicit setting via--setis a secure pattern that prevents accidental deployment with default credentials.
96-106: LGTM!The resource requests are reasonable lower bounds for a web server (100m CPU, 20Mi memory). Leaving limits commented out with explanatory notes gives operators flexibility while providing guidance.
logging.go (2)
54-64: LGTM!The
MultiHandlerstruct and constructor are well-implemented. Usingslices.DeleteFuncto filter nil handlers is a clean approach that prevents nil pointer dereferences during log handling.
89-107: LGTM!The
WithAttrsandWithGroupmethods correctly implement the immutable handler pattern by creating newMultiHandlerinstances with transformed underlying handlers. Pre-allocation withlen(t.handlers)capacity is appropriate here.helm/templates/service.yaml (1)
19-22: No issues found. ThetargetPort: metricsin service.yaml correctly references the containerPort namedmetricsdefined in the Deployment template (line 48 ofhelm/templates/deployment.yaml)..golangci.yaml (1)
64-80: LGTM!The depguard allow list additions correctly permit the new OTEL packages needed for the monitoring integration: Prometheus client, otelslog bridge, OTLP exporters for logs/metrics/traces, stdout exporters, and semconv v1.37.0 (released June 25, 2025).
instrumentation.go (8)
40-47: Well-structured constants for OTLP configuration.The constants provide clear, maintainable identifiers for protocol and exporter types, avoiding magic strings throughout the implementation.
49-131: Solid OpenTelemetry bootstrap implementation with proper cleanup.The shutdown aggregation pattern is well-designed—collecting shutdown functions and joining errors on cleanup ensures all providers are properly terminated. The defensive nil checks before registering providers (lines 96, 108, 120) prevent issues when exporters are disabled.
The
MultiHandlerintegration at line 125-127 correctly preserves the existing slog handler while adding OTEL logging, and the snippet fromlogging.goconfirms nil handlers are filtered out.
133-150: Reasonable defaults for OTLP configuration.Using
os.LookupEnvcorrectly distinguishes between unset and empty values. Logging errors rather than failing is appropriate here since these are just convenience defaults.
161-184: Clean propagator configuration with sensible defaults.The implementation correctly falls back to TraceContext + Baggage when no propagators are specified. Using
strings.SplitSeq(Go 1.24+) is appropriate given thego 1.25directive in go.mod.
260-301: Tracer provider correctly implements OTEL exporter specification.The protocol override chain (specialized > general) follows OpenTelemetry conventions. Using
WithBatcherfor all exporters is the right choice for production workloads. The//nolint:nilnilannotation properly documents the intentional nil return when tracing is disabled.
378-397: Meter provider implementation handles multiple exporters correctly.The logic properly distinguishes between push-based exporters (OTLP, console) that need a
PeriodicReaderwrapper and pull-based exporters (Prometheus) that provide their own reader.Minor note: if multiple Prometheus exporters are specified (unlikely), only the last handler is retained (lines 381-383). This is acceptable since specifying the same exporter type multiple times is not a supported use case.
509-536: ImprovedserveMetricsAPI with explicit handler injection.The updated signature accepting
http.Handlerinstead of a boolean flag is cleaner—the caller controls the handler creation, and the function simply mounts it. The early return when bothmetricHandleris nil and pprof is disabled (line 512-513) correctly avoids starting an unnecessary server.
186-210: Comprehensive resource configuration.The resource includes service identification, SDK metadata, and runtime detection (process, container, OS, host). The
WithFromEnv()option properly supports OTEL_RESOURCE_ATTRIBUTES for deployment-time customization. BothServerNameandbuildInfoTagare properly defined in the same package and are accessible to this function.server.go (1)
1-1: LGTM!Copyright year update aligns with the repository-wide year bump to 2026.
helm/templates/NOTES.txt (1)
1-5: LGTM!SPDX header correctly wrapped in Helm template comment syntax to ensure it's stripped during rendering. The format is consistent with other Helm templates in this PR.
helm/templates/tests/test-connection.yaml (1)
1-4: LGTM!SPDX header correctly added using Helm template comment syntax. The YAMLlint error is a false positive—this is a Helm template file where
{{- /* ... */ -}}is valid Go template syntax that gets stripped during rendering, leaving valid YAML.go.mod (1)
14-31: LGTM! New OpenTelemetry dependencies are well-aligned.The added dependencies provide a comprehensive telemetry stack: OTLP exporters for traces/metrics/logs via both gRPC and HTTP, stdout exporters for development, Prometheus exporter for metrics, and the otelslog bridge for slog integration. Version consistency across the OTEL ecosystem (v1.39.0 for core SDK, v0.15.0 for log components) looks correct.
The
go.opentelemetry.io/auto/sdkv1.2.1 dependency is appropriate: it fixes the 32-bit (386) integer bounding issues present in v1.2.0, which is important given your release workflow builds for the 386 architecture.main_test.go (1)
1-1: LGTM!The copyright year update and error message pluralization are appropriate minor maintenance changes.
Also applies to: 382-382, 426-426
helm/templates/ingress.yaml (2)
1-4: LGTM!The SPDX header wrapped in Helm template comments is consistent with other templates in the PR. The YAMLlint syntax error is a false positive—Helm template delimiters (
{{- /* ... */ -}}) are valid and stripped during rendering.
24-45: LGTM!The indentation adjustments for TLS hosts, secretName, and ingress rules are consistent and the resulting Ingress resource structure is correct.
helm/charts/monitoring/values.yaml (2)
13-13: Security note: Wide-open Grafana access CIDR.
allowedAccessCidr: 0.0.0.0/0permits access from any IP address. Based on learnings, this monitoring chart is designed as a development/testing convenience, not for production. Consider adding a comment here clarifying this is intentional for dev environments only.
33-53: Dev/testing configuration noted.Persistence is disabled and retention periods are short (24h metrics, 1h logs/traces). Based on learnings, this is intentional for development/testing convenience. For production deployments, users should:
- Enable
persistentVolume.enabled: truewith appropriate storage- Configure longer retention periods
Consider adding a comment at the top of this file documenting these constraints.
helm/charts/monitoring/Chart.yaml (1)
4-18: LGTM!Chart metadata is well-structured with appropriate versioning and description. The subchart design cleanly combines monitoring components.
helm/templates/serviceaccount.yaml (1)
1-17: LGTM!The ServiceAccount template is well-structured with proper conditional rendering, label inclusion, and optional annotations. The
automountServiceAccountTokenfield correctly defaults to a configurable value.Note: The YAMLlint error on line 1 is a false positive—Helm template comment syntax
{{- /* ... */ -}}is valid and commonly used for SPDX headers.helm/charts/monitoring/templates/networkpolicy.yaml (2)
6-19: LGTM on Alloy ingress policy.The policy correctly restricts Alloy access to pods within the same namespace using an empty
podSelector. The YAMLlint spacing warnings are stylistic and common in Helm templates.
21-45: LGTM on Victoria access policy.The policy correctly uses
matchExpressionsto target multiple Victoria components and restricts ingress to only Alloy and Grafana pods.helm/Chart.yaml (2)
20-23: LGTM on monitoring dependency.The conditional dependency pattern using
monitoring.enabledis correct and follows Helm best practices for optional subcharts.
25-27: LGTM on maintainers section.Good addition of maintainer information for chart ownership clarity.
helm/charts/monitoring/templates/alloy_configmap.yaml (1)
35-48: Acknowledge: TLS disabled for internal traffic.The
tls { insecure = true }setting for VictoriaLogs and VictoriaTraces exporters is acceptable for internal cluster communication in a dev/testing monitoring chart. For production use, consider enabling TLS or documenting this as a required adjustment.helm/templates/deployment.yaml (3)
48-50: LGTM on metrics port addition.Adding a dedicated metrics port enables Prometheus-style scraping alongside the OTLP push model, providing flexibility in how metrics are collected.
51-66: LGTM on OTEL configuration block.The conditional OTEL environment variables follow OpenTelemetry SDK conventions. The
$alloyHostderivation correctly falls back to<release>-alloywhen no fullnameOverride is configured.Minor note: The
toYaml | fromYamlpattern on line 54 is a known workaround for Helm'sdigfunction with complex value structures. It works correctly here.
67-76: LGTM on probes and volumeMounts.The indentation changes correctly align with the container structure, and the probe/resource configurations remain properly templated.
helm/templates/_helpers.tpl (1)
18-29: LGTM! Lowercase normalization for Kubernetes resource names.The refactoring to consistently apply
| lowerto all fullname outputs is correct for Kubernetes DNS naming compliance (RFC 1123). The introduction of$relNamevariable improves code clarity.helm/charts/monitoring/templates/_helpers.tpl (1)
9-67: LGTM! Well-structured Helm helper templates.The template definitions follow Helm best practices:
- Consistent
| lowernormalization for DNS-compliant resource names- Proper 63-character truncation for Kubernetes naming limits
- Standard label and service account patterns
Based on learnings, this monitoring chart is designed as a development/testing convenience feature, and the template structure appropriately supports that use case.
main.go (5)
123-143: Well-structured deprecation handling for trace-endpoint.The function correctly:
- Warns when the deprecated flag conflicts with the environment variable
- Sets the environment variable to maintain backward compatibility
- Emits a deprecation warning to guide users toward the new approach
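A rough sketch of that bridge pattern (illustrative names; the real code may resolve a flag/environment conflict differently):

```go
package main

import (
	"flag"
	"log/slog"
	"os"
)

// bridgeTraceEndpointFlag translates a deprecated CLI flag into the OTEL
// environment variable, warning when the two disagree.
func bridgeTraceEndpointFlag(traceEndpoint string) {
	const envKey = "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"

	if traceEndpoint == "" {
		return
	}
	if existing, ok := os.LookupEnv(envKey); ok && existing != traceEndpoint {
		slog.Warn("flag conflicts with environment, keeping the environment value",
			"flag", traceEndpoint, envKey, existing)
		return
	}
	if err := os.Setenv(envKey, traceEndpoint); err != nil {
		slog.Warn("could not set environment variable", "key", envKey, "error", err)
	}
	slog.Warn("-trace-endpoint is deprecated, set " + envKey + " instead")
}

func main() {
	endpoint := flag.String("trace-endpoint", "", "deprecated: OTLP trace endpoint")
	flag.Parse()
	bridgeTraceEndpointFlag(*endpoint)
}
```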
145-146: ExportingErrConversionis a breaking change if this was previously consumed externally.Renaming from
errConversiontoErrConversionexports the error, which is appropriate for a sentinel error. However, verify that no external consumers were relying on the unexported symbol via reflection or error message matching.
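For context, the usual reason to export such a sentinel is shown below (illustrative example, not the project's parsing code): callers match with errors.Is instead of comparing error strings.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// ErrConversion is an exported sentinel error for wrapping conversion failures.
var ErrConversion = errors.New("conversion error")

func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("%w: %q is not a number", ErrConversion, s)
	}
	return p, nil
}

func main() {
	if _, err := parsePort("eight-thousand"); errors.Is(err, ErrConversion) {
		fmt.Println("caught:", err)
	}
}
```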
248-254: LGTM! Proper deferred OTEL shutdown with timeout context.The shutdown pattern correctly:
- Uses a separate context with timeout to avoid blocking indefinitely
- Cancels the context after shutdown
- Logs warnings on shutdown errors without failing the process
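The pattern in miniature (stand-in shutdown function, standard library only):

```go
package main

import (
	"context"
	"log/slog"
	"time"
)

func main() {
	// otelShutdown stands in for the function returned by the OTEL setup.
	otelShutdown := func(ctx context.Context) error { return nil }

	defer func() {
		// Separate context so flushing telemetry cannot block forever.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		if err := otelShutdown(ctx); err != nil {
			slog.Warn("telemetry shutdown failed", "error", err) // warn, do not fail
		}
	}()

	// ... run the server ...
}
```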
339-340: No action needed —serveMetricsalready handles nilmetricHandlergracefully. The function checks for nil at line 512 and returns early if bothmetricHandleris nil andenablePprofis false. Additionally, line 531 conditionally registers the metrics handler only if it's non-nil, with appropriate logging for both cases.
234-259:serverShutdownTimeoutis properly defined and suitable for OTEL shutdown.
serverShutdownTimeoutis defined inserver.goas5 * time.Secondwith clear documentation. The 5-second timeout falls within the appropriate range for gracefully flushing telemetry data.
| dependencies: | ||
| - name: grafana | ||
| version: 10.5.5 | ||
| repository: https://grafana.github.io/helm-charts | ||
|
|
||
| - name: alloy | ||
| version: 1.5.1 | ||
| repository: https://grafana.github.io/helm-charts | ||
|
|
||
| - name: victoria-metrics-single | ||
| version: 0.28.0 | ||
| repository: https://victoriametrics.github.io/helm-charts/ | ||
| alias: metrics | ||
|
|
||
| - name: victoria-logs-single | ||
| version: 0.11.23 | ||
| repository: https://victoriametrics.github.io/helm-charts/ | ||
| alias: logs | ||
|
|
||
| - name: victoria-traces-single | ||
| version: 0.0.5 | ||
| repository: https://victoriametrics.github.io/helm-charts/ | ||
| alias: traces |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify that the specified Helm chart versions exist in their repositories
echo "=== Checking grafana chart ==="
helm repo add grafana https://grafana.github.io/helm-charts 2>/dev/null || true
helm search repo grafana/grafana --version 10.5.5 2>/dev/null | grep -q "10.5.5" && echo "✓ grafana 10.5.5 found" || echo "✗ grafana 10.5.5 NOT found"
echo -e "\n=== Checking alloy chart ==="
helm search repo grafana/alloy --version 1.5.1 2>/dev/null | grep -q "1.5.1" && echo "✓ alloy 1.5.1 found" || echo "✗ alloy 1.5.1 NOT found"
echo -e "\n=== Checking VictoriaMetrics charts ==="
helm repo add vm https://victoriametrics.github.io/helm-charts/ 2>/dev/null || true
helm search repo vm/victoria-metrics-single --version 0.28.0 2>/dev/null | grep -q "0.28.0" && echo "✓ victoria-metrics-single 0.28.0 found" || echo "✗ victoria-metrics-single 0.28.0 NOT found"
helm search repo vm/victoria-logs-single --version 0.11.23 2>/dev/null | grep -q "0.11.23" && echo "✓ victoria-logs-single 0.11.23 found" || echo "✗ victoria-logs-single 0.11.23 NOT found"
helm search repo vm/victoria-traces-single --version 0.0.5 2>/dev/null | grep -q "0.0.5" && echo "✓ victoria-traces-single 0.0.5 found" || echo "✗ victoria-traces-single 0.0.5 NOT found"
Repository: AlphaOne1/sonicweb
Length of output: 339
All dependency versions are invalid and do not exist in their repositories.
The chart declares versions that cannot be resolved:
- grafana 10.5.5 — not found
- alloy 1.5.1 — not found
- victoria-metrics-single 0.28.0 — not found
- victoria-logs-single 0.11.23 — not found
- victoria-traces-single 0.0.5 — not found
Update each dependency version to a version that exists in the upstream repositories. Verify the updated versions are compatible with the chart's configuration and values.
🤖 Prompt for AI Agents
In @helm/charts/monitoring/Chart.yaml around lines 20 - 42, The declared
dependency versions in Chart.yaml are invalid; update the version fields for
grafana, alloy, victoria-metrics-single, victoria-logs-single, and
victoria-traces-single to actual published versions from their respective
repositories (use helm repo update and helm search repo or consult the repo
index.yaml to pick matching releases), then run helm dependency update and test
the chart to ensure the new versions are compatible with your chart values and
templates.
The versions do indeed exist and the chart can be built successfully.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -e
echo "=== Adding and updating Helm repositories ==="
helm repo add grafana https://grafana.github.io/helm-charts --force-update
helm repo add vm https://victoriametrics.github.io/helm-charts/ --force-update
helm repo update
echo -e "\n=== Checking grafana chart version 10.5.5 ==="
helm search repo grafana/grafana --versions | grep "10.5.5" || echo "Not found in search"
echo -e "\n=== Checking alloy chart version 1.5.1 ==="
helm search repo grafana/alloy --versions | grep "1.5.1" || echo "Not found in search"
echo -e "\n=== Checking victoria-metrics-single version 0.28.0 ==="
helm search repo vm/victoria-metrics-single --versions | grep "0.28.0" || echo "Not found in search"
echo -e "\n=== Checking victoria-logs-single version 0.11.23 ==="
helm search repo vm/victoria-logs-single --versions | grep "0.11.23" || echo "Not found in search"
echo -e "\n=== Checking victoria-traces-single version 0.0.5 ==="
helm search repo vm/victoria-traces-single --versions | grep "0.0.5" || echo "Not found in search"
echo -e "\n=== Attempting to build dependencies from the monitoring chart directory ==="
cd helm/charts/monitoring
helm dependency update 2>&1 || echo "Dependency update failed"
Length of output: 88
@AlphaOne1, I apologize for the incorrect review comment. You're right—my initial verification script appears to have had issues with the repository synchronization or search. Since you've confirmed that all the dependency versions exist and the chart builds successfully, my comment was mistaken.
Thank you for the correction. The Chart.yaml dependency declarations are valid.
🧠 Learnings used
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:17.035Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @go.mod:
- Around line 14-30: The go.mod entry for the OpenTelemetry bridge
"go.opentelemetry.io/contrib/bridges/otelslog" is pinned at v0.14.0 and must be
bumped to at least v0.15.0 (recommend v0.19.0 to match other otel v1.39.0
packages) to avoid API/attribute mismatches; update the version string for
otelslog in go.mod to v0.19.0, then run module update (go get ./... or go get
go.opentelemetry.io/contrib/bridges/otelslog@v0.19.0) and go mod tidy to refresh
go.sum and ensure no incompatible transitive versions remain.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (3)
- .github/workflows/scorecard.yml
- .github/workflows/security.yml
- go.mod
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/scorecard.yml
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:17.035Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
📚 Learning: 2025-09-14T02:05:57.237Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 176
File: go.mod:46-46
Timestamp: 2025-09-14T02:05:57.237Z
Learning: The go.opentelemetry.io/auto/sdk v1.2.0 dependency causes build errors on 386 architecture. Version v1.1.0 should be used instead to maintain compatibility with 386 builds in multi-arch Docker configurations.
Applied to files:
go.mod
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (3)
.github/workflows/security.yml (1)
1-1: LGTM! Routine SPDX copyright year update to 2026. No functional changes.
go.mod (2)
39-84: The updated indirect dependencies are at secure versions that include patches for known vulnerabilities. Specifically:
- google.golang.org/grpc v1.78.0: Contains fixes for CVE-2023-44487 (HTTP/2 rapid reset) and CVE-2024-7246 (HPACK poisoning)
- google.golang.org/protobuf v1.36.11: Contains fixes for CVE-2024-24786 (protojson DoS) and other known parsing issues, patched since v1.33.0
- github.com/prometheus/common v0.67.5: No specific CVE identified
- grpc-gateway v2.27.4 and other transitive dependencies: No public CVEs identified; security depends on upstream gRPC/protobuf versions which are patched here
No security advisory actions needed.
64-64: No action needed—v1.2.1 already includes the fix for 386 architecture compatibility.
The uint32 bounding issue on 32-bit architectures that affected v1.2.0 was fixed in v1.2.1 (see release notes for PR #2810). The current dependency version is correct.
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Actionable comments posted: 0
🧹 Nitpick comments (3)
helm/charts/monitoring/templates/grafana_datasources.yaml (1)
6-30: LGTM! Well-structured Grafana datasources ConfigMap.
The ConfigMap correctly uses:
- The `grafana_datasource: "1"` label for sidecar discovery
- Consistent URL construction via the `monitoring.fullname` helper
- Standard VictoriaMetrics ports (8428, 9428, 10428)
- Appropriate `access: proxy` mode for server-side access
Consider adding explicit `uid` fields to each datasource for stable references in dashboards (e.g., `uid: victoriametrics`). Without explicit UIDs, Grafana auto-generates them, which can cause dashboard provisioning issues if datasources are recreated.
💡 Optional: Add stable UIDs
```diff
 datasources:
   - name: VictoriaMetrics
     type: prometheus
+    uid: victoriametrics
     url: http://{{ include "monitoring.fullname" . }}-metrics-server:8428
     access: proxy
     isDefault: true
   - name: VictoriaLogs
     type: victoriametrics-logs-datasource
+    uid: victorialogs
     url: http://{{ include "monitoring.fullname" . }}-logs-server:9428
     access: proxy
   - name: VictoriaTraces
     type: jaeger
+    uid: victoriatraces
     url: http://{{ include "monitoring.fullname" . }}-traces-server:10428/select/jaeger
     access: proxy
```
logging.go (1)
58-64: Consider making a defensive copy before filtering.
`slices.DeleteFunc` modifies the input slice in-place. When callers use the spread operator (e.g., `NewMultiHandler(existingSlice...)`), their original slice's backing array is mutated. Making a copy first avoids this subtle side effect.
♻️ Suggested defensive copy
```diff
 func NewMultiHandler(handlers ...slog.Handler) *MultiHandler {
-    return &MultiHandler{handlers: slices.DeleteFunc(
-        handlers,
+    h := slices.Clone(handlers)
+    return &MultiHandler{handlers: slices.DeleteFunc(
+        h,
         func(h slog.Handler) bool { return h == nil })}
 }
```
helm/values.yaml (1)
55-68: Fix inconsistent indentation in securityContext block.
The `capabilities` block uses 2-space indentation while other blocks use 4-space indentation, and line 65 has inconsistent leading whitespace.
♻️ Suggested fix for consistent indentation
```diff
 securityContext:
   capabilities:
-  drop:
-  - ALL
+    drop:
+      - ALL
   allowPrivilegeEscalation: false
   seccompProfile:
     type: RuntimeDefault
-  # readOnlyRootFilesystem: true
+    # readOnlyRootFilesystem: true
   runAsNonRoot: true
   runAsUser: 65532
   runAsGroup: 65532
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- README.md
- helm/charts/monitoring/templates/_helpers.tpl
- helm/charts/monitoring/templates/grafana_datasources.yaml
- helm/templates/_helpers.tpl
- helm/values.yaml
- logging.go
🚧 Files skipped from review as they are similar to previous changes (1)
- helm/charts/monitoring/templates/_helpers.tpl
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:25.785Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
📚 Learning: 2025-09-21T14:10:51.227Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 193
File: governance.md:1-4
Timestamp: 2025-09-21T14:10:51.227Z
Learning: In the SonicWeb repository, AlphaOne1 prefers to maintain original author attribution in SPDX headers even when it differs from the standardized format used elsewhere in the repo, particularly for governance and foundational documents.
Applied to files:
README.mdhelm/values.yaml
📚 Learning: 2025-09-17T17:34:47.959Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 181
File: docker_root/index.html:79-85
Timestamp: 2025-09-17T17:34:47.959Z
Learning: In the sonicweb project, AlphaOne1 controls all Markdown content at build time (like README.md in Docker images), so DOMPurify sanitization is not required for their controlled markdown rendering scenarios. They acknowledge DOMPurify should be used with unknown/untrusted Markdown content.
Applied to files:
README.md
📚 Learning: 2026-01-10T17:00:25.785Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:25.785Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
Applied to files:
helm/values.yaml
🪛 YAMLlint (1.37.1)
helm/charts/monitoring/templates/grafana_datasources.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (14)
helm/charts/monitoring/templates/grafana_datasources.yaml (1)
1-4: YAMLlint false positive - Helm template syntax is valid.
The static analysis error from YAMLlint is a false positive. The `{{- /* ... */ -}}` syntax is valid Helm/Go template comment syntax. YAMLlint cannot parse Helm templates and incorrectly flags the template directive as invalid YAML.
logging.go (3)
66-74: LGTM! Correct semantic for a composite handler — returns true if any underlying handler is enabled, with efficient short-circuit evaluation.
76-87: LGTM! Cloning the record for each handler is the right approach to prevent handlers from interfering with each other. Error aggregation via `errors.Join` is clean and nil-safe.
89-107: LGTM! Both methods correctly implement the immutable pattern expected by `slog.Handler`, returning new `MultiHandler` instances with propagated attributes/groups. Pre-allocating slice capacity is a nice touch.
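Taken together, the three comments above describe a standard slog fan-out handler. The sketch below illustrates that shape under the assumption of a plain `handlers` slice; it is a reconstruction for readability, not the project's exact implementation.

```go
// Reconstruction for illustration; the real MultiHandler in logging.go may differ.
package main

import (
	"context"
	"errors"
	"log/slog"
	"os"
)

type MultiHandler struct {
	handlers []slog.Handler
}

// Enabled returns true as soon as any underlying handler accepts the level.
func (m *MultiHandler) Enabled(ctx context.Context, level slog.Level) bool {
	for _, h := range m.handlers {
		if h.Enabled(ctx, level) {
			return true
		}
	}
	return false
}

// Handle clones the record per handler so mutations cannot leak between them,
// and aggregates failures with errors.Join (nil when everything succeeds).
func (m *MultiHandler) Handle(ctx context.Context, r slog.Record) error {
	var errs []error
	for _, h := range m.handlers {
		if err := h.Handle(ctx, r.Clone()); err != nil {
			errs = append(errs, err)
		}
	}
	return errors.Join(errs...)
}

// WithAttrs and WithGroup return new MultiHandler values, keeping the type immutable.
func (m *MultiHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	hs := make([]slog.Handler, 0, len(m.handlers))
	for _, h := range m.handlers {
		hs = append(hs, h.WithAttrs(attrs))
	}
	return &MultiHandler{handlers: hs}
}

func (m *MultiHandler) WithGroup(name string) slog.Handler {
	hs := make([]slog.Handler, 0, len(m.handlers))
	for _, h := range m.handlers {
		hs = append(hs, h.WithGroup(name))
	}
	return &MultiHandler{handlers: hs}
}

func main() {
	m := &MultiHandler{handlers: []slog.Handler{slog.NewTextHandler(os.Stderr, nil)}}
	slog.New(m).Info("fan-out example", "key", "value")
}
```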
README.md (1)
2-2: LGTM! The copyright year update to 2026 is appropriate for the current date.
helm/templates/_helpers.tpl (3)
1-4: LGTM! SPDX header year update and comment formatting changes are appropriate.
9-11: Good addition of lowercase normalization.
Adding `| lower` ensures the chart name conforms to DNS-1123 label requirements, which mandate lowercase alphanumeric characters. This is a best practice for Kubernetes resource naming.
18-30: Well-structured refactoring with consistent lowercasing.
The introduction of the `$relName` variable improves readability, and applying `| lower` consistently across all code paths ensures DNS-compliant names regardless of the override configuration. The logic correctly handles the case where the release name already contains the chart name.
helm/values.yaml (6)
1-1: LGTM! SPDX header year update is appropriate.
36-46: Good scaffolding for service account configuration.
The defaults (`create: false`, `automount: false`) follow the principle of least privilege by not creating unnecessary service accounts or mounting API credentials by default.
70-77: Service configuration with dedicated metrics port looks good.
Separating the operational port (8080) from the metrics port (8081) is a best practice for observability, enabling independent scraping without affecting application traffic.
96-106: Reasonable resource defaults with helpful documentation.
The resource requests are conservative and appropriate for a lightweight web service. The commented-out limits with explanatory note give operators flexibility to configure based on their namespace constraints.
110-118: Root path is appropriate for health probes.
The application serves files from the root path `/` by default through `http.FileServerFS`, making it an ideal endpoint for both liveness and readiness probes. The FileServer handler is registered at the BasePath (defaulting to `/`), ensuring this endpoint is available during normal operation.
11-20: Monitoring configuration is correct; empty adminPassword is the secure approach.
The empty `adminPassword` with the comment requiring `--set` is appropriate. When left empty, Grafana's Helm chart 10.5.5 automatically generates a random admin password and stores it in a Kubernetes Secret. Users can retrieve it using the standard kubectl command shown in the chart NOTES, avoiding hardcoded or predictable defaults. This aligns well with the dev/testing nature of the monitoring subchart documented above.
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Actionable comments posted: 0
🧹 Nitpick comments (3)
helm/values.yaml (2)
15-20: Consider the empty password default behavior.
While the comment clearly states the password must be provided, an empty default for `adminPassword` may allow deployments without a password if Grafana's chart permits it. Based on learnings, this chart is for dev/testing, but you may want to consider using a placeholder like `"CHANGEME"` or adding validation in the templates to fail explicitly when monitoring is enabled without a password set.
58-68: Minor indentation inconsistency.
Line 65 (`# readOnlyRootFilesystem: true`) uses 2-space indentation while surrounding properties use 4 spaces. This doesn't affect functionality but could be aligned for consistency.
Otherwise, excellent security configuration with dropped capabilities, disabled privilege escalation, seccomp profile, and non-root user.
Suggested fix
```diff
   seccompProfile:
     type: RuntimeDefault
-  # readOnlyRootFilesystem: true
+    # readOnlyRootFilesystem: true
   runAsNonRoot: true
```
logging.go (1)
58-66: Consider defensive copy to avoid mutating caller's slice.
`slices.DeleteFunc` modifies the slice in-place. If a caller passes `NewMultiHandler(existingSlice...)`, the original slice could be corrupted when nil handlers are present. A defensive copy prevents this edge case.
♻️ Suggested fix
```diff
 func NewMultiHandler(handlers ...slog.Handler) *MultiHandler {
-    tmp := slices.DeleteFunc(
-        handlers,
+    tmp := slices.DeleteFunc(
+        slices.Clone(handlers),
         func(h slog.Handler) bool { return h == nil })
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- helm/charts/monitoring/templates/grafana_datasources.yaml
- helm/values.yaml
- logging.go
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:25.785Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
📚 Learning: 2026-01-10T17:00:25.785Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:25.785Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
Applied to files:
helm/values.yaml
📚 Learning: 2025-09-21T14:10:51.227Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 193
File: governance.md:1-4
Timestamp: 2025-09-21T14:10:51.227Z
Learning: In the SonicWeb repository, AlphaOne1 prefers to maintain original author attribution in SPDX headers even when it differs from the standardized format used elsewhere in the repo, particularly for governance and foundational documents.
Applied to files:
helm/values.yaml
🪛 YAMLlint (1.37.1)
helm/charts/monitoring/templates/grafana_datasources.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🔇 Additional comments (8)
helm/charts/monitoring/templates/grafana_datasources.yaml (1)
1-33: LGTM! The ConfigMap template is well-structured:
- Proper use of the `monitoring.fullname` helper for consistent naming across services
- The `grafana_datasource: "1"` label correctly enables Grafana sidecar auto-discovery
- Datasource types are appropriate: `prometheus` for VictoriaMetrics, `victoriametrics-logs-datasource` for logs, and `jaeger` for traces (VictoriaTraces exposes a Jaeger-compatible API)
- Service URLs and ports align with VictoriaMetrics stack defaults
Note: The YAMLlint syntax error is a false positive—it cannot parse Helm/Go template syntax (`{{-`). This is expected behavior for template files.
Based on learnings, this monitoring chart is intended as a development/testing convenience feature rather than a production-ready solution.
helm/values.yaml (3)
36-46: LGTM! Good security defaults with `create: false` and `automount: false`, following the principle of least privilege by not automatically mounting ServiceAccount API credentials.
71-77: LGTM! Good choice using unprivileged ports (8080, 8081) which aligns with the non-root security configuration. Separating metrics to its own port is a best practice for observability.
96-106: LGTM! Sensible resource requests as a baseline, with helpful documentation explaining the rationale. The commented-out limits with guidance about namespace-level constraints are a thoughtful approach for flexibility.
logging.go (4)
68-76: LGTM! Correct fan-out semantic: returns true if any underlying handler is enabled for the level, with proper short-circuit evaluation.
78-89: LGTM! Record cloning prevents mutation side effects between handlers, and error aggregation with `errors.Join` is idiomatic.
91-99: LGTM! Correctly propagates attributes to all underlying handlers and returns a new immutable `MultiHandler`.
101-109: LGTM! Correctly propagates the group name to all underlying handlers and maintains immutability.
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@logging.go`:
- Around line 58-67: In NewMultiHandler, slices.DeleteFunc returns a new slice
and its result must be assigned — the current call discards the result so nil
handlers remain in tmp causing panics in Enabled/Handle/WithAttrs/WithGroup; fix
by assigning tmp = slices.DeleteFunc(tmp, func(h slog.Handler) bool { return h
== nil }) (or replace with an explicit filter that builds a new slice), so tmp
contains only non-nil handlers before returning &MultiHandler{handlers: tmp}.
🧹 Nitpick comments (2)
helm/values.yaml (2)
58-68: Security context follows container hardening best practices.
Dropping all capabilities, disabling privilege escalation, and running as non-root with a specific unprivileged user (65532) are all correct.
Consider enabling `readOnlyRootFilesystem: true` (line 65) if the application doesn't require write access to the container filesystem—this provides an additional layer of defense against container compromise.
96-106: Resource requests are reasonable for a lightweight application.
The 100m CPU and 20Mi memory requests are modest and appropriate defaults. The commented-out limits with explanation provide good guidance.
For production deployments, consider documenting recommended limit values in the comments or README, as unbounded resource usage can impact other workloads on shared clusters.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- helm/values.yaml
- logging.go
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:25.785Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
📚 Learning: 2026-01-10T17:00:25.785Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 213
File: helm/charts/monitoring/values.yaml:85-88
Timestamp: 2026-01-10T17:00:25.785Z
Learning: The monitoring chart in helm/charts/monitoring/ is designed as a development/testing convenience feature, not as a production-ready monitoring solution. For production use, adjustments are required including enabling persistent volumes and configuring appropriate retention periods.
Applied to files:
helm/values.yaml
📚 Learning: 2025-09-21T14:10:51.227Z
Learnt from: AlphaOne1
Repo: AlphaOne1/sonicweb PR: 193
File: governance.md:1-4
Timestamp: 2025-09-21T14:10:51.227Z
Learning: In the SonicWeb repository, AlphaOne1 prefers to maintain original author attribution in SPDX headers even when it differs from the standardized format used elsewhere in the repo, particularly for governance and foundational documents.
Applied to files:
helm/values.yaml
🪛 GitHub Actions: Security
logging.go
[error] 60-60: govet: unusedresult: result of slices.DeleteFunc call not used
🔇 Additional comments (7)
logging.go (4)
69-77: LGTM! The `Enabled` method correctly short-circuits when any underlying handler enables the given level. Logic is correct assuming the constructor fix is applied.
79-90: LGTM! Correctly clones the record for each handler to prevent mutation side effects, and properly aggregates errors via `errors.Join`.
92-100: LGTM! The method correctly propagates attributes to all underlying handlers and returns a new `MultiHandler`.
102-110: LGTM! The method correctly propagates the group name to all underlying handlers and returns a new `MultiHandler`.
helm/values.yaml (3)
36-46: LGTM! Service account configuration follows security best practices with `create: false` and `automount: false` as defaults, preventing unnecessary credential exposure.
71-77: Service port configuration is appropriate.
Using port 8080 (non-privileged) aligns well with the security context running as non-root. The separate `metricsPort: 8081` cleanly separates operational traffic from observability endpoints. Previous review concerns about breaking changes have been addressed.
15-20: Remove the incorrect suggestion about adding an alloy placeholder.
The monitoring configuration is correct as-is. The `enabled: false` default and password requirement are appropriate. The concern about a "missing alloy section" is unfounded—Alloy is configured within the monitoring subchart's own `helm/charts/monitoring/values.yaml` (lines 15-31), which includes all necessary ports and ConfigMap references. When monitoring is enabled, the subchart deploys with its own complete configuration; no explicit alloy section is needed in the main values.yaml.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Signed-off-by: Alexander Adam <alphaone23@gmail.com>
Summary by CodeRabbit
New Features
Chores
Documentation
✏️ Tip: You can customize this high-level summary in your review settings.