
Conversation


@renovate renovate bot commented Jun 14, 2025

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| openebs (source) | minor | `4.2.0` → `4.4.0` |

Release Notes

openebs/openebs (openebs)

v4.4.0

Compare Source

OpenEBS 4.4.0 Release Notes
Release Summary

OpenEBS version 4.4 introduces several functional fixes and new features focused on improving Data Security, User Experience, High Availability (HA), replica rebuilds, and overall stability. The key highlight is LocalPV LVM snapshot restore. In addition, the release includes various usability and functional fixes for the Mayastor, ZFS, LocalPV LVM and LocalPV Hostpath provisioners, along with documentation enhancements to help users and new contributors get started quickly.

Replicated Storage (Mayastor)
New Features and Enhancements
  • DiskPool Expansion
    It's now possible to expand a DiskPool's capacity by expanding the underlying storage device.

NOTE: As a precondition, you must create the DiskPool with sufficient metadata to accommodate future growth; please read more about this here.

  • Configurable ClusterSize
    You can now configure the cluster size when creating a pool - larger cluster sizes may be beneficial when using very large storage devices.

NOTE: As an initial limitation, volumes may not be placed across pools with different cluster sizes.

  • Pool Cordon
    Cordoning functionality has been extended to pools. This can be used to prevent new replicas from being created on a pool, and also as a way of migrating volume replicas off it via scale-up/scale-down operations.
  • Orphaned Retain Snapshot Delete
    Similar to volumes, when snapshots with retain mode are deleted, the underlying storage is kept by the provisioner and must be deleted with provisioner-specific commands.
    We've added a plugin sub-command to delete these orphaned snapshots safely.
  • Node Spread
    Node spread topology may now be used
  • Affinity Group ScaleDown
    Affinity group volumes may now be scaled down to 1 replica, provided the anti-affinity across nodes is not violated.
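
A minimal sketch of such a scale-down, assuming the kubectl-mayastor plugin's existing scale sub-command; the volume UUID is a placeholder and the exact syntax may differ between plugin versions.

```sh
# Sketch only: scale an affinity-group volume down to a single replica.
# The UUID is a placeholder; confirm the sub-command with `kubectl mayastor --help`.
kubectl mayastor scale volume 0c08667c-8b59-4d11-9192-b54e27e0ce0f 1
```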
Bug Fixes and Improvements
  • Update replica health as an atomic etcd transaction
  • Exit io-engine with error if the gRPC port is busy
  • Set PR_SET_IO_FLUSHER for io-engine to resolve potential deadlock
  • Don't let one bad nexus lock up the entire nexus subsystem
  • Clean up uuid from DISKS output uri
  • Honor stsAffinity on backup restores via external tools
  • Validate K8s secret in diskpool operator ahead of pool creation request
  • Allow pool creation with zfs volume paths
  • Added support for kubeconfig context switching (kubectl-mayastor plugin)
  • Fixed creating pools on very-slow/very-large storage devices
  • Use udev kernel monitor
  • Fixed a race condition where udev events were lost, leading to issues connecting NVMe devices
  • Fixed HA enablement on the latest rhel and derivatives
  • Fixed open permissions on call-home encryption dir
  • Configurable ports of services with hostNetwork
  • Add support for 1GiB hugepages
  • etcd dependency updated to 12.0.14
  • Use normalized etcdUrl in default etcd-probe init containers
  • Use correct grpc port in metrics exporter
  • Fix volume mkfs stuck on very large pools/volumes
  • Fix agent-core panic when scheduling replicas
  • Add default priority class to the daemon sets
Release Notes
Limitations
  • The Mayastor IO engine fully utilizes allocated CPU cores regardless of I/O load, running a poller at full speed.
  • A Mayastor DiskPool is limited to a single block device and cannot span multiple block devices.
  • The new at-rest encryption feature does not support rotating Data Encryption Keys (DEK).
  • Volume rebuilds are only performed on published volumes.
Known Issues
  • IO-Engine Pod Restarts
    • Under heavy I/O and during constant scaling up/down of volume replicas, the io-engine pod may restart occasionally.
  • fsfreeze Operation Failure
    • If a pod-based workload is scheduled on a node that reboots and the pod lacks a controller (such as a Deployment or StatefulSet), the volume unpublish operation might not be triggered.
    • This leads the control plane to assume the volume is still published, causing the fsfreeze operation to fail during snapshot creation.
      • Workaround: Recreate or reinstate the pod to ensure proper volume mounting.
  • Diskpool's backing device failure
    • If the backend device that hosts a diskpool runs into a fault or gets removed (e.g. cloud disk removal), the status of the diskpool and its hosted replicas isn't clearly updated to reflect the problem.
    • As a result, the failures aren't handled gracefully and the volume might remain Degraded for an extended period of time.
  • Extremely large pool undergoing dirty shutdown
    • In case of a dirty shutdown of an io-engine node hosting an extremely large pool (e.g. 10TiB or 20TiB), recovery of the pool takes a while after the node comes online.
LocalPV ZFS
New Features and Enhancements
  • Update Go runtime to 1.24.
    Bumps the Go runtime and all dependencies to their latest available releases.
  • Allow users to configure CPU and memory requests/limits for all zfs-node and zfs-controller containers via values.yaml, improving resource management and deployment flexibility
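
A minimal values.yaml sketch for the resource knobs above, using illustrative key names (zfsNode, zfsController); the actual keys and per-container structure should be checked against the zfs-localpv chart's values.yaml.

```yaml
# Illustrative only: key names are assumptions, verify against the chart's values.yaml.
zfsNode:
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      cpu: 200m
      memory: 256Mi
zfsController:
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      cpu: 200m
      memory: 256Mi
```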
Bug Fixes and Improvements
  • Removes encryption parameter handling from buildCloneCreateArgs() since clones automatically inherit encryption from the parent snapshot and the property cannot be set (it's read-only)
Continuous Integration and Maintenance
  • Staging CI
    Introduction of the staging CI, which enables creating a staging build for e2e testing before releasing; the artifacts are then copied over to the production build hosts.
Release Notes
LocalPV LVM
New Features and Enhancements
  • Snapshot restore
    LocalPV-LVM snapshots previously had limited capabilities; restoring a snapshot to a new volume is now supported (see the PVC sketch after this list).
  • ThinPool space reclamation
    LocalPV-LVM will clean up the thinpool LV after deleting the last thin volume of the thinpool.
  • Scheduler fixes and enhancements
    Records thinpool statistics in the lvmnode CR and fails CreateVolume requests fast if a thick PVC's size cannot be accommodated by any VG.
    Considers thinpool free space while scheduling thin PVCs in the SpaceWeighted algorithm.
  • Runtime improvements
    Updates the Go runtime, k8s modules, golint packages, etc. by @jochenseeber in openebs/lvm-localpv#416
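
A minimal sketch of a LocalPV-LVM snapshot restore using the standard CSI dataSource mechanism; the StorageClass and VolumeSnapshot names below are placeholders.

```yaml
# Sketch: create a new PVC from an existing VolumeSnapshot (names are placeholders).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: openebs-lvmpv          # assumed LVM StorageClass name
  dataSource:
    name: my-lvm-snapshot                  # existing VolumeSnapshot in the same namespace
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```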
Continuous Integration and Maintenance
  • Staging CI
    Introduction of the staging CI, which enables creating a staging build for e2e testing before releasing; the artifacts are then copied over to the production build hosts.
Release Notes
Known Issues
  • There's no unmap/reclaim for thin pool capacity.
    It is not tracked in the lvmnode, which may lead to unexpected behaviour when scheduling volumes.
    Read more about this here
LocalPV Hostpath
Release Notes
LocalPV RawFile
New Features and Enhancements
  • VolumeSnapshots based on the rawfile image
  • Volume Restore from snapshot
  • Volume Clone
Release Notes

Make sure you follow the install guide when upgrading.
Refer to the Rawfile v0.12.0 release for detailed changes.

Known Issues
  • Controller Pod Restart on Single Node Setup
    After upgrading, single node setups may face issues where the ZFS-localpv/LVM-localpv controller pod does not enter the Running state due to changes in the controller manifest (now a Deployment) and missing affinity rules.

    Workaround: Delete the old controller pod to allow the new pod to be scheduled correctly. This does not happen if upgrading from the previous release of ZFS-localpv/LVM-localpv.
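
A hedged sketch of the workaround; the namespace and name patterns are assumptions and depend on how the chart was installed.

```sh
# Locate the stale controller pod and delete it so the new Deployment-managed pod can be scheduled.
# Namespace and grep pattern are assumptions; adjust for your install.
kubectl -n openebs get pods | grep controller
kubectl -n openebs delete pod <old-controller-pod-name>
```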

Upgrade and Backward Incompatibilities
  • Kubernetes Requirement: Kubernetes 1.23 or higher is recommended.
  • Engine Compatibility: Upgrades to OpenEBS 4.4.0 are supported only for the following engines:
    • Local PV Hostpath
    • Local PV LVM
    • Local PV ZFS
    • Mayastor (from earlier editions, 3.10.x or below)
    • Local PV Rawfile

v4.3.3

Compare Source

This patch brings in a few fixes, as well as an update of the bitnami repo, which is needed. For more details see bitnami/charts#35164.

What's Changed

Full Changelog: openebs/openebs@v4.3.2...v4.3.3

v4.3.2

Compare Source

What's Changed

Full Changelog: openebs/openebs@v4.3.1...v4.3.2

v4.3.1

Compare Source

Fixes
  • #​3968: kubectl openebs upgrade fails for localpvs if mayastor is disabled. This is fixed. Detecting if Mayastor is enabled, was bugged. (@​niladrih, #​3967)
  • #​3892: openebs/openebs helm chart's pre-upgrade-job lacked helm values knobs for configuring ImagePullSecrets and Tolerations. This is fixed now. (@​nneram, #​3966)

Full Changelog: openebs/openebs@v4.3.0...v4.3.1

v4.3.0

Compare Source

OpenEBS 4.3.0 Release Notes
Release Summary

OpenEBS version 4.3 introduces several functional fixes and new features focused on improving Data Security, User Experience, High Availability (HA), replica rebuilds, and overall stability. The key highlights are Mayastor's support for at-rest data encryption and a new OpenEBS plugin that allows users to interact with all engines supplied by the OpenEBS project. In addition, the release includes various usability and functional fixes for the Mayastor, ZFS, LocalPV LVM and LocalPV Hostpath provisioners, along with documentation enhancements to help users and new contributors get started quickly.

Umbrella Features
  • Unified Plugin
    • With this umbrella plugin, OpenEBS users who have installed the cluster using the OpenEBS umbrella chart can interact with all engines, i.e. Mayastor, localpv-lvm, localpv-zfs and hostpath, through a single plugin: kubectl openebs.
  • One-Step Upgrade
    • All OpenEBS storage engines can now be upgraded using a unified umbrella upgrade process.
  • Supportability
    • Support bundle collection for all stable OpenEBS engines—LocalPV ZFS, LocalPV LVM, LocalPV HostPath, and Mayastor—is now supported via the kubectl openebs dump system command.
    • This unified approach enables comprehensive system state capture for efficient debugging and troubleshooting. Previously, support was limited to Mayastor through the kubectl-mayastor plugin.
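
The support-bundle command named above can be invoked directly; output-location and namespace flags vary by plugin version and are omitted here.

```sh
# Collect a support bundle covering all stable OpenEBS engines.
kubectl openebs dump system
```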
Replicated Storage (Mayastor)
New Feature
  • Support for at-rest data encryption
    OpenEBS offers support for data-at-rest encryption to help ensure the confidentiality of persistent data stored on disk.
    With this capability, any disk pool configured with a user-defined encryption key can host encrypted volume replicas.
    This feature is particularly beneficial in environments requiring compliance with regulatory or security standards.
Enhancements
  • Added support for IPv6.
  • Added support for formatOptions via storage class (see the StorageClass sketch after this list).
  • Prefers cordoned nodes while removing volume replicas, e.g. during volume scale-down.
  • We now restrict pool creation using non-persistent devlinks (/dev/sdX).
  • Users no longer have to recreate the StorageClass while restoring a volume from a thick snapshot. This fix is important for CSI-based backup operations.
  • Added new volume health information to better reflect the current state of the volume.
  • Added a plugin command to delete a volume. This is mainly applicable to PVCs with a Retain policy, where a user can end up in a situation where Mayastor has a volume without a corresponding PV object.
  • Avoid full rebuild if partial rebuild call fails due to the max rebuild limit.
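
A rough StorageClass sketch for the formatOptions enhancement; apart from formatOptions, which is named above, the provisioner and parameter names are assumptions based on typical Mayastor storage classes and should be checked against the Mayastor documentation.

```yaml
# Sketch only: parameter names other than formatOptions are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-ext4-custom
provisioner: io.openebs.csi-mayastor
parameters:
  repl: "2"
  protocol: nvmf
  fsType: ext4
  formatOptions: "-E lazy_itable_init=0"   # illustrative mkfs options
```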
Upgrading
  • Volume Health information now reflects the true status of the volume
    This means that a volume status may now be reported as Degraded whereas it would previously have been reported as Online. This has a particular impact on unpublished volumes (in other words, volumes which are not mounted or used by a pod), since volume rebuilds are currently not available for unpublished volumes.
    This behaviour can be reverted by setting a helm chart variable: agents.core.volumeHealth=false.
  • This version of the OpenEBS chart adds three new components out of the box: Loki, Minio and Alloy. This change is necessary for collecting debugging information and capturing cluster state. It includes the newer Loki stack, which can be deployed in an HA fashion given an object storage backend; Minio is the default option here. Users can choose to avoid Minio (or any object storage backend) and deploy Loki with filesystem storage, as defined here. The new Loki stack is enabled by default with 3 replicas of Loki and 3 replicas of Minio. This behaviour can be disabled by setting the helm chart variables loki.enabled=false and alloy.enabled=false (see the values sketch below).
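
A values sketch combining the chart variables named in this section; under the umbrella chart these keys may need to be nested beneath the relevant sub-chart key, so treat the layout as an assumption.

```yaml
# Revert the new volume-health reporting and disable the bundled Loki/Alloy stack.
agents:
  core:
    volumeHealth: false
loki:
  enabled: false
alloy:
  enabled: false
```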
Release Notes
Limitations
  • The Mayastor IO engine fully utilizes allocated CPU cores regardless of I/O load, running a poller at full speed.
  • A Mayastor DiskPool is limited to a single block device and cannot span multiple block devices.
  • The new at-rest encryption feature does not support rotating Data Encryption Keys (DEK).
  • Volume rebuilds are only performed on published volumes.
Known Issues
  • DiskPool Capacity Expansion
    • Mayastor does not support the capacity expansion of DiskPools as of v2.9.0.
  • IO-Engine Pod Restarts
    • Under heavy I/O and during constant scaling up/down of volume replicas, the io-engine pod may restart occasionally.
  • fsfreeze Operation Failure
    • If a pod-based workload is scheduled on a node that reboots and the pod lacks a controller (such as a Deployment or StatefulSet), the volume unpublish operation might not be triggered.
    • This leads the control plane to assume the volume is still published, causing the fsfreeze operation to fail during snapshot creation.
      • Workaround: Recreate or reinstate the pod to ensure proper volume mounting.
  • Diskpool's backing device failure
    • If the backend device that hosts a diskpool runs into a fault or gets removed (e.g. cloud disk removal), the status of the diskpool and its hosted replicas isn't clearly updated to reflect the problem.
    • As a result, the failures aren't handled gracefully and the volume might remain Degraded for an extended period of time.
  • Extremely large pool undergoing dirty shutdown
    • In case of a dirty shutdown of an io-engine node hosting an extremely large pool (e.g. 10TiB or 20TiB), recovery of the pool hangs after the node comes online.
  • Extremely large filesystem volumes fail to provision
    • Filesystem volumes with sizes in the terabyte range (e.g. more than 15TiB) fail to provision successfully because filesystem formatting hangs.
Local Storage (LocalPV ZFS, LocalPV LVM, LocalPV Hostpath)
Fixes and Enhancements
  • LocalPV ZFS Enhancements

    • Introduced a backup garbage collector in the controller to automatically clean up stale or orphaned backup resources.
    • Updated CSI spec and associated sidecar containers to CSI v1.11.
    • Added improved and consistent labeling, including logging-related labels, to enhance Helm chart maintainability and observability.
  • LocalPV ZFS Fixes

    • Fixed an issue where the quota property was not correctly retained during upgrades.
    • Ensured backward compatibility of quotatype values during volume restores.
    • Fixed a crash where unhandled errors in the CSI NodeGetInfo call could cause the controller to exit unexpectedly.
    • The gRPC server now gracefully handles SIGTERM and SIGINT signals for clean exit.
    • The agent now leverages the OpenEBS lib-csi Kubernetes client to reliably load kubeconfig from multiple locations.
    • The CLI flag --plugin now only accepts controller and agent, disallowing invalid values like node.
  • LocalPV LVM Enhancements

    • Added support for formatOptions via storage class. These options will be used when formatting the device with the mkfs tool.
    • Excludes Kubernetes cordoned nodes while provisioning volumes.
    • Updated CSI spec to v1.9 and associated sidecar images.
  • LocalPV Hostpath Enhancements

    • Fixed a scenario where a pod crashes when creating an init pod; new pods always failed because the init pod already existed.
    • Added support to specify file permissions for PVC hostpaths.
Release Notes
Limitations
  • LocalPV-LVM
    LVM-localpv has support for volume snapshots, but it does not support restoring from a snapshot yet; this is on our roadmap.
Known Issues
  • Controller Pod Restart on Single Node Setup
    After upgrading, single node setups may face issues where the ZFS-localpv/LVM-localpv controller pod does not enter the Running state due to changes in the controller manifest (now a Deployment) and missing affinity rules.

    Workaround: Delete the old controller pod to allow the new pod to be scheduled correctly. This does not happen if upgrading from the previous release of ZFS-localpv/LVM-localpv.

  • Thin pool issue with LocalPV-LVM
    We do not unmap/reclaim thin pool capacity. It is also not tracked in the lvmnode CR, which can cause unexpected behaviour when scheduling volumes. Refer to "When using lvm thinpool type, csistoragecapacities calculation is incorrect" (openebs/lvm-localpv#382).

Upgrade and Backward Incompatibilities
  • Kubernetes Requirement: Kubernetes 1.23 or higher is recommended.
  • Engine Compatibility: Upgrades to OpenEBS 4.3.0 are supported only for the following engines:
    • Local PV Hostpath
    • Local PV LVM
    • Local PV ZFS
    • Mayastor (from earlier editions, 3.10.x or below)

Configuration

📅 Schedule: Branch creation - "every weekend" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


github-actions bot commented Jun 14, 2025

--- kubernetes/apps/storage/openebs/app Kustomization: flux-system/openebs HelmRelease: storage/openebs

+++ kubernetes/apps/storage/openebs/app Kustomization: flux-system/openebs HelmRelease: storage/openebs

@@ -13,13 +13,13 @@

     spec:
       chart: openebs
       sourceRef:
         kind: HelmRepository
         name: openebs
         namespace: flux-system
-      version: 4.2.0
+      version: 4.4.0
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true


github-actions bot commented Jun 14, 2025

--- HelmRelease: storage/openebs Deployment: storage/openebs-localpv-provisioner

+++ HelmRelease: storage/openebs Deployment: storage/openebs-localpv-provisioner

@@ -25,18 +25,19 @@

         heritage: Helm
         app: localpv-provisioner
         release: openebs
         component: localpv-provisioner
         openebs.io/component-name: openebs-localpv-provisioner
         name: openebs-localpv-provisioner
+        openebs.io/logging: 'true'
     spec:
       serviceAccountName: openebs-localpv-provisioner
       securityContext: {}
       containers:
       - name: openebs-localpv-provisioner
-        image: quay.io/openebs/provisioner-localpv:4.2.0
+        image: quay.io/openebs/provisioner-localpv:4.4.0
         imagePullPolicy: IfNotPresent
         resources: null
         env:
         - name: OPENEBS_NAMESPACE
           valueFrom:
             fieldRef:
@@ -51,13 +52,13 @@

               fieldPath: spec.serviceAccountName
         - name: OPENEBS_IO_ENABLE_ANALYTICS
           value: 'true'
         - name: OPENEBS_IO_BASE_PATH
           value: /var/openebs/local
         - name: OPENEBS_IO_HELPER_IMAGE
-          value: quay.io/openebs/linux-utils:4.1.0
+          value: quay.io/openebs/linux-utils:4.3.0
         - name: OPENEBS_IO_HELPER_POD_HOST_NETWORK
           value: 'false'
         - name: OPENEBS_IO_INSTALLER_TYPE
           value: localpv-charts-helm
         - name: LEADER_ELECTION_ENABLED
           value: 'true'
--- HelmRelease: storage/openebs ServiceAccount: storage/openebs-pre-upgrade-hook

+++ HelmRelease: storage/openebs ServiceAccount: storage/openebs-pre-upgrade-hook

@@ -1,14 +0,0 @@

----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: openebs-pre-upgrade-hook
-  namespace: storage
-  labels:
-    app.kubernetes.io/managed-by: Helm
-    app.kubernetes.io/instance: openebs
-  annotations:
-    helm.sh/hook: pre-upgrade
-    helm.sh/hook-weight: '-2'
-    helm.sh/hook-delete-policy: hook-succeeded
-
--- HelmRelease: storage/openebs ClusterRole: storage/openebs-pre-upgrade-hook

+++ HelmRelease: storage/openebs ClusterRole: storage/openebs-pre-upgrade-hook

@@ -1,28 +0,0 @@

----
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  name: openebs-pre-upgrade-hook
-  labels:
-    app.kubernetes.io/managed-by: Helm
-    app.kubernetes.io/instance: openebs
-  annotations:
-    helm.sh/hook: pre-upgrade
-    helm.sh/hook-weight: '-2'
-    helm.sh/hook-delete-policy: hook-succeeded
-rules:
-- apiGroups:
-  - apiextensions.k8s.io
-  resources:
-  - customresourcedefinitions
-  verbs:
-  - get
-  - patch
-- apiGroups:
-  - apps
-  resources:
-  - deployments
-  verbs:
-  - delete
-  - list
-
--- HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-pre-upgrade-hook

+++ HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-pre-upgrade-hook

@@ -1,21 +0,0 @@

----
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  name: openebs-pre-upgrade-hook
-  labels:
-    app.kubernetes.io/managed-by: Helm
-    app.kubernetes.io/instance: openebs
-  annotations:
-    helm.sh/hook: pre-upgrade
-    helm.sh/hook-weight: '-1'
-    helm.sh/hook-delete-policy: hook-succeeded
-subjects:
-- kind: ServiceAccount
-  name: openebs-pre-upgrade-hook
-  namespace: storage
-roleRef:
-  kind: ClusterRole
-  name: openebs-pre-upgrade-hook
-  apiGroup: rbac.authorization.k8s.io
-
--- HelmRelease: storage/openebs Job: storage/openebs-pre-upgrade-hook

+++ HelmRelease: storage/openebs Job: storage/openebs-pre-upgrade-hook

@@ -1,35 +0,0 @@

----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: openebs-pre-upgrade-hook
-  labels:
-    app.kubernetes.io/managed-by: Helm
-    app.kubernetes.io/instance: openebs
-  annotations:
-    helm.sh/hook: pre-upgrade
-    helm.sh/hook-weight: '0'
-    helm.sh/hook-delete-policy: hook-succeeded
-spec:
-  template:
-    metadata:
-      name: openebs-pre-upgrade-hook
-      labels:
-        app.kubernetes.io/managed-by: Helm
-        app.kubernetes.io/instance: openebs
-    spec:
-      serviceAccountName: openebs-pre-upgrade-hook
-      restartPolicy: Never
-      containers:
-      - name: pre-upgrade-job
-        image: docker.io/bitnami/kubectl:1.25.15
-        imagePullPolicy: IfNotPresent
-        command:
-        - /bin/sh
-        - -c
-        args:
-        - (kubectl annotate --overwrite crd volumesnapshots.snapshot.storage.k8s.io
-          volumesnapshotclasses.snapshot.storage.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io
-          helm.sh/resource-policy=keep || true) && (kubectl -n storage delete deploy
-          -l openebs.io/component-name=openebs-localpv-provisioner --ignore-not-found)
-
--- HelmRelease: storage/openebs ServiceAccount: storage/openebs-alloy

+++ HelmRelease: storage/openebs ServiceAccount: storage/openebs-alloy

@@ -0,0 +1,14 @@

+---
+apiVersion: v1
+kind: ServiceAccount
+automountServiceAccountToken: true
+metadata:
+  name: openebs-alloy
+  namespace: storage
+  labels:
+    app.kubernetes.io/name: alloy
+    app.kubernetes.io/instance: openebs
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/part-of: alloy
+    app.kubernetes.io/component: rbac
+
--- HelmRelease: storage/openebs ServiceAccount: storage/minio-sa

+++ HelmRelease: storage/openebs ServiceAccount: storage/minio-sa

@@ -0,0 +1,6 @@

+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: minio-sa
+
--- HelmRelease: storage/openebs ServiceAccount: storage/loki

+++ HelmRelease: storage/openebs ServiceAccount: storage/loki

@@ -0,0 +1,11 @@

+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: loki
+  namespace: storage
+  labels:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+automountServiceAccountToken: true
+
--- HelmRelease: storage/openebs ConfigMap: storage/openebs-alloy

+++ HelmRelease: storage/openebs ConfigMap: storage/openebs-alloy

@@ -0,0 +1,109 @@

+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: openebs-alloy
+  namespace: storage
+  labels:
+    app.kubernetes.io/name: alloy
+    app.kubernetes.io/instance: openebs
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/part-of: alloy
+    app.kubernetes.io/component: config
+data:
+  config.alloy: |-
+    livedebugging {
+      enabled = false
+    }
+
+    discovery.kubernetes "openebs_pods_name" {
+      role = "pod"
+    }
+
+    discovery.relabel "openebs_pods_name" {
+      targets = discovery.kubernetes.openebs_pods_name.targets
+
+      rule {
+        source_labels = [
+          "__meta_kubernetes_pod_label_openebs_io_logging",
+        ]
+        separator     = ";"
+        regex         = "^true$"
+        action        = "keep"
+      }
+
+      rule {
+        regex  = "__meta_kubernetes_pod_label_(.+)"
+        action = "labelmap"
+      }
+
+      rule {
+        regex  = "__meta_kubernetes_pod_label_(.+)"
+        action = "labelmap"
+      }
+
+      rule {
+        source_labels = ["__meta_kubernetes_namespace"]
+        separator     = "/"
+        target_label  = "job"
+      }
+
+      rule {
+        source_labels = ["__meta_kubernetes_pod_name"]
+        target_label  = "pod"
+      }
+
+      rule {
+        source_labels = ["__meta_kubernetes_pod_container_name"]
+        target_label  = "container"
+      }
+
+      rule {
+        source_labels = ["__meta_kubernetes_pod_node_name"]
+        target_label  = "hostname"
+      }
+
+      rule {
+        source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
+        separator     = "/"
+        target_label  = "__path__"
+        replacement   = "/var/log/pods/*$1/*.log"
+      }
+    }
+
+    local.file_match "openebs_pod_files" {
+      path_targets = discovery.relabel.openebs_pods_name.output
+    }
+
+    loki.source.file "openebs_pod_logs" {
+      targets    = local.file_match.openebs_pod_files.targets
+      forward_to = [loki.process.openebs_process_logs.receiver]
+    }
+
+    loki.process "openebs_process_logs" {
+      forward_to = [loki.write.default.receiver]
+
+      stage.docker { }
+
+      stage.replace {
+        expression = "(\\n)"
+        replace = ""
+      }
+
+      stage.multiline {
+        firstline = "^  \\x1b\\[2m(\\d{4})-(\\d{2})-(\\d{2})T(\\d{2}):(\\d{2}):(\\d{2}).(\\d{6})Z"
+      }
+
+      stage.multiline {
+        firstline = "^  (\\d{4})-(\\d{2})-(\\d{2})T(\\d{2}):(\\d{2}):(\\d{2}).(\\d{6})Z"
+      }
+    }
+
+    loki.write "default" {
+        endpoint {
+        url       = "http://openebs-loki:3100/loki/api/v1/push"
+        tenant_id = "openebs"
+      }
+      external_labels = {}
+    }
+
--- HelmRelease: storage/openebs ConfigMap: storage/openebs-minio

+++ HelmRelease: storage/openebs ConfigMap: storage/openebs-minio

@@ -0,0 +1,326 @@

+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: openebs-minio
+  labels:
+    app: minio
+    release: openebs
+    heritage: Helm
+data:
+  initialize: "#!/bin/sh\nset -e # Have script exit in the event of a failed command.\n\
+    MC_CONFIG_DIR=\"/etc/minio/mc/\"\nMC=\"/usr/bin/mc --insecure --config-dir ${MC_CONFIG_DIR}\"\
+    \n\n# connectToMinio\n# Use a check-sleep-check loop to wait for MinIO service\
+    \ to be available\nconnectToMinio() {\n\tSCHEME=$1\n\tATTEMPTS=0\n\tLIMIT=29 #\
+    \ Allow 30 attempts\n\tset -e   # fail if we can't read the keys.\n\tACCESS=$(cat\
+    \ /config/rootUser)\n\tSECRET=$(cat /config/rootPassword)\n\tset +e # The connections\
+    \ to minio are allowed to fail.\n\techo \"Connecting to MinIO server: $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT\"\
+    \n\tMC_COMMAND=\"${MC} alias set myminio $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT\
+    \ $ACCESS $SECRET\"\n\t$MC_COMMAND\n\tSTATUS=$?\n\tuntil [ $STATUS = 0 ]; do\n\
+    \t\tATTEMPTS=$(expr $ATTEMPTS + 1)\n\t\techo \\\"Failed attempts: $ATTEMPTS\\\"\
+    \n\t\tif [ $ATTEMPTS -gt $LIMIT ]; then\n\t\t\texit 1\n\t\tfi\n\t\tsleep 2 # 1\
+    \ second intervals between attempts\n\t\t$MC_COMMAND\n\t\tSTATUS=$?\n\tdone\n\t\
+    set -e # reset `e` as active\n\treturn 0\n}\n\n# checkBucketExists ($bucket)\n\
+    # Check if the bucket exists, by using the exit code of `mc ls`\ncheckBucketExists()\
+    \ {\n\tBUCKET=$1\n\tCMD=$(${MC} stat myminio/$BUCKET >/dev/null 2>&1)\n\treturn\
+    \ $?\n}\n\n# createBucket ($bucket, $policy, $purge)\n# Ensure bucket exists,\
+    \ purging if asked to\ncreateBucket() {\n\tBUCKET=$1\n\tPOLICY=$2\n\tPURGE=$3\n\
+    \tVERSIONING=$4\n\tOBJECTLOCKING=$5\n\n\t# Purge the bucket, if set & exists\n\
+    \t# Since PURGE is user input, check explicitly for `true`\n\tif [ $PURGE = true\
+    \ ]; then\n\t\tif checkBucketExists $BUCKET; then\n\t\t\techo \"Purging bucket\
+    \ '$BUCKET'.\"\n\t\t\tset +e # don't exit if this fails\n\t\t\t${MC} rm -r --force\
+    \ myminio/$BUCKET\n\t\t\tset -e # reset `e` as active\n\t\telse\n\t\t\techo \"\
+    Bucket '$BUCKET' does not exist, skipping purge.\"\n\t\tfi\n\tfi\n\n\t# Create\
+    \ the bucket if it does not exist and set objectlocking if enabled (NOTE: versioning\
+    \ will be not changed if OBJECTLOCKING is set because it enables versioning to\
+    \ the Buckets created)\n\tif ! checkBucketExists $BUCKET; then\n\t\tif [ ! -z\
+    \ $OBJECTLOCKING ]; then\n\t\t\tif [ $OBJECTLOCKING = true ]; then\n\t\t\t\techo\
+    \ \"Creating bucket with OBJECTLOCKING '$BUCKET'\"\n\t\t\t\t${MC} mb --with-lock\
+    \ myminio/$BUCKET\n\t\t\telif [ $OBJECTLOCKING = false ]; then\n\t\t\t\techo \"\
+    Creating bucket '$BUCKET'\"\n\t\t\t\t${MC} mb myminio/$BUCKET\n\t\t\tfi\n\t\t\
+    elif [ -z $OBJECTLOCKING ]; then\n\t\t\techo \"Creating bucket '$BUCKET'\"\n\t\
+    \t\t${MC} mb myminio/$BUCKET\n\t\telse\n\t\t\techo \"Bucket '$BUCKET' already\
+    \ exists.\"\n\t\tfi\n\tfi\n\n\t# set versioning for bucket if objectlocking is\
+    \ disabled or not set\n\tif [ $OBJECTLOCKING = false ]; then\n\t\tif [ ! -z $VERSIONING\
+    \ ]; then\n\t\t\tif [ $VERSIONING = true ]; then\n\t\t\t\techo \"Enabling versioning\
+    \ for '$BUCKET'\"\n\t\t\t\t${MC} version enable myminio/$BUCKET\n\t\t\telif [\
+    \ $VERSIONING = false ]; then\n\t\t\t\techo \"Suspending versioning for '$BUCKET'\"\
+    \n\t\t\t\t${MC} version suspend myminio/$BUCKET\n\t\t\tfi\n\t\tfi\n\telse\n\t\t\
+    echo \"Bucket '$BUCKET' versioning unchanged.\"\n\tfi\n\n\t# At this point, the\
+    \ bucket should exist, skip checking for existence\n\t# Set policy on the bucket\n\
+    \techo \"Setting policy of bucket '$BUCKET' to '$POLICY'.\"\n\t${MC} anonymous\
+    \ set $POLICY myminio/$BUCKET\n}\n\n# Try connecting to MinIO instance\nscheme=http\n\
+    connectToMinio $scheme\n\n\n\n# Create the buckets\ncreateBucket chunks \"none\"\
+    \ false false false\ncreateBucket ruler \"none\" false false false\ncreateBucket\
+    \ admin \"none\" false false false"
+  add-user: |-
+    #!/bin/sh
+    set -e ; # Have script exit in the event of a failed command.
+    MC_CONFIG_DIR="/etc/minio/mc/"
+    MC="/usr/bin/mc --insecure --config-dir ${MC_CONFIG_DIR}"
+
+    # AccessKey and secretkey credentials file are added to prevent shell execution errors caused by special characters.
+    # Special characters for example : ',",<,>,{,}
+    MINIO_ACCESSKEY_SECRETKEY_TMP="/tmp/accessKey_and_secretKey_tmp"
+
+    # connectToMinio
+    # Use a check-sleep-check loop to wait for MinIO service to be available
+    connectToMinio() {
+      SCHEME=$1
+      ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
+      set -e ; # fail if we can't read the keys.
+      ACCESS=$(cat /config/rootUser) ; SECRET=$(cat /config/rootPassword) ;
+      set +e ; # The connections to minio are allowed to fail.
+      echo "Connecting to MinIO server: $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT" ;
+      MC_COMMAND="${MC} alias set myminio $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
+      $MC_COMMAND ;
+      STATUS=$? ;
+      until [ $STATUS = 0 ]
+      do
+        ATTEMPTS=`expr $ATTEMPTS + 1` ;
+        echo \"Failed attempts: $ATTEMPTS\" ;
+        if [ $ATTEMPTS -gt $LIMIT ]; then
+          exit 1 ;
+        fi ;
+        sleep 2 ; # 1 second intervals between attempts
+        $MC_COMMAND ;
+        STATUS=$? ;
+      done ;
+      set -e ; # reset `e` as active
+      return 0
+    }
+
+    # checkUserExists ()
+    # Check if the user exists, by using the exit code of `mc admin user info`
+    checkUserExists() {
+      CMD=$(${MC} admin user info myminio $(head -1 $MINIO_ACCESSKEY_SECRETKEY_TMP) > /dev/null 2>&1)
+      return $?
+    }
+
+    # createUser ($policy)
+    createUser() {
+      POLICY=$1
+      #check accessKey_and_secretKey_tmp file
+      if [[ ! -f $MINIO_ACCESSKEY_SECRETKEY_TMP ]];then
+        echo "credentials file does not exist"
+        return 1
+      fi
+      if [[ $(cat $MINIO_ACCESSKEY_SECRETKEY_TMP|wc -l) -ne 2 ]];then
+        echo "credentials file is invalid"
+        rm -f $MINIO_ACCESSKEY_SECRETKEY_TMP
+        return 1
+      fi
+      USER=$(head -1 $MINIO_ACCESSKEY_SECRETKEY_TMP)
+      # Create the user if it does not exist
+      if ! checkUserExists ; then
+        echo "Creating user '$USER'"
+        cat $MINIO_ACCESSKEY_SECRETKEY_TMP | ${MC} admin user add myminio
+      else
+        echo "User '$USER' already exists."
+      fi
+      #clean up credentials files.
+      rm -f $MINIO_ACCESSKEY_SECRETKEY_TMP
+
+      # set policy for user
+      if [ ! -z $POLICY -a $POLICY != " " ] ; then
+          echo "Adding policy '$POLICY' for '$USER'"
+          set +e ; # policy already attach errors out, allow it.
+          ${MC} admin policy attach myminio $POLICY --user=$USER
+          set -e
+      else
+          echo "User '$USER' has no policy attached."
+      fi
+    }
+
+    # Try connecting to MinIO instance
+    scheme=http
+    connectToMinio $scheme
+
+
+
+    # Create the users
+    echo logs-user > $MINIO_ACCESSKEY_SECRETKEY_TMP
+    echo supersecretpassword >> $MINIO_ACCESSKEY_SECRETKEY_TMP
+    createUser readwrite
+  add-policy: |-
+    #!/bin/sh
+    set -e ; # Have script exit in the event of a failed command.
+    MC_CONFIG_DIR="/etc/minio/mc/"
+    MC="/usr/bin/mc --insecure --config-dir ${MC_CONFIG_DIR}"
+
+    # connectToMinio
+    # Use a check-sleep-check loop to wait for MinIO service to be available
+    connectToMinio() {
+      SCHEME=$1
+      ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
+      set -e ; # fail if we can't read the keys.
+      ACCESS=$(cat /config/rootUser) ; SECRET=$(cat /config/rootPassword) ;
+      set +e ; # The connections to minio are allowed to fail.
+      echo "Connecting to MinIO server: $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT" ;
+      MC_COMMAND="${MC} alias set myminio $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
+      $MC_COMMAND ;
+      STATUS=$? ;
+      until [ $STATUS = 0 ]
+      do
+        ATTEMPTS=`expr $ATTEMPTS + 1` ;
+        echo \"Failed attempts: $ATTEMPTS\" ;
+        if [ $ATTEMPTS -gt $LIMIT ]; then
+          exit 1 ;
+        fi ;
+        sleep 2 ; # 1 second intervals between attempts
+        $MC_COMMAND ;
+        STATUS=$? ;
+      done ;
+      set -e ; # reset `e` as active
+      return 0
+    }
+
+    # checkPolicyExists ($policy)
+    # Check if the policy exists, by using the exit code of `mc admin policy info`
+    checkPolicyExists() {
+      POLICY=$1
+      CMD=$(${MC} admin policy info myminio $POLICY > /dev/null 2>&1)
+      return $?
+    }
+
+    # createPolicy($name, $filename)
+    createPolicy () {
+      NAME=$1
+      FILENAME=$2
+
+      # Create the name if it does not exist
+      echo "Checking policy: $NAME (in /config/$FILENAME.json)"
+      if ! checkPolicyExists $NAME ; then
+        echo "Creating policy '$NAME'"
+      else
+        echo "Policy '$NAME' already exists."
+      fi
+      ${MC} admin policy create myminio $NAME /config/$FILENAME.json
+
+    }
+
+    # Try connecting to MinIO instance
+    scheme=http
+    connectToMinio $scheme
+  add-svcacct: |-
+    #!/bin/sh
+    set -e ; # Have script exit in the event of a failed command.
+    MC_CONFIG_DIR="/etc/minio/mc/"
+    MC="/usr/bin/mc --insecure --config-dir ${MC_CONFIG_DIR}"
+
+    # AccessKey and secretkey credentials file are added to prevent shell execution errors caused by special characters.
+    # Special characters for example : ',",<,>,{,}
+    MINIO_ACCESSKEY_SECRETKEY_TMP="/tmp/accessKey_and_secretKey_svcacct_tmp"
+
+    # connectToMinio
+    # Use a check-sleep-check loop to wait for MinIO service to be available
+    connectToMinio() {
+      SCHEME=$1
+      ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
+      set -e ; # fail if we can't read the keys.
+      ACCESS=$(cat /config/rootUser) ; SECRET=$(cat /config/rootPassword) ;
+      set +e ; # The connections to minio are allowed to fail.
[Diff truncated by flux-local]
--- HelmRelease: storage/openebs ConfigMap: storage/loki

+++ HelmRelease: storage/openebs ConfigMap: storage/loki

@@ -0,0 +1,100 @@

+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: loki
+  namespace: storage
+  labels:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+data:
+  config.yaml: |2
+
+    auth_enabled: true
+    bloom_build:
+      builder:
+        planner_address: ""
+      enabled: false
+    bloom_gateway:
+      client:
+        addresses: ""
+      enabled: false
+    common:
+      compactor_address: 'http://openebs-loki:3100'
+      path_prefix: /var/loki
+      replication_factor: 3
+      storage:
+        s3:
+          access_key_id: root-user
+          bucketnames: chunks
+          endpoint: openebs-minio.storage.svc:9000
+          insecure: true
+          s3forcepathstyle: true
+          secret_access_key: supersecretpassword
+    frontend:
+      scheduler_address: ""
+      tail_proxy_url: ""
+    frontend_worker:
+      scheduler_address: ""
+    index_gateway:
+      mode: simple
+    ingester:
+      chunk_encoding: snappy
+    limits_config:
+      ingestion_burst_size_mb: 1000
+      ingestion_rate_mb: 10000
+      max_cache_freshness_per_query: 10m
+      max_label_names_per_series: 20
+      query_timeout: 300s
+      reject_old_samples: true
+      reject_old_samples_max_age: 168h
+      split_queries_by_interval: 15m
+      volume_enabled: true
+    memberlist:
+      join_members:
+      - loki-memberlist
+    pattern_ingester:
+      enabled: false
+    querier:
+      max_concurrent: 1
+    query_range:
+      align_queries_with_step: true
+    ruler:
+      storage:
+        s3:
+          bucketnames: ruler
+        type: s3
+      wal:
+        dir: /var/loki/ruler-wal
+    runtime_config:
+      file: /etc/loki/runtime-config/runtime-config.yaml
+    schema_config:
+      configs:
+      - from: "2024-04-01"
+        index:
+          period: 24h
+          prefix: loki_index_
+        object_store: s3
+        schema: v13
+        store: tsdb
+    server:
+      grpc_listen_port: 9095
+      http_listen_port: 3100
+      http_server_read_timeout: 600s
+      http_server_write_timeout: 600s
+    storage_config:
+      bloom_shipper:
+        working_directory: /var/loki/data/bloomshipper
+      boltdb_shipper:
+        index_gateway_client:
+          server_address: ""
+      hedging:
+        at: 250ms
+        max_per_second: 20
+        up_to: 3
+      tsdb_shipper:
+        index_gateway_client:
+          server_address: ""
+    tracing:
+      enabled: true
+
--- HelmRelease: storage/openebs ConfigMap: storage/loki-runtime

+++ HelmRelease: storage/openebs ConfigMap: storage/loki-runtime

@@ -0,0 +1,13 @@

+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: loki-runtime
+  namespace: storage
+  labels:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+data:
+  runtime-config.yaml: |
+    {}
+
--- HelmRelease: storage/openebs StorageClass: storage/openebs-loki-localpv

+++ HelmRelease: storage/openebs StorageClass: storage/openebs-loki-localpv

@@ -0,0 +1,16 @@

+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  annotations:
+    cas.openebs.io/config: |
+      - name: StorageType
+        value: "hostpath"
+      - name: BasePath
+        value: "/var/local/openebs/localpv-hostpath/loki"
+    openebs.io/cas-type: local
+  name: openebs-loki-localpv
+provisioner: openebs.io/local
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+
--- HelmRelease: storage/openebs StorageClass: storage/openebs-minio-localpv

+++ HelmRelease: storage/openebs StorageClass: storage/openebs-minio-localpv

@@ -0,0 +1,16 @@

+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  annotations:
+    cas.openebs.io/config: |
+      - name: StorageType
+        value: "hostpath"
+      - name: BasePath
+        value: "/var/local/openebs/localpv-hostpath/minio"
+    openebs.io/cas-type: local
+  name: openebs-minio-localpv
+provisioner: openebs.io/local
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+
--- HelmRelease: storage/openebs ClusterRole: storage/openebs-alloy

+++ HelmRelease: storage/openebs ClusterRole: storage/openebs-alloy

@@ -0,0 +1,104 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: openebs-alloy
+  labels:
+    app.kubernetes.io/name: alloy
+    app.kubernetes.io/instance: openebs
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/part-of: alloy
+    app.kubernetes.io/component: rbac
+rules:
+- apiGroups:
+  - ''
+  - discovery.k8s.io
+  - networking.k8s.io
+  resources:
+  - endpoints
+  - endpointslices
+  - ingresses
+  - nodes
+  - nodes/proxy
+  - nodes/metrics
+  - pods
+  - services
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - ''
+  resources:
+  - pods
+  - pods/log
+  - namespaces
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - monitoring.grafana.com
+  resources:
+  - podlogs
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - monitoring.coreos.com
+  resources:
+  - prometheusrules
+  verbs:
+  - get
+  - list
+  - watch
+- nonResourceURLs:
+  - /metrics
+  verbs:
+  - get
+- apiGroups:
+  - monitoring.coreos.com
+  resources:
+  - podmonitors
+  - servicemonitors
+  - probes
+  - scrapeconfigs
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - ''
+  resources:
+  - events
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - ''
+  resources:
+  - configmaps
+  - secrets
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - apps
+  resources:
+  - replicasets
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - extensions
+  resources:
+  - replicasets
+  verbs:
+  - get
+  - list
+  - watch
+
--- HelmRelease: storage/openebs ClusterRole: storage/openebs-loki-clusterrole

+++ HelmRelease: storage/openebs ClusterRole: storage/openebs-loki-clusterrole

@@ -0,0 +1,19 @@

+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  labels:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+  name: openebs-loki-clusterrole
+rules:
+- apiGroups:
+  - ''
+  resources:
+  - configmaps
+  - secrets
+  verbs:
+  - get
+  - watch
+  - list
+
--- HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-alloy

+++ HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-alloy

@@ -0,0 +1,20 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: openebs-alloy
+  labels:
+    app.kubernetes.io/name: alloy
+    app.kubernetes.io/instance: openebs
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/part-of: alloy
+    app.kubernetes.io/component: rbac
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: openebs-alloy
+subjects:
+- kind: ServiceAccount
+  name: openebs-alloy
+  namespace: storage
+
--- HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-loki-clusterrolebinding

+++ HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-loki-clusterrolebinding

@@ -0,0 +1,17 @@

+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: openebs-loki-clusterrolebinding
+  labels:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+subjects:
+- kind: ServiceAccount
+  name: loki
+  namespace: storage
+roleRef:
+  kind: ClusterRole
+  name: openebs-loki-clusterrole
+  apiGroup: rbac.authorization.k8s.io
+
--- HelmRelease: storage/openebs Service: storage/openebs-alloy

+++ HelmRelease: storage/openebs Service: storage/openebs-alloy

@@ -0,0 +1,24 @@

+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: openebs-alloy
+  namespace: storage
+  labels:
+    app.kubernetes.io/name: alloy
+    app.kubernetes.io/instance: openebs
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/part-of: alloy
+    app.kubernetes.io/component: networking
+spec:
+  type: ClusterIP
+  selector:
+    app.kubernetes.io/name: alloy
+    app.kubernetes.io/instance: openebs
+  internalTrafficPolicy: Cluster
+  ports:
+  - name: http-metrics
+    port: 12345
+    targetPort: 12345
+    protocol: TCP
+
--- HelmRelease: storage/openebs Service: storage/openebs-minio-console

+++ HelmRelease: storage/openebs Service: storage/openebs-minio-console

@@ -0,0 +1,20 @@

+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: openebs-minio-console
+  labels:
+    app: minio
+    release: openebs
+    heritage: Helm
+spec:
+  type: ClusterIP
+  ports:
+  - name: http
+    port: 9001
+    protocol: TCP
+    targetPort: 9001
+  selector:
+    app: minio
+    release: openebs
+
--- HelmRelease: storage/openebs Service: storage/openebs-minio

+++ HelmRelease: storage/openebs Service: storage/openebs-minio

@@ -0,0 +1,21 @@

+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: openebs-minio
+  labels:
+    app: minio
+    release: openebs
+    heritage: Helm
+    monitoring: 'true'
+spec:
+  type: ClusterIP
+  ports:
+  - name: http
+    port: 9000
+    protocol: TCP
+    targetPort: 9000
+  selector:
+    app: minio
+    release: openebs
+
--- HelmRelease: storage/openebs Service: storage/openebs-minio-svc

+++ HelmRelease: storage/openebs Service: storage/openebs-minio-svc

@@ -0,0 +1,21 @@

+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: openebs-minio-svc
+  labels:
+    app: minio
+    release: openebs
+    heritage: Helm
+spec:
+  publishNotReadyAddresses: true
+  clusterIP: None
+  ports:
+  - name: http
+    port: 9000
+    protocol: TCP
+    targetPort: 9000
+  selector:
+    app: minio
+    release: openebs
+
--- HelmRelease: storage/openebs Service: storage/loki-memberlist

+++ HelmRelease: storage/openebs Service: storage/loki-memberlist

@@ -0,0 +1,22 @@

+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: loki-memberlist
+  namespace: storage
+  labels:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+spec:
+  type: ClusterIP
+  clusterIP: None
+  ports:
+  - name: tcp
+    port: 7946
+    targetPort: http-memberlist
+    protocol: TCP
+  selector:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+    app.kubernetes.io/part-of: memberlist
+
--- HelmRelease: storage/openebs Service: storage/loki-headless

+++ HelmRelease: storage/openebs Service: storage/loki-headless

@@ -0,0 +1,23 @@

+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: loki-headless
+  namespace: storage
+  labels:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+    app: loki
+    variant: headless
+    prometheus.io/service-monitor: 'false'
+spec:
+  clusterIP: None
+  ports:
+  - name: http-metrics
+    port: 3100
+    targetPort: http-metrics
+    protocol: TCP
+  selector:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+
--- HelmRelease: storage/openebs Service: storage/openebs-loki

+++ HelmRelease: storage/openebs Service: storage/openebs-loki

@@ -0,0 +1,26 @@

+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: openebs-loki
+  namespace: storage
+  labels:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+    app: loki
+spec:
+  type: ClusterIP
+  ports:
+  - name: http-metrics
+    port: 3100
+    targetPort: http-metrics
+    protocol: TCP
+  - name: grpc
+    port: 9095
+    targetPort: grpc
+    protocol: TCP
+  selector:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+    app.kubernetes.io/component: single-binary
+
--- HelmRelease: storage/openebs DaemonSet: storage/openebs-alloy

+++ HelmRelease: storage/openebs DaemonSet: storage/openebs-alloy

@@ -0,0 +1,81 @@

+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: openebs-alloy
+  namespace: storage
+  labels:
+    app.kubernetes.io/name: alloy
+    app.kubernetes.io/instance: openebs
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/part-of: alloy
+spec:
+  minReadySeconds: 10
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: alloy
+      app.kubernetes.io/instance: openebs
+  template:
+    metadata:
+      annotations:
+        kubectl.kubernetes.io/default-container: alloy
+      labels:
+        app.kubernetes.io/name: alloy
+        app.kubernetes.io/instance: openebs
+    spec:
+      serviceAccountName: openebs-alloy
+      containers:
+      - name: alloy
+        image: docker.io/grafana/alloy:v1.8.1
+        imagePullPolicy: IfNotPresent
+        args:
+        - run
+        - /etc/alloy/config.alloy
+        - --storage.path=/tmp/alloy
+        - --server.http.listen-addr=0.0.0.0:12345
+        - --server.http.ui-path-prefix=/
+        - --stability.level=generally-available
+        env:
+        - name: ALLOY_DEPLOY_MODE
+          value: helm
+        - name: HOSTNAME
+          valueFrom:
+            fieldRef:
+              fieldPath: spec.nodeName
+        ports:
+        - containerPort: 12345
+          name: http-metrics
+        readinessProbe:
+          httpGet:
+            path: /-/ready
+            port: 12345
+            scheme: HTTP
+          initialDelaySeconds: 10
+          timeoutSeconds: 1
+        volumeMounts:
+        - name: config
+          mountPath: /etc/alloy
+        - name: varlog
+          mountPath: /var/log
+          readOnly: true
+      - name: config-reloader
+        image: quay.io/prometheus-operator/prometheus-config-reloader:v0.81.0
+        args:
+        - --watched-dir=/etc/alloy
+        - --reload-url=http://localhost:12345/-/reload
+        volumeMounts:
+        - name: config
+          mountPath: /etc/alloy
+        resources:
+          requests:
+            cpu: 10m
+            memory: 50Mi
+      dnsPolicy: ClusterFirst
+      volumes:
+      - name: config
+        configMap:
+          name: openebs-alloy
+      - name: varlog
+        hostPath:
+          path: /var/log
+
--- HelmRelease: storage/openebs StatefulSet: storage/openebs-minio

+++ HelmRelease: storage/openebs StatefulSet: storage/openebs-minio

@@ -0,0 +1,87 @@

+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: openebs-minio
+  labels:
+    app: minio
+    release: openebs
+    heritage: Helm
+spec:
+  updateStrategy:
+    type: RollingUpdate
+  podManagementPolicy: Parallel
+  serviceName: openebs-minio-svc
+  replicas: 3
+  selector:
+    matchLabels:
+      app: minio
+      release: openebs
+  template:
+    metadata:
+      name: openebs-minio
+      labels:
+        app: minio
+        release: openebs
+      annotations:
+        checksum/secrets: b38f14fd791c3ad8559031ba36c0bfb017cad4ce8c06105b5bcb4b3751f0dfb3
+    spec:
+      securityContext:
+        fsGroup: 1000
+        fsGroupChangePolicy: OnRootMismatch
+        runAsGroup: 1000
+        runAsUser: 1000
+      serviceAccountName: minio-sa
+      containers:
+      - name: minio
+        image: quay.io/minio/minio:RELEASE.2024-12-18T13-15-44Z
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/sh
+        - -ce
+        - /usr/bin/docker-entrypoint.sh minio server http://openebs-minio-{0...2}.openebs-minio-svc.storage.svc/export
+          -S /etc/minio/certs/ --address :9000 --console-address :9001
+        volumeMounts:
+        - name: export
+          mountPath: /export
+        ports:
+        - name: http
+          containerPort: 9000
+        - name: http-console
+          containerPort: 9001
+        env:
+        - name: MINIO_ROOT_USER
+          valueFrom:
+            secretKeyRef:
+              name: openebs-minio
+              key: rootUser
+        - name: MINIO_ROOT_PASSWORD
+          valueFrom:
+            secretKeyRef:
+              name: openebs-minio
+              key: rootPassword
+        - name: MINIO_PROMETHEUS_AUTH_TYPE
+          value: public
+        resources:
+          requests:
+            cpu: 100m
+            memory: 128Mi
+        securityContext:
+          readOnlyRootFilesystem: false
+      volumes:
+      - name: minio-user
+        secret:
+          secretName: openebs-minio
+  volumeClaimTemplates:
+  - apiVersion: v1
+    kind: PersistentVolumeClaim
+    metadata:
+      name: export
+    spec:
+      accessModes:
+      - ReadWriteOnce
+      storageClassName: openebs-minio-localpv
+      resources:
+        requests:
+          storage: 2Gi
+
--- HelmRelease: storage/openebs StatefulSet: storage/openebs-loki

+++ HelmRelease: storage/openebs StatefulSet: storage/openebs-loki

@@ -0,0 +1,147 @@

+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: openebs-loki
+  namespace: storage
+  labels:
+    app.kubernetes.io/name: loki
+    app.kubernetes.io/instance: openebs
+    app.kubernetes.io/component: single-binary
+    app.kubernetes.io/part-of: memberlist
+spec:
+  replicas: 3
+  podManagementPolicy: Parallel
+  updateStrategy:
+    rollingUpdate:
+      partition: 0
+  serviceName: openebs-loki-headless
+  revisionHistoryLimit: 10
+  persistentVolumeClaimRetentionPolicy:
+    whenDeleted: Delete
+    whenScaled: Delete
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: loki
+      app.kubernetes.io/instance: openebs
+      app.kubernetes.io/component: single-binary
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/name: loki
+        app.kubernetes.io/instance: openebs
+        app.kubernetes.io/component: single-binary
+        app: loki
+        app.kubernetes.io/part-of: memberlist
+    spec:
+      serviceAccountName: loki
+      automountServiceAccountToken: true
+      enableServiceLinks: true
+      securityContext:
+        fsGroup: 10001
+        runAsGroup: 10001
+        runAsNonRoot: true
+        runAsUser: 10001
+      terminationGracePeriodSeconds: 30
+      containers:
+      - name: loki-sc-rules
+        image: docker.io/kiwigrid/k8s-sidecar:1.30.2
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: METHOD
+          value: WATCH
+        - name: LABEL
+          value: loki_rule
+        - name: FOLDER
+          value: /rules
+        - name: RESOURCE
+          value: both
+        - name: WATCH_SERVER_TIMEOUT
+          value: '60'
+        - name: WATCH_CLIENT_TIMEOUT
+          value: '60'
+        - name: LOG_LEVEL
+          value: INFO
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop:
+            - ALL
+          readOnlyRootFilesystem: true
+        volumeMounts:
+        - name: sc-rules-volume
+          mountPath: /rules
+      - name: loki
+        image: docker.io/grafana/loki:3.4.2
+        imagePullPolicy: IfNotPresent
+        args:
+        - -config.file=/etc/loki/config/config.yaml
+        - -target=all
+        ports:
+        - name: http-metrics
+          containerPort: 3100
+          protocol: TCP
+        - name: grpc
+          containerPort: 9095
+          protocol: TCP
+        - name: http-memberlist
+          containerPort: 7946
+          protocol: TCP
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop:
+            - ALL
+          readOnlyRootFilesystem: true
+        readinessProbe:
+          httpGet:
+            path: /ready
+            port: http-metrics
+          initialDelaySeconds: 30
+          timeoutSeconds: 1
+        volumeMounts:
+        - name: tmp
+          mountPath: /tmp
+        - name: config
+          mountPath: /etc/loki/config
+        - name: runtime-config
+          mountPath: /etc/loki/runtime-config
+        - name: storage
+          mountPath: /var/loki
+        - name: sc-rules-volume
+          mountPath: /rules
+        resources: {}
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchLabels:
+                app.kubernetes.io/component: single-binary
+            topologyKey: kubernetes.io/hostname
+      volumes:
+      - name: tmp
+        emptyDir: {}
+      - name: config
+        configMap:
+          name: loki
+          items:
+          - key: config.yaml
+            path: config.yaml
+      - name: runtime-config
+        configMap:
+          name: loki-runtime
+      - name: sc-rules-volume
+        emptyDir: {}
+  volumeClaimTemplates:
+  - apiVersion: v1
+    kind: PersistentVolumeClaim
+    metadata:
+      name: storage
+    spec:
+      accessModes:
+      - ReadWriteOnce
+      storageClassName: openebs-loki-localpv
+      resources:
+        requests:
+          storage: 2Gi
+
--- HelmRelease: storage/openebs Job: storage/openebs-minio-post-job

+++ HelmRelease: storage/openebs Job: storage/openebs-minio-post-job

@@ -0,0 +1,77 @@

+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: openebs-minio-post-job
+  labels:
+    app: minio-post-job
+    release: openebs
+    heritage: Helm
+  annotations:
+    helm.sh/hook: post-install,post-upgrade
+    helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation
+spec:
+  template:
+    metadata:
+      labels:
+        app: minio-job
+        release: openebs
+    spec:
+      restartPolicy: OnFailure
+      volumes:
+      - name: etc-path
+        emptyDir: {}
+      - name: tmp
+        emptyDir: {}
+      - name: minio-configuration
+        projected:
+          sources:
+          - configMap:
+              name: openebs-minio
+          - secret:
+              name: openebs-minio
+      serviceAccountName: minio-sa
+      containers:
+      - name: minio-make-bucket
+        image: quay.io/minio/mc:RELEASE.2024-11-21T17-21-54Z
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/sh
+        - /config/initialize
+        env:
+        - name: MINIO_ENDPOINT
+          value: openebs-minio
+        - name: MINIO_PORT
+          value: '9000'
+        volumeMounts:
+        - name: etc-path
+          mountPath: /etc/minio/mc
+        - name: tmp
+          mountPath: /tmp
+        - name: minio-configuration
+          mountPath: /config
+        resources:
+          requests:
+            memory: 128Mi
+      - name: minio-make-user
+        image: quay.io/minio/mc:RELEASE.2024-11-21T17-21-54Z
+        imagePullPolicy: IfNotPresent
+        command:
+        - /bin/sh
+        - /config/add-user
+        env:
+        - name: MINIO_ENDPOINT
+          value: openebs-minio
+        - name: MINIO_PORT
+          value: '9000'
+        volumeMounts:
+        - name: etc-path
+          mountPath: /etc/minio/mc
+        - name: tmp
+          mountPath: /tmp
+        - name: minio-configuration
+          mountPath: /config
+        resources:
+          requests:
+            memory: 128Mi
+

@renovate renovate bot force-pushed the renovate/openebs-4.x branch from 0a61a66 to 3fdcb43 Compare June 20, 2025 13:08
@renovate renovate bot changed the title feat(helm): update chart openebs to 4.3.0 feat(helm): update chart openebs to 4.3.1 Jun 20, 2025
@renovate renovate bot changed the title feat(helm): update chart openebs to 4.3.1 feat(helm): update chart openebs to 4.3.2 Jun 23, 2025
@renovate renovate bot force-pushed the renovate/openebs-4.x branch from 3fdcb43 to 3646151 Compare June 23, 2025 16:49
@renovate renovate bot force-pushed the renovate/openebs-4.x branch from 3646151 to faee5ff Compare July 30, 2025 17:27
@renovate renovate bot changed the title feat(helm): update chart openebs to 4.3.2 feat(helm): update chart openebs to 4.3.3 Aug 29, 2025
@renovate renovate bot force-pushed the renovate/openebs-4.x branch from faee5ff to 1ff5fbf Compare August 29, 2025 03:04
@renovate renovate bot force-pushed the renovate/openebs-4.x branch from 1ff5fbf to dc5e519 Compare November 21, 2025 17:48
@renovate renovate bot changed the title feat(helm): update chart openebs to 4.3.3 feat(helm): update chart openebs to 4.4.0 Nov 21, 2025