feat(helm): update chart openebs to 4.4.0 #409
Open · renovate wants to merge 1 commit into main from renovate/openebs-4.x
Conversation
--- kubernetes/apps/storage/openebs/app Kustomization: flux-system/openebs HelmRelease: storage/openebs
+++ kubernetes/apps/storage/openebs/app Kustomization: flux-system/openebs HelmRelease: storage/openebs
@@ -13,13 +13,13 @@
spec:
chart: openebs
sourceRef:
kind: HelmRepository
name: openebs
namespace: flux-system
- version: 4.2.0
+ version: 4.4.0
install:
remediation:
retries: 3
interval: 30m
upgrade:
cleanupOnFail: true
--- HelmRelease: storage/openebs Deployment: storage/openebs-localpv-provisioner
+++ HelmRelease: storage/openebs Deployment: storage/openebs-localpv-provisioner
@@ -25,18 +25,19 @@
heritage: Helm
app: localpv-provisioner
release: openebs
component: localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
name: openebs-localpv-provisioner
+ openebs.io/logging: 'true'
spec:
serviceAccountName: openebs-localpv-provisioner
securityContext: {}
containers:
- name: openebs-localpv-provisioner
- image: quay.io/openebs/provisioner-localpv:4.2.0
+ image: quay.io/openebs/provisioner-localpv:4.4.0
imagePullPolicy: IfNotPresent
resources: null
env:
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
@@ -51,13 +52,13 @@
fieldPath: spec.serviceAccountName
- name: OPENEBS_IO_ENABLE_ANALYTICS
value: 'true'
- name: OPENEBS_IO_BASE_PATH
value: /var/openebs/local
- name: OPENEBS_IO_HELPER_IMAGE
- value: quay.io/openebs/linux-utils:4.1.0
+ value: quay.io/openebs/linux-utils:4.3.0
- name: OPENEBS_IO_HELPER_POD_HOST_NETWORK
value: 'false'
- name: OPENEBS_IO_INSTALLER_TYPE
value: localpv-charts-helm
- name: LEADER_ELECTION_ENABLED
value: 'true'
--- HelmRelease: storage/openebs ServiceAccount: storage/openebs-pre-upgrade-hook
+++ HelmRelease: storage/openebs ServiceAccount: storage/openebs-pre-upgrade-hook
@@ -1,14 +0,0 @@
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- name: openebs-pre-upgrade-hook
- namespace: storage
- labels:
- app.kubernetes.io/managed-by: Helm
- app.kubernetes.io/instance: openebs
- annotations:
- helm.sh/hook: pre-upgrade
- helm.sh/hook-weight: '-2'
- helm.sh/hook-delete-policy: hook-succeeded
-
--- HelmRelease: storage/openebs ClusterRole: storage/openebs-pre-upgrade-hook
+++ HelmRelease: storage/openebs ClusterRole: storage/openebs-pre-upgrade-hook
@@ -1,28 +0,0 @@
----
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: openebs-pre-upgrade-hook
- labels:
- app.kubernetes.io/managed-by: Helm
- app.kubernetes.io/instance: openebs
- annotations:
- helm.sh/hook: pre-upgrade
- helm.sh/hook-weight: '-2'
- helm.sh/hook-delete-policy: hook-succeeded
-rules:
-- apiGroups:
- - apiextensions.k8s.io
- resources:
- - customresourcedefinitions
- verbs:
- - get
- - patch
-- apiGroups:
- - apps
- resources:
- - deployments
- verbs:
- - delete
- - list
-
--- HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-pre-upgrade-hook
+++ HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-pre-upgrade-hook
@@ -1,21 +0,0 @@
----
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: openebs-pre-upgrade-hook
- labels:
- app.kubernetes.io/managed-by: Helm
- app.kubernetes.io/instance: openebs
- annotations:
- helm.sh/hook: pre-upgrade
- helm.sh/hook-weight: '-1'
- helm.sh/hook-delete-policy: hook-succeeded
-subjects:
-- kind: ServiceAccount
- name: openebs-pre-upgrade-hook
- namespace: storage
-roleRef:
- kind: ClusterRole
- name: openebs-pre-upgrade-hook
- apiGroup: rbac.authorization.k8s.io
-
--- HelmRelease: storage/openebs Job: storage/openebs-pre-upgrade-hook
+++ HelmRelease: storage/openebs Job: storage/openebs-pre-upgrade-hook
@@ -1,35 +0,0 @@
----
-apiVersion: batch/v1
-kind: Job
-metadata:
- name: openebs-pre-upgrade-hook
- labels:
- app.kubernetes.io/managed-by: Helm
- app.kubernetes.io/instance: openebs
- annotations:
- helm.sh/hook: pre-upgrade
- helm.sh/hook-weight: '0'
- helm.sh/hook-delete-policy: hook-succeeded
-spec:
- template:
- metadata:
- name: openebs-pre-upgrade-hook
- labels:
- app.kubernetes.io/managed-by: Helm
- app.kubernetes.io/instance: openebs
- spec:
- serviceAccountName: openebs-pre-upgrade-hook
- restartPolicy: Never
- containers:
- - name: pre-upgrade-job
- image: docker.io/bitnami/kubectl:1.25.15
- imagePullPolicy: IfNotPresent
- command:
- - /bin/sh
- - -c
- args:
- - (kubectl annotate --overwrite crd volumesnapshots.snapshot.storage.k8s.io
- volumesnapshotclasses.snapshot.storage.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io
- helm.sh/resource-policy=keep || true) && (kubectl -n storage delete deploy
- -l openebs.io/component-name=openebs-localpv-provisioner --ignore-not-found)
-
--- HelmRelease: storage/openebs ServiceAccount: storage/openebs-alloy
+++ HelmRelease: storage/openebs ServiceAccount: storage/openebs-alloy
@@ -0,0 +1,14 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+automountServiceAccountToken: true
+metadata:
+ name: openebs-alloy
+ namespace: storage
+ labels:
+ app.kubernetes.io/name: alloy
+ app.kubernetes.io/instance: openebs
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/part-of: alloy
+ app.kubernetes.io/component: rbac
+
--- HelmRelease: storage/openebs ServiceAccount: storage/minio-sa
+++ HelmRelease: storage/openebs ServiceAccount: storage/minio-sa
@@ -0,0 +1,6 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: minio-sa
+
--- HelmRelease: storage/openebs ServiceAccount: storage/loki
+++ HelmRelease: storage/openebs ServiceAccount: storage/loki
@@ -0,0 +1,11 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: loki
+ namespace: storage
+ labels:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+automountServiceAccountToken: true
+
--- HelmRelease: storage/openebs ConfigMap: storage/openebs-alloy
+++ HelmRelease: storage/openebs ConfigMap: storage/openebs-alloy
@@ -0,0 +1,109 @@
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: openebs-alloy
+ namespace: storage
+ labels:
+ app.kubernetes.io/name: alloy
+ app.kubernetes.io/instance: openebs
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/part-of: alloy
+ app.kubernetes.io/component: config
+data:
+ config.alloy: |-
+ livedebugging {
+ enabled = false
+ }
+
+ discovery.kubernetes "openebs_pods_name" {
+ role = "pod"
+ }
+
+ discovery.relabel "openebs_pods_name" {
+ targets = discovery.kubernetes.openebs_pods_name.targets
+
+ rule {
+ source_labels = [
+ "__meta_kubernetes_pod_label_openebs_io_logging",
+ ]
+ separator = ";"
+ regex = "^true$"
+ action = "keep"
+ }
+
+ rule {
+ regex = "__meta_kubernetes_pod_label_(.+)"
+ action = "labelmap"
+ }
+
+ rule {
+ regex = "__meta_kubernetes_pod_label_(.+)"
+ action = "labelmap"
+ }
+
+ rule {
+ source_labels = ["__meta_kubernetes_namespace"]
+ separator = "/"
+ target_label = "job"
+ }
+
+ rule {
+ source_labels = ["__meta_kubernetes_pod_name"]
+ target_label = "pod"
+ }
+
+ rule {
+ source_labels = ["__meta_kubernetes_pod_container_name"]
+ target_label = "container"
+ }
+
+ rule {
+ source_labels = ["__meta_kubernetes_pod_node_name"]
+ target_label = "hostname"
+ }
+
+ rule {
+ source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
+ separator = "/"
+ target_label = "__path__"
+ replacement = "/var/log/pods/*$1/*.log"
+ }
+ }
+
+ local.file_match "openebs_pod_files" {
+ path_targets = discovery.relabel.openebs_pods_name.output
+ }
+
+ loki.source.file "openebs_pod_logs" {
+ targets = local.file_match.openebs_pod_files.targets
+ forward_to = [loki.process.openebs_process_logs.receiver]
+ }
+
+ loki.process "openebs_process_logs" {
+ forward_to = [loki.write.default.receiver]
+
+ stage.docker { }
+
+ stage.replace {
+ expression = "(\\n)"
+ replace = ""
+ }
+
+ stage.multiline {
+ firstline = "^ \\x1b\\[2m(\\d{4})-(\\d{2})-(\\d{2})T(\\d{2}):(\\d{2}):(\\d{2}).(\\d{6})Z"
+ }
+
+ stage.multiline {
+ firstline = "^ (\\d{4})-(\\d{2})-(\\d{2})T(\\d{2}):(\\d{2}):(\\d{2}).(\\d{6})Z"
+ }
+ }
+
+ loki.write "default" {
+ endpoint {
+ url = "http://openebs-loki:3100/loki/api/v1/push"
+ tenant_id = "openebs"
+ }
+ external_labels = {}
+ }
+
--- HelmRelease: storage/openebs ConfigMap: storage/openebs-minio
+++ HelmRelease: storage/openebs ConfigMap: storage/openebs-minio
@@ -0,0 +1,326 @@
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: openebs-minio
+ labels:
+ app: minio
+ release: openebs
+ heritage: Helm
+data:
+ initialize: "#!/bin/sh\nset -e # Have script exit in the event of a failed command.\n\
+ MC_CONFIG_DIR=\"/etc/minio/mc/\"\nMC=\"/usr/bin/mc --insecure --config-dir ${MC_CONFIG_DIR}\"\
+ \n\n# connectToMinio\n# Use a check-sleep-check loop to wait for MinIO service\
+ \ to be available\nconnectToMinio() {\n\tSCHEME=$1\n\tATTEMPTS=0\n\tLIMIT=29 #\
+ \ Allow 30 attempts\n\tset -e # fail if we can't read the keys.\n\tACCESS=$(cat\
+ \ /config/rootUser)\n\tSECRET=$(cat /config/rootPassword)\n\tset +e # The connections\
+ \ to minio are allowed to fail.\n\techo \"Connecting to MinIO server: $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT\"\
+ \n\tMC_COMMAND=\"${MC} alias set myminio $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT\
+ \ $ACCESS $SECRET\"\n\t$MC_COMMAND\n\tSTATUS=$?\n\tuntil [ $STATUS = 0 ]; do\n\
+ \t\tATTEMPTS=$(expr $ATTEMPTS + 1)\n\t\techo \\\"Failed attempts: $ATTEMPTS\\\"\
+ \n\t\tif [ $ATTEMPTS -gt $LIMIT ]; then\n\t\t\texit 1\n\t\tfi\n\t\tsleep 2 # 1\
+ \ second intervals between attempts\n\t\t$MC_COMMAND\n\t\tSTATUS=$?\n\tdone\n\t\
+ set -e # reset `e` as active\n\treturn 0\n}\n\n# checkBucketExists ($bucket)\n\
+ # Check if the bucket exists, by using the exit code of `mc ls`\ncheckBucketExists()\
+ \ {\n\tBUCKET=$1\n\tCMD=$(${MC} stat myminio/$BUCKET >/dev/null 2>&1)\n\treturn\
+ \ $?\n}\n\n# createBucket ($bucket, $policy, $purge)\n# Ensure bucket exists,\
+ \ purging if asked to\ncreateBucket() {\n\tBUCKET=$1\n\tPOLICY=$2\n\tPURGE=$3\n\
+ \tVERSIONING=$4\n\tOBJECTLOCKING=$5\n\n\t# Purge the bucket, if set & exists\n\
+ \t# Since PURGE is user input, check explicitly for `true`\n\tif [ $PURGE = true\
+ \ ]; then\n\t\tif checkBucketExists $BUCKET; then\n\t\t\techo \"Purging bucket\
+ \ '$BUCKET'.\"\n\t\t\tset +e # don't exit if this fails\n\t\t\t${MC} rm -r --force\
+ \ myminio/$BUCKET\n\t\t\tset -e # reset `e` as active\n\t\telse\n\t\t\techo \"\
+ Bucket '$BUCKET' does not exist, skipping purge.\"\n\t\tfi\n\tfi\n\n\t# Create\
+ \ the bucket if it does not exist and set objectlocking if enabled (NOTE: versioning\
+ \ will be not changed if OBJECTLOCKING is set because it enables versioning to\
+ \ the Buckets created)\n\tif ! checkBucketExists $BUCKET; then\n\t\tif [ ! -z\
+ \ $OBJECTLOCKING ]; then\n\t\t\tif [ $OBJECTLOCKING = true ]; then\n\t\t\t\techo\
+ \ \"Creating bucket with OBJECTLOCKING '$BUCKET'\"\n\t\t\t\t${MC} mb --with-lock\
+ \ myminio/$BUCKET\n\t\t\telif [ $OBJECTLOCKING = false ]; then\n\t\t\t\techo \"\
+ Creating bucket '$BUCKET'\"\n\t\t\t\t${MC} mb myminio/$BUCKET\n\t\t\tfi\n\t\t\
+ elif [ -z $OBJECTLOCKING ]; then\n\t\t\techo \"Creating bucket '$BUCKET'\"\n\t\
+ \t\t${MC} mb myminio/$BUCKET\n\t\telse\n\t\t\techo \"Bucket '$BUCKET' already\
+ \ exists.\"\n\t\tfi\n\tfi\n\n\t# set versioning for bucket if objectlocking is\
+ \ disabled or not set\n\tif [ $OBJECTLOCKING = false ]; then\n\t\tif [ ! -z $VERSIONING\
+ \ ]; then\n\t\t\tif [ $VERSIONING = true ]; then\n\t\t\t\techo \"Enabling versioning\
+ \ for '$BUCKET'\"\n\t\t\t\t${MC} version enable myminio/$BUCKET\n\t\t\telif [\
+ \ $VERSIONING = false ]; then\n\t\t\t\techo \"Suspending versioning for '$BUCKET'\"\
+ \n\t\t\t\t${MC} version suspend myminio/$BUCKET\n\t\t\tfi\n\t\tfi\n\telse\n\t\t\
+ echo \"Bucket '$BUCKET' versioning unchanged.\"\n\tfi\n\n\t# At this point, the\
+ \ bucket should exist, skip checking for existence\n\t# Set policy on the bucket\n\
+ \techo \"Setting policy of bucket '$BUCKET' to '$POLICY'.\"\n\t${MC} anonymous\
+ \ set $POLICY myminio/$BUCKET\n}\n\n# Try connecting to MinIO instance\nscheme=http\n\
+ connectToMinio $scheme\n\n\n\n# Create the buckets\ncreateBucket chunks \"none\"\
+ \ false false false\ncreateBucket ruler \"none\" false false false\ncreateBucket\
+ \ admin \"none\" false false false"
+ add-user: |-
+ #!/bin/sh
+ set -e ; # Have script exit in the event of a failed command.
+ MC_CONFIG_DIR="/etc/minio/mc/"
+ MC="/usr/bin/mc --insecure --config-dir ${MC_CONFIG_DIR}"
+
+ # AccessKey and secretkey credentials file are added to prevent shell execution errors caused by special characters.
+ # Special characters for example : ',",<,>,{,}
+ MINIO_ACCESSKEY_SECRETKEY_TMP="/tmp/accessKey_and_secretKey_tmp"
+
+ # connectToMinio
+ # Use a check-sleep-check loop to wait for MinIO service to be available
+ connectToMinio() {
+ SCHEME=$1
+ ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
+ set -e ; # fail if we can't read the keys.
+ ACCESS=$(cat /config/rootUser) ; SECRET=$(cat /config/rootPassword) ;
+ set +e ; # The connections to minio are allowed to fail.
+ echo "Connecting to MinIO server: $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT" ;
+ MC_COMMAND="${MC} alias set myminio $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
+ $MC_COMMAND ;
+ STATUS=$? ;
+ until [ $STATUS = 0 ]
+ do
+ ATTEMPTS=`expr $ATTEMPTS + 1` ;
+ echo \"Failed attempts: $ATTEMPTS\" ;
+ if [ $ATTEMPTS -gt $LIMIT ]; then
+ exit 1 ;
+ fi ;
+ sleep 2 ; # 1 second intervals between attempts
+ $MC_COMMAND ;
+ STATUS=$? ;
+ done ;
+ set -e ; # reset `e` as active
+ return 0
+ }
+
+ # checkUserExists ()
+ # Check if the user exists, by using the exit code of `mc admin user info`
+ checkUserExists() {
+ CMD=$(${MC} admin user info myminio $(head -1 $MINIO_ACCESSKEY_SECRETKEY_TMP) > /dev/null 2>&1)
+ return $?
+ }
+
+ # createUser ($policy)
+ createUser() {
+ POLICY=$1
+ #check accessKey_and_secretKey_tmp file
+ if [[ ! -f $MINIO_ACCESSKEY_SECRETKEY_TMP ]];then
+ echo "credentials file does not exist"
+ return 1
+ fi
+ if [[ $(cat $MINIO_ACCESSKEY_SECRETKEY_TMP|wc -l) -ne 2 ]];then
+ echo "credentials file is invalid"
+ rm -f $MINIO_ACCESSKEY_SECRETKEY_TMP
+ return 1
+ fi
+ USER=$(head -1 $MINIO_ACCESSKEY_SECRETKEY_TMP)
+ # Create the user if it does not exist
+ if ! checkUserExists ; then
+ echo "Creating user '$USER'"
+ cat $MINIO_ACCESSKEY_SECRETKEY_TMP | ${MC} admin user add myminio
+ else
+ echo "User '$USER' already exists."
+ fi
+ #clean up credentials files.
+ rm -f $MINIO_ACCESSKEY_SECRETKEY_TMP
+
+ # set policy for user
+ if [ ! -z $POLICY -a $POLICY != " " ] ; then
+ echo "Adding policy '$POLICY' for '$USER'"
+ set +e ; # policy already attach errors out, allow it.
+ ${MC} admin policy attach myminio $POLICY --user=$USER
+ set -e
+ else
+ echo "User '$USER' has no policy attached."
+ fi
+ }
+
+ # Try connecting to MinIO instance
+ scheme=http
+ connectToMinio $scheme
+
+
+
+ # Create the users
+ echo logs-user > $MINIO_ACCESSKEY_SECRETKEY_TMP
+ echo supersecretpassword >> $MINIO_ACCESSKEY_SECRETKEY_TMP
+ createUser readwrite
+ add-policy: |-
+ #!/bin/sh
+ set -e ; # Have script exit in the event of a failed command.
+ MC_CONFIG_DIR="/etc/minio/mc/"
+ MC="/usr/bin/mc --insecure --config-dir ${MC_CONFIG_DIR}"
+
+ # connectToMinio
+ # Use a check-sleep-check loop to wait for MinIO service to be available
+ connectToMinio() {
+ SCHEME=$1
+ ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
+ set -e ; # fail if we can't read the keys.
+ ACCESS=$(cat /config/rootUser) ; SECRET=$(cat /config/rootPassword) ;
+ set +e ; # The connections to minio are allowed to fail.
+ echo "Connecting to MinIO server: $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT" ;
+ MC_COMMAND="${MC} alias set myminio $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
+ $MC_COMMAND ;
+ STATUS=$? ;
+ until [ $STATUS = 0 ]
+ do
+ ATTEMPTS=`expr $ATTEMPTS + 1` ;
+ echo \"Failed attempts: $ATTEMPTS\" ;
+ if [ $ATTEMPTS -gt $LIMIT ]; then
+ exit 1 ;
+ fi ;
+ sleep 2 ; # 1 second intervals between attempts
+ $MC_COMMAND ;
+ STATUS=$? ;
+ done ;
+ set -e ; # reset `e` as active
+ return 0
+ }
+
+ # checkPolicyExists ($policy)
+ # Check if the policy exists, by using the exit code of `mc admin policy info`
+ checkPolicyExists() {
+ POLICY=$1
+ CMD=$(${MC} admin policy info myminio $POLICY > /dev/null 2>&1)
+ return $?
+ }
+
+ # createPolicy($name, $filename)
+ createPolicy () {
+ NAME=$1
+ FILENAME=$2
+
+ # Create the name if it does not exist
+ echo "Checking policy: $NAME (in /config/$FILENAME.json)"
+ if ! checkPolicyExists $NAME ; then
+ echo "Creating policy '$NAME'"
+ else
+ echo "Policy '$NAME' already exists."
+ fi
+ ${MC} admin policy create myminio $NAME /config/$FILENAME.json
+
+ }
+
+ # Try connecting to MinIO instance
+ scheme=http
+ connectToMinio $scheme
+ add-svcacct: |-
+ #!/bin/sh
+ set -e ; # Have script exit in the event of a failed command.
+ MC_CONFIG_DIR="/etc/minio/mc/"
+ MC="/usr/bin/mc --insecure --config-dir ${MC_CONFIG_DIR}"
+
+ # AccessKey and secretkey credentials file are added to prevent shell execution errors caused by special characters.
+ # Special characters for example : ',",<,>,{,}
+ MINIO_ACCESSKEY_SECRETKEY_TMP="/tmp/accessKey_and_secretKey_svcacct_tmp"
+
+ # connectToMinio
+ # Use a check-sleep-check loop to wait for MinIO service to be available
+ connectToMinio() {
+ SCHEME=$1
+ ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
+ set -e ; # fail if we can't read the keys.
+ ACCESS=$(cat /config/rootUser) ; SECRET=$(cat /config/rootPassword) ;
+ set +e ; # The connections to minio are allowed to fail.
[Diff truncated by flux-local]
--- HelmRelease: storage/openebs ConfigMap: storage/loki
+++ HelmRelease: storage/openebs ConfigMap: storage/loki
@@ -0,0 +1,100 @@
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: loki
+ namespace: storage
+ labels:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+data:
+ config.yaml: |2
+
+ auth_enabled: true
+ bloom_build:
+ builder:
+ planner_address: ""
+ enabled: false
+ bloom_gateway:
+ client:
+ addresses: ""
+ enabled: false
+ common:
+ compactor_address: 'http://openebs-loki:3100'
+ path_prefix: /var/loki
+ replication_factor: 3
+ storage:
+ s3:
+ access_key_id: root-user
+ bucketnames: chunks
+ endpoint: openebs-minio.storage.svc:9000
+ insecure: true
+ s3forcepathstyle: true
+ secret_access_key: supersecretpassword
+ frontend:
+ scheduler_address: ""
+ tail_proxy_url: ""
+ frontend_worker:
+ scheduler_address: ""
+ index_gateway:
+ mode: simple
+ ingester:
+ chunk_encoding: snappy
+ limits_config:
+ ingestion_burst_size_mb: 1000
+ ingestion_rate_mb: 10000
+ max_cache_freshness_per_query: 10m
+ max_label_names_per_series: 20
+ query_timeout: 300s
+ reject_old_samples: true
+ reject_old_samples_max_age: 168h
+ split_queries_by_interval: 15m
+ volume_enabled: true
+ memberlist:
+ join_members:
+ - loki-memberlist
+ pattern_ingester:
+ enabled: false
+ querier:
+ max_concurrent: 1
+ query_range:
+ align_queries_with_step: true
+ ruler:
+ storage:
+ s3:
+ bucketnames: ruler
+ type: s3
+ wal:
+ dir: /var/loki/ruler-wal
+ runtime_config:
+ file: /etc/loki/runtime-config/runtime-config.yaml
+ schema_config:
+ configs:
+ - from: "2024-04-01"
+ index:
+ period: 24h
+ prefix: loki_index_
+ object_store: s3
+ schema: v13
+ store: tsdb
+ server:
+ grpc_listen_port: 9095
+ http_listen_port: 3100
+ http_server_read_timeout: 600s
+ http_server_write_timeout: 600s
+ storage_config:
+ bloom_shipper:
+ working_directory: /var/loki/data/bloomshipper
+ boltdb_shipper:
+ index_gateway_client:
+ server_address: ""
+ hedging:
+ at: 250ms
+ max_per_second: 20
+ up_to: 3
+ tsdb_shipper:
+ index_gateway_client:
+ server_address: ""
+ tracing:
+ enabled: true
+
--- HelmRelease: storage/openebs ConfigMap: storage/loki-runtime
+++ HelmRelease: storage/openebs ConfigMap: storage/loki-runtime
@@ -0,0 +1,13 @@
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: loki-runtime
+ namespace: storage
+ labels:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+data:
+ runtime-config.yaml: |
+ {}
+
--- HelmRelease: storage/openebs StorageClass: storage/openebs-loki-localpv
+++ HelmRelease: storage/openebs StorageClass: storage/openebs-loki-localpv
@@ -0,0 +1,16 @@
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ annotations:
+ cas.openebs.io/config: |
+ - name: StorageType
+ value: "hostpath"
+ - name: BasePath
+ value: "/var/local/openebs/localpv-hostpath/loki"
+ openebs.io/cas-type: local
+ name: openebs-loki-localpv
+provisioner: openebs.io/local
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+
--- HelmRelease: storage/openebs StorageClass: storage/openebs-minio-localpv
+++ HelmRelease: storage/openebs StorageClass: storage/openebs-minio-localpv
@@ -0,0 +1,16 @@
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ annotations:
+ cas.openebs.io/config: |
+ - name: StorageType
+ value: "hostpath"
+ - name: BasePath
+ value: "/var/local/openebs/localpv-hostpath/minio"
+ openebs.io/cas-type: local
+ name: openebs-minio-localpv
+provisioner: openebs.io/local
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+
--- HelmRelease: storage/openebs ClusterRole: storage/openebs-alloy
+++ HelmRelease: storage/openebs ClusterRole: storage/openebs-alloy
@@ -0,0 +1,104 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: openebs-alloy
+ labels:
+ app.kubernetes.io/name: alloy
+ app.kubernetes.io/instance: openebs
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/part-of: alloy
+ app.kubernetes.io/component: rbac
+rules:
+- apiGroups:
+ - ''
+ - discovery.k8s.io
+ - networking.k8s.io
+ resources:
+ - endpoints
+ - endpointslices
+ - ingresses
+ - nodes
+ - nodes/proxy
+ - nodes/metrics
+ - pods
+ - services
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - ''
+ resources:
+ - pods
+ - pods/log
+ - namespaces
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - monitoring.grafana.com
+ resources:
+ - podlogs
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - monitoring.coreos.com
+ resources:
+ - prometheusrules
+ verbs:
+ - get
+ - list
+ - watch
+- nonResourceURLs:
+ - /metrics
+ verbs:
+ - get
+- apiGroups:
+ - monitoring.coreos.com
+ resources:
+ - podmonitors
+ - servicemonitors
+ - probes
+ - scrapeconfigs
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - ''
+ resources:
+ - events
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - ''
+ resources:
+ - configmaps
+ - secrets
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - apps
+ resources:
+ - replicasets
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - extensions
+ resources:
+ - replicasets
+ verbs:
+ - get
+ - list
+ - watch
+
--- HelmRelease: storage/openebs ClusterRole: storage/openebs-loki-clusterrole
+++ HelmRelease: storage/openebs ClusterRole: storage/openebs-loki-clusterrole
@@ -0,0 +1,19 @@
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ labels:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+ name: openebs-loki-clusterrole
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - configmaps
+ - secrets
+ verbs:
+ - get
+ - watch
+ - list
+
--- HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-alloy
+++ HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-alloy
@@ -0,0 +1,20 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: openebs-alloy
+ labels:
+ app.kubernetes.io/name: alloy
+ app.kubernetes.io/instance: openebs
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/part-of: alloy
+ app.kubernetes.io/component: rbac
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: openebs-alloy
+subjects:
+- kind: ServiceAccount
+ name: openebs-alloy
+ namespace: storage
+
--- HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-loki-clusterrolebinding
+++ HelmRelease: storage/openebs ClusterRoleBinding: storage/openebs-loki-clusterrolebinding
@@ -0,0 +1,17 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: openebs-loki-clusterrolebinding
+ labels:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+subjects:
+- kind: ServiceAccount
+ name: loki
+ namespace: storage
+roleRef:
+ kind: ClusterRole
+ name: openebs-loki-clusterrole
+ apiGroup: rbac.authorization.k8s.io
+
--- HelmRelease: storage/openebs Service: storage/openebs-alloy
+++ HelmRelease: storage/openebs Service: storage/openebs-alloy
@@ -0,0 +1,24 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: openebs-alloy
+ namespace: storage
+ labels:
+ app.kubernetes.io/name: alloy
+ app.kubernetes.io/instance: openebs
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/part-of: alloy
+ app.kubernetes.io/component: networking
+spec:
+ type: ClusterIP
+ selector:
+ app.kubernetes.io/name: alloy
+ app.kubernetes.io/instance: openebs
+ internalTrafficPolicy: Cluster
+ ports:
+ - name: http-metrics
+ port: 12345
+ targetPort: 12345
+ protocol: TCP
+
--- HelmRelease: storage/openebs Service: storage/openebs-minio-console
+++ HelmRelease: storage/openebs Service: storage/openebs-minio-console
@@ -0,0 +1,20 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: openebs-minio-console
+ labels:
+ app: minio
+ release: openebs
+ heritage: Helm
+spec:
+ type: ClusterIP
+ ports:
+ - name: http
+ port: 9001
+ protocol: TCP
+ targetPort: 9001
+ selector:
+ app: minio
+ release: openebs
+
--- HelmRelease: storage/openebs Service: storage/openebs-minio
+++ HelmRelease: storage/openebs Service: storage/openebs-minio
@@ -0,0 +1,21 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: openebs-minio
+ labels:
+ app: minio
+ release: openebs
+ heritage: Helm
+ monitoring: 'true'
+spec:
+ type: ClusterIP
+ ports:
+ - name: http
+ port: 9000
+ protocol: TCP
+ targetPort: 9000
+ selector:
+ app: minio
+ release: openebs
+
--- HelmRelease: storage/openebs Service: storage/openebs-minio-svc
+++ HelmRelease: storage/openebs Service: storage/openebs-minio-svc
@@ -0,0 +1,21 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: openebs-minio-svc
+ labels:
+ app: minio
+ release: openebs
+ heritage: Helm
+spec:
+ publishNotReadyAddresses: true
+ clusterIP: None
+ ports:
+ - name: http
+ port: 9000
+ protocol: TCP
+ targetPort: 9000
+ selector:
+ app: minio
+ release: openebs
+
--- HelmRelease: storage/openebs Service: storage/loki-memberlist
+++ HelmRelease: storage/openebs Service: storage/loki-memberlist
@@ -0,0 +1,22 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: loki-memberlist
+ namespace: storage
+ labels:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+spec:
+ type: ClusterIP
+ clusterIP: None
+ ports:
+ - name: tcp
+ port: 7946
+ targetPort: http-memberlist
+ protocol: TCP
+ selector:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+ app.kubernetes.io/part-of: memberlist
+
--- HelmRelease: storage/openebs Service: storage/loki-headless
+++ HelmRelease: storage/openebs Service: storage/loki-headless
@@ -0,0 +1,23 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: loki-headless
+ namespace: storage
+ labels:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+ app: loki
+ variant: headless
+ prometheus.io/service-monitor: 'false'
+spec:
+ clusterIP: None
+ ports:
+ - name: http-metrics
+ port: 3100
+ targetPort: http-metrics
+ protocol: TCP
+ selector:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+
--- HelmRelease: storage/openebs Service: storage/openebs-loki
+++ HelmRelease: storage/openebs Service: storage/openebs-loki
@@ -0,0 +1,26 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: openebs-loki
+ namespace: storage
+ labels:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+ app: loki
+spec:
+ type: ClusterIP
+ ports:
+ - name: http-metrics
+ port: 3100
+ targetPort: http-metrics
+ protocol: TCP
+ - name: grpc
+ port: 9095
+ targetPort: grpc
+ protocol: TCP
+ selector:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+ app.kubernetes.io/component: single-binary
+
--- HelmRelease: storage/openebs DaemonSet: storage/openebs-alloy
+++ HelmRelease: storage/openebs DaemonSet: storage/openebs-alloy
@@ -0,0 +1,81 @@
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: openebs-alloy
+ namespace: storage
+ labels:
+ app.kubernetes.io/name: alloy
+ app.kubernetes.io/instance: openebs
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/part-of: alloy
+spec:
+ minReadySeconds: 10
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: alloy
+ app.kubernetes.io/instance: openebs
+ template:
+ metadata:
+ annotations:
+ kubectl.kubernetes.io/default-container: alloy
+ labels:
+ app.kubernetes.io/name: alloy
+ app.kubernetes.io/instance: openebs
+ spec:
+ serviceAccountName: openebs-alloy
+ containers:
+ - name: alloy
+ image: docker.io/grafana/alloy:v1.8.1
+ imagePullPolicy: IfNotPresent
+ args:
+ - run
+ - /etc/alloy/config.alloy
+ - --storage.path=/tmp/alloy
+ - --server.http.listen-addr=0.0.0.0:12345
+ - --server.http.ui-path-prefix=/
+ - --stability.level=generally-available
+ env:
+ - name: ALLOY_DEPLOY_MODE
+ value: helm
+ - name: HOSTNAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ ports:
+ - containerPort: 12345
+ name: http-metrics
+ readinessProbe:
+ httpGet:
+ path: /-/ready
+ port: 12345
+ scheme: HTTP
+ initialDelaySeconds: 10
+ timeoutSeconds: 1
+ volumeMounts:
+ - name: config
+ mountPath: /etc/alloy
+ - name: varlog
+ mountPath: /var/log
+ readOnly: true
+ - name: config-reloader
+ image: quay.io/prometheus-operator/prometheus-config-reloader:v0.81.0
+ args:
+ - --watched-dir=/etc/alloy
+ - --reload-url=http://localhost:12345/-/reload
+ volumeMounts:
+ - name: config
+ mountPath: /etc/alloy
+ resources:
+ requests:
+ cpu: 10m
+ memory: 50Mi
+ dnsPolicy: ClusterFirst
+ volumes:
+ - name: config
+ configMap:
+ name: openebs-alloy
+ - name: varlog
+ hostPath:
+ path: /var/log
+
--- HelmRelease: storage/openebs StatefulSet: storage/openebs-minio
+++ HelmRelease: storage/openebs StatefulSet: storage/openebs-minio
@@ -0,0 +1,87 @@
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: openebs-minio
+ labels:
+ app: minio
+ release: openebs
+ heritage: Helm
+spec:
+ updateStrategy:
+ type: RollingUpdate
+ podManagementPolicy: Parallel
+ serviceName: openebs-minio-svc
+ replicas: 3
+ selector:
+ matchLabels:
+ app: minio
+ release: openebs
+ template:
+ metadata:
+ name: openebs-minio
+ labels:
+ app: minio
+ release: openebs
+ annotations:
+ checksum/secrets: b38f14fd791c3ad8559031ba36c0bfb017cad4ce8c06105b5bcb4b3751f0dfb3
+ spec:
+ securityContext:
+ fsGroup: 1000
+ fsGroupChangePolicy: OnRootMismatch
+ runAsGroup: 1000
+ runAsUser: 1000
+ serviceAccountName: minio-sa
+ containers:
+ - name: minio
+ image: quay.io/minio/minio:RELEASE.2024-12-18T13-15-44Z
+ imagePullPolicy: IfNotPresent
+ command:
+ - /bin/sh
+ - -ce
+ - /usr/bin/docker-entrypoint.sh minio server http://openebs-minio-{0...2}.openebs-minio-svc.storage.svc/export
+ -S /etc/minio/certs/ --address :9000 --console-address :9001
+ volumeMounts:
+ - name: export
+ mountPath: /export
+ ports:
+ - name: http
+ containerPort: 9000
+ - name: http-console
+ containerPort: 9001
+ env:
+ - name: MINIO_ROOT_USER
+ valueFrom:
+ secretKeyRef:
+ name: openebs-minio
+ key: rootUser
+ - name: MINIO_ROOT_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openebs-minio
+ key: rootPassword
+ - name: MINIO_PROMETHEUS_AUTH_TYPE
+ value: public
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ securityContext:
+ readOnlyRootFilesystem: false
+ volumes:
+ - name: minio-user
+ secret:
+ secretName: openebs-minio
+ volumeClaimTemplates:
+ - apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: export
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: openebs-minio-localpv
+ resources:
+ requests:
+ storage: 2Gi
+
--- HelmRelease: storage/openebs StatefulSet: storage/openebs-loki
+++ HelmRelease: storage/openebs StatefulSet: storage/openebs-loki
@@ -0,0 +1,147 @@
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: openebs-loki
+ namespace: storage
+ labels:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+ app.kubernetes.io/component: single-binary
+ app.kubernetes.io/part-of: memberlist
+spec:
+ replicas: 3
+ podManagementPolicy: Parallel
+ updateStrategy:
+ rollingUpdate:
+ partition: 0
+ serviceName: openebs-loki-headless
+ revisionHistoryLimit: 10
+ persistentVolumeClaimRetentionPolicy:
+ whenDeleted: Delete
+ whenScaled: Delete
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+ app.kubernetes.io/component: single-binary
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: loki
+ app.kubernetes.io/instance: openebs
+ app.kubernetes.io/component: single-binary
+ app: loki
+ app.kubernetes.io/part-of: memberlist
+ spec:
+ serviceAccountName: loki
+ automountServiceAccountToken: true
+ enableServiceLinks: true
+ securityContext:
+ fsGroup: 10001
+ runAsGroup: 10001
+ runAsNonRoot: true
+ runAsUser: 10001
+ terminationGracePeriodSeconds: 30
+ containers:
+ - name: loki-sc-rules
+ image: docker.io/kiwigrid/k8s-sidecar:1.30.2
+ imagePullPolicy: IfNotPresent
+ env:
+ - name: METHOD
+ value: WATCH
+ - name: LABEL
+ value: loki_rule
+ - name: FOLDER
+ value: /rules
+ - name: RESOURCE
+ value: both
+ - name: WATCH_SERVER_TIMEOUT
+ value: '60'
+ - name: WATCH_CLIENT_TIMEOUT
+ value: '60'
+ - name: LOG_LEVEL
+ value: INFO
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ readOnlyRootFilesystem: true
+ volumeMounts:
+ - name: sc-rules-volume
+ mountPath: /rules
+ - name: loki
+ image: docker.io/grafana/loki:3.4.2
+ imagePullPolicy: IfNotPresent
+ args:
+ - -config.file=/etc/loki/config/config.yaml
+ - -target=all
+ ports:
+ - name: http-metrics
+ containerPort: 3100
+ protocol: TCP
+ - name: grpc
+ containerPort: 9095
+ protocol: TCP
+ - name: http-memberlist
+ containerPort: 7946
+ protocol: TCP
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ readOnlyRootFilesystem: true
+ readinessProbe:
+ httpGet:
+ path: /ready
+ port: http-metrics
+ initialDelaySeconds: 30
+ timeoutSeconds: 1
+ volumeMounts:
+ - name: tmp
+ mountPath: /tmp
+ - name: config
+ mountPath: /etc/loki/config
+ - name: runtime-config
+ mountPath: /etc/loki/runtime-config
+ - name: storage
+ mountPath: /var/loki
+ - name: sc-rules-volume
+ mountPath: /rules
+ resources: {}
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ app.kubernetes.io/component: single-binary
+ topologyKey: kubernetes.io/hostname
+ volumes:
+ - name: tmp
+ emptyDir: {}
+ - name: config
+ configMap:
+ name: loki
+ items:
+ - key: config.yaml
+ path: config.yaml
+ - name: runtime-config
+ configMap:
+ name: loki-runtime
+ - name: sc-rules-volume
+ emptyDir: {}
+ volumeClaimTemplates:
+ - apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: storage
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: openebs-loki-localpv
+ resources:
+ requests:
+ storage: 2Gi
+
--- HelmRelease: storage/openebs Job: storage/openebs-minio-post-job
+++ HelmRelease: storage/openebs Job: storage/openebs-minio-post-job
@@ -0,0 +1,77 @@
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: openebs-minio-post-job
+ labels:
+ app: minio-post-job
+ release: openebs
+ heritage: Helm
+ annotations:
+ helm.sh/hook: post-install,post-upgrade
+ helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation
+spec:
+ template:
+ metadata:
+ labels:
+ app: minio-job
+ release: openebs
+ spec:
+ restartPolicy: OnFailure
+ volumes:
+ - name: etc-path
+ emptyDir: {}
+ - name: tmp
+ emptyDir: {}
+ - name: minio-configuration
+ projected:
+ sources:
+ - configMap:
+ name: openebs-minio
+ - secret:
+ name: openebs-minio
+ serviceAccountName: minio-sa
+ containers:
+ - name: minio-make-bucket
+ image: quay.io/minio/mc:RELEASE.2024-11-21T17-21-54Z
+ imagePullPolicy: IfNotPresent
+ command:
+ - /bin/sh
+ - /config/initialize
+ env:
+ - name: MINIO_ENDPOINT
+ value: openebs-minio
+ - name: MINIO_PORT
+ value: '9000'
+ volumeMounts:
+ - name: etc-path
+ mountPath: /etc/minio/mc
+ - name: tmp
+ mountPath: /tmp
+ - name: minio-configuration
+ mountPath: /config
+ resources:
+ requests:
+ memory: 128Mi
+ - name: minio-make-user
+ image: quay.io/minio/mc:RELEASE.2024-11-21T17-21-54Z
+ imagePullPolicy: IfNotPresent
+ command:
+ - /bin/sh
+ - /config/add-user
+ env:
+ - name: MINIO_ENDPOINT
+ value: openebs-minio
+ - name: MINIO_PORT
+ value: '9000'
+ volumeMounts:
+ - name: etc-path
+ mountPath: /etc/minio/mc
+ - name: tmp
+ mountPath: /tmp
+ - name: minio-configuration
+ mountPath: /config
+ resources:
+ requests:
+ memory: 128Mi
+
This PR contains the following updates:
openebs/openebs (openebs): 4.2.0 → 4.4.0

Release Notes
v4.4.0 (Compare Source)
OpenEBS 4.4.0 Release Notes
Release Summary
OpenEBS version 4.4 introduces several functional fixes and new features focused on improving data security, user experience, high availability (HA), replica rebuilds, and overall stability. The key highlights are LocalPV LVM snapshot restores. In addition, the release includes various usability and functional fixes for Mayastor, ZFS, LocalPV LVM, and LocalPV Hostpath provisioners, along with documentation enhancements to help users and new contributors get started quickly.
Replicated Storage (Mayastor)
New Features and Enhancements
It's now possible to expand a DiskPool's capacity by expanding the underlying storage device.
You can now configure the cluster size when creating a pool; larger cluster sizes may be beneficial when using very large storage devices (a DiskPool manifest sketch follows this list).
Cordoning functionality has been extended to pools. This can be used to prevent new replicas from being created on a pool, and also as a way of migrating volume replicas off a pool via scale-up/scale-down operations.
Similar to volumes, when snapshots with the retain policy are deleted, the underlying storage is kept by the provisioner and must be deleted with provisioner-specific commands.
We've added a plugin sub-command to delete these orphaned snapshots safely.
Node spread topology may now be used.
Affinity group volumes may now be scaled down to 1 replica, provided the anti-affinity across nodes is not violated.
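For orientation, pools are declared as DiskPool custom resources. Below is a minimal, hedged sketch; the node name and device path are placeholders, and the new cluster-size option is left as a comment because this PR does not show its exact field name.

```yaml
# Minimal DiskPool sketch (names and device path are placeholders).
apiVersion: openebs.io/v1beta2
kind: DiskPool
metadata:
  name: pool-on-node-1
  namespace: openebs
spec:
  node: node-1                          # Kubernetes node hosting the pool
  disks:
    - /dev/disk/by-id/ata-EXAMPLE       # underlying block device
  # 4.4 adds a configurable cluster size at pool creation; the exact
  # spec key is not shown in this PR, so consult the Mayastor docs.
```

Capacity expansion then works by growing the underlying device (for example, resizing a cloud disk) and letting the pool pick up the new size.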
Bug Fixes and Improvements
12.0.14
Release Notes
Limitations
Known Issues
LocalPV ZFS
New Features and Enhancements
Bumps the Go runtime and all dependencies to their latest available releases.
Bug Fixes and Improvements
The encryption property is no longer set in buildCloneCreateArgs(), since clones automatically inherit encryption from the parent snapshot and the property cannot be set (it's read-only).
Continuous Integration and Maintenance
Introduction of the staging CI, which enables creating a staging build for e2e testing before releasing, the artifacts are then copied over to production build hosts.
Release Notes
LocalPV LVM
New Features and Enhancements
LocalPV-LVM snapshots had limited capabilities; restoring a snapshot to a volume is now supported (see the restore sketch after this list).
LocalPV-LVM will clean up the thinpool LV after deleting the last thin volume of the thinpool.
Thinpool statistics are now recorded in the lvmnode CR, and CreateVolume requests fail fast if a thick PVC's size cannot be accommodated by any VG.
Thinpool free space is considered while scheduling thin PVCs in the SpaceWeighted algorithm.
Updates the Go runtime, k8s modules, golint packages, etc. by @jochenseeber in openebs/lvm-localpv#416
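To illustrate the new restore capability, the flow follows the standard Kubernetes CSI snapshot/restore pattern: snapshot an existing LVM-backed PVC, then create a new PVC using that snapshot as its dataSource. All names below are placeholders, and a working lvm-localpv VolumeSnapshotClass is assumed.

```yaml
# Hypothetical names throughout; assumes an lvm-localpv setup with a
# VolumeSnapshotClass named "lvm-snapclass".
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snap
spec:
  volumeSnapshotClassName: lvm-snapclass
  source:
    persistentVolumeClaimName: data-pvc       # existing LVM-backed PVC
---
# Restore: a new PVC sourced from the snapshot above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-restored
spec:
  storageClassName: openebs-lvm               # placeholder StorageClass
  dataSource:
    name: data-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                           # must be >= the snapshot size
```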
Continuous Integration and Maintenance
Introduction of the staging CI, which enables creating a staging build for e2e testing before releasing, the artifacts are then copied over to production build hosts.
Release Notes
Known Issues
Thin pool capacity is not unmapped/reclaimed, and it is not tracked in the lvmnode CR, which may lead to unexpected behaviour when scheduling volumes. Read more about this in openebs/lvm-localpv#382.
LocalPV Hostpath
Release Notes
LocalPV RawFile
New Features and Enhancements
Release Notes
Make sure you follow the install guide when upgrading.
Refer to the Rawfile v0.12.0 release for detailed changes.
Known Issues
Controller Pod Restart on Single Node Setup
After upgrading, single node setups may face issues where the ZFS-localpv/LVM-localpv controller pod does not enter the Running state due to changes in the controller manifest (now a Deployment) and missing affinity rules.
Workaround: Delete the old controller pod to allow the new pod to be scheduled correctly. This does not happen if upgrading from the previous release of ZFS-localpv/LVM-localpv.
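A hedged sketch of that workaround; the namespace is a placeholder for wherever your controller runs, and the pod name is whatever the old controller pod is called:

```sh
# Locate the stuck controller pod, then delete it so the new
# Deployment-managed replica can be scheduled.
kubectl -n openebs get pods | grep -i controller
kubectl -n openebs delete pod <old-controller-pod-name>
```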
Upgrade and Backward Incompatibilities
v4.3.3 (Compare Source)
This patch brings in a few fixes, as well as a needed update of the Bitnami repo. For more details see bitnami/charts#35164.
What's Changed
Full Changelog: openebs/openebs@v4.3.2...v4.3.3
v4.3.2 (Compare Source)
What's Changed
Full Changelog: openebs/openebs@v4.3.1...v4.3.2
v4.3.1 (Compare Source)
Fixes
kubectl openebs upgrade fails for localpvs if Mayastor is disabled. This is fixed; the detection of whether Mayastor is enabled was bugged. (@niladrih, #3967)
Full Changelog: openebs/openebs@v4.3.0...v4.3.1
v4.3.0 (Compare Source)
OpenEBS 4.3.0 Release Notes
Release Summary
OpenEBS version 4.3 introduces several functional fixes and new features focused on improving data security, user experience, high availability (HA), replica rebuilds, and overall stability. The key highlights are Mayastor's support for at-rest data encryption and a new OpenEBS plugin that allows users to interact with all engines supplied by the OpenEBS project. In addition, the release includes various usability and functional fixes for Mayastor, ZFS, LocalPV LVM, and LocalPV Hostpath provisioners, along with documentation enhancements to help users and new contributors get started quickly.
Umbrella Features
A new CLI plugin, kubectl openebs.
System state can be dumped with the kubectl openebs dump system command.
Mayastor-specific functionality was previously provided by the kubectl-mayastor plugin.
Replicated Storage (Mayastor)
New Feature
OpenEBS offers support for data-at-rest encryption to help ensure the confidentiality of persistent data stored on disk.
With this capability, any disk pool configured with a user-defined encryption key can host encrypted volume replicas.
This feature is particularly beneficial in environments requiring compliance with regulatory or security standards.
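As a rough sketch of how this fits together: the key material lives in a Kubernetes Secret, and the DiskPool is created with a reference to it. The encryption stanza below is hypothetical (this PR does not show the real schema), so treat it as illustrative only and consult the Mayastor data-at-rest encryption docs.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pool-encryption-key                   # placeholder name
  namespace: openebs
stringData:
  key: "<key-material>"                       # placeholder key material
---
apiVersion: openebs.io/v1beta2
kind: DiskPool
metadata:
  name: encrypted-pool
  namespace: openebs
spec:
  node: node-1
  disks:
    - /dev/disk/by-id/ata-EXAMPLE
  # HYPOTHETICAL field: the actual key-reference key is not shown in
  # this PR; check the Mayastor docs for the real spec.
  # encryption:
  #   keySecretName: pool-encryption-key
```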
Enhancements
Upgrading
This means that a volume status may now be reported as Degraded whereas it would previously have been reported as Online. This has a particular impact for unpublished volumes (in other words, volumes which are not mounted by a pod), since volume rebuilds are currently not available for unpublished volumes.
This behaviour can be reverted by setting a helm chart variable: agents.core.volumeHealth=false.
The new Loki/Alloy logging components can be disabled with loki.enabled=false and alloy.enabled=false; a values sketch follows.
agents.core.volumeHealth=false.loki.enabled=false, alloy.enabled=false.Release Notes
Limitations
Known Issues
Local Storage (LocalPV ZFS, LocalPV LVM, LocalPV Hostpath)
Fixes and Enhancements
LocalPV ZFS Enhancements
LocalPV ZFS Fixes
LocalPV LVM Enhancements
LocalPV Hostpath Enhancements
Release Notes
LocalPV ZFS
LocalPV LVM
LocalPV Hostpath
Limitations
LVM-localpv has support for volume snapshots, but it doesn't support restoring from a snapshot yet; that is on our roadmap.
Known Issues
Controller Pod Restart on Single Node Setup
After upgrading, single node setups may face issues where the ZFS-localpv/LVM-localpv controller pod does not enter the Running state due to changes in the controller manifest (now a Deployment) and missing affinity rules.
Workaround: Delete the old controller pod to allow the new pod to be scheduled correctly. This does not happen if upgrading from the previous release of ZFS-localpv/LVM-localpv.
Thin pool issue with LocalPV-LVM
We do not unmap/reclaim thin pool capacity. It is also not tracked in the lvmnode CR, which can cause unexpected behaviour when scheduling volumes. Refer to "When using lvm thinpool type, csistoragecapacities calculation is incorrect" (openebs/lvm-localpv#382).
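Until that is fixed, thin pool utilisation can be checked directly on the node with standard LVM tooling; a minimal sketch, where myvg is a placeholder volume group name:

```sh
# Show thin pool data/metadata utilisation for the volume group used
# by lvm-localpv ("myvg" is a placeholder).
sudo lvs -a -o lv_name,lv_size,data_percent,metadata_percent myvg
```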
Upgrade and Backward Incompatibilities
Configuration
📅 Schedule: Branch creation - "every weekend" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.