Releases: piraeusdatastore/linstor-csi
v1.10.4
A user reported an issue where the delete-local: true snapshot parameter also affected the remote portion of the snapshot, causing complete loss of the snapshot. It also turned out that the new URL-based snapshot names did not work correctly in this case, creating InCluster://... based snapshot names when S3://... should have been used.
This has now been fixed, along with a fix to not run fsck for ROX volumes, as there is no use running fsck in read-only mode.
Changed
- Do not run fsck on read-only mounts.
Fixed
- Only delete the local snapshot of a backup when the delete-local parameter is set.
- Fixed the volume snapshot handle using the wrong format for S3 remotes when delete-local is set.
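For reference, a snapshot class using this parameter might look like the following sketch. This is not taken from the release itself: the parameter keys assume the snap.linstor.csi.linbit.com/ prefix convention used by linstor-csi, and the remote, bucket, and endpoint values are placeholders; check the Piraeus documentation for the exact names and required S3 credentials.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: linstor-s3-snapshots
driver: linstor.csi.linbit.com
deletionPolicy: Delete
parameters:
  # Ship snapshots to an S3 remote; bucket/endpoint values are placeholders.
  snap.linstor.csi.linbit.com/type: S3
  snap.linstor.csi.linbit.com/remote-name: my-s3-remote
  snap.linstor.csi.linbit.com/s3-bucket: my-bucket
  snap.linstor.csi.linbit.com/s3-endpoint: s3.example.com
  snap.linstor.csi.linbit.com/s3-signing-region: us-east-1
  # With the fix in this release, this removes only the in-cluster copy
  # after upload; the S3 copy is kept.
  snap.linstor.csi.linbit.com/delete-local: "true"
```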
v1.10.3
When we started running fsck before every mount, we did not think about the possibility of the volume being mounted multiple times on the same node, such as when using an RWO volume from multiple Pods running on the same node. This caused fsck to fail. We have fixed this by passing an option to fsck to skip mounted volumes.
Changed
- Do not run fsck for already mounted filesystems.
v1.10.2
We noticed an undesired change in behaviour regarding the new snapshot format. Previously, a remote snapshot would be searched for in all remotes. With the new snapshot ID, we integrated the remote name, so we skipped searching other remotes. However, there might be situations, such as when deploying a new cluster with differently named remotes, where this fails.
We now try to search additional remotes if the snapshot is not found by the complete ID.
Changed
- Search for snapshots in differently named remotes if using new snapshot ID format.
v1.10.1
This release fixes some issues that have been cropping up from time to time when using ext4 FS volumes. Notably, the resize2fs tasks did not want to work because fsck had not been run before mounting the filesystem. This caused all kinds of weird errors on mount, so we now ensure fsck is run before mounting. This does not affect xfs volumes, as fsck is a no-op there.
Added
- Run "fsck" before mounting a filesystem
v1.10.0
This is one of the biggest releases in some time.
We finally support RWX volumes via dynamically managed NFS exports. The exports are managed by NFS servers running in user space. This makes it possible to dynamically add new volumes and to migrate or fail over NFS volumes to other nodes independently.
The second big feature in this release is Group Snapshots. With this feature, you can request snapshots of multiple volumes that are taken at the same point in time.
Added
- Enable Group Snapshots for in-cluster snapshots.
- Add components for RWX volumes via NFS server.
Changed
- Changed the format of the snapshot ID generated by LINSTOR CSI, so that it contains the source volume as well as the remote configuration for S3 and L2L snapshots.
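Requesting one of the new RWX volumes is a standard ReadWriteMany claim. The sketch below assumes a StorageClass already configured for the NFS-backed volumes; the class name linstor-nfs is a placeholder, and the NFS-related StorageClass parameters themselves are documented separately.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  # ReadWriteMany is served through the user-space NFS export.
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  # Placeholder: a StorageClass configured for NFS-backed RWX volumes.
  storageClassName: linstor-nfs
```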
v1.9.0
This release contains various fixes and improvements. Notably, we ensure that very small volumes can still be provisioned based on the chosen filesystem, and also make use of LINSTOR 1.32 to allow better cloning of volumes using the dedicated API.
Changed
- Enforce larger minimum volume size for filesystem volumes.
- Skip LINSTOR-KV interaction for local and S3 snapshots.
- Switch to using LINSTOR clones instead of temporary snapshots.
- Ensure block volumes are usable on mount.
v1.8.1
v1.8.0
This release adds automatic reconciliation of VolumeSnapshotClasses with LINSTOR "Remotes". This fixes an issue when trying to restore volumes from an S3 backup to a new cluster: previously, you had to manually add the LINSTOR Remote for the snapshots to be found; this now happens automatically.
We've also updated the volume creation logic to report better errors on invalid configurations, such as RWX filesystem volumes.
Added
- Automatic addition of remotes configured in VolumeSnapshotClasses without the need to create snapshots first.
Changed
- Update validation of volume creation and registration requests, providing better validation of volumes.
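With the remote registered automatically from the snapshot class, restoring on a new cluster is a standard dataSource reference. The sketch below uses plain Kubernetes snapshot-restore semantics; the class and snapshot names are placeholders.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: linstor-storage  # placeholder
  # Restore from a VolumeSnapshot that references the S3 backup.
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-backup-snapshot  # placeholder
```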
v1.7.1
This release ensures that volumes restored from a snapshot that use the "wrong" filesystem can still be used.
Changed
- Update linstor-wait-until, adding a User-Agent header.
Fixed
- Set the FS type during mount operations based on the FS type stored in LINSTOR.
v1.7.0
This release adds support for LINSTOR's x-replicas-on-different feature via the xReplicasOnDifferent storage class parameter.
For example, you may want to have 2 copies of the volume available in each zone, with 4 copies in total:
...
parameters:
autoPlace: "4"
xReplicasOnDifferent: |
topology.kubernetes.io/zone: 2
Added
- Add XReplicasOnDifferent parameter.
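Put together, a complete storage class using the parameters from the example above might look like the following sketch; the class name is a placeholder, while the provisioner and the two parameters come from the release notes.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-zonal  # placeholder name
provisioner: linstor.csi.linbit.com
parameters:
  # Place 4 replicas in total...
  autoPlace: "4"
  # ...with at most 2 replicas sharing the same zone label value.
  xReplicasOnDifferent: |
    topology.kubernetes.io/zone: 2
```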