@@ -5,6 +5,7 @@
[id="cnf-image-based-upgrade-installing-lifecycle-agent-using-cli_{context}"]
= Installing the {lcao} by using the CLI

[role="_abstract"]
You can use the OpenShift CLI (`oc`) to install the {lcao}.

.Prerequisites
@@ -5,6 +5,7 @@
[id="cnf-image-based-upgrade-installing-lifecycle-agent-using-web-console_{context}"]
= Installing the {lcao} by using the web console

[role="_abstract"]
You can use the {product-title} web console to install the {lcao}.

.Prerequisites
13 changes: 9 additions & 4 deletions modules/ztp-image-based-upgrade-installing-lca.adoc
@@ -5,14 +5,16 @@
[id="ztp-image-based-upgrade-installing-lcao-with-gitops_{context}"]
= Installing the {lcao} with {ztp}

[role="_abstract"]
Install the {lcao} with {ztp-first} to perform an image-based upgrade.

.Procedure

. Extract the following CRs from the `ztp-site-generate` container image and push them to the `source-cr` directory:
+
--
.Example `LcaSubscriptionNS.yaml` file
Example `LcaSubscriptionNS.yaml` file:

[source,yaml]
----
apiVersion: v1
@@ -26,7 +28,8 @@ metadata:
kubernetes.io/metadata.name: openshift-lifecycle-agent
----

.Example `LcaSubscriptionOperGroup.yaml` file
Example `LcaSubscriptionOperGroup.yaml` file:

[source,yaml]
----
apiVersion: operators.coreos.com/v1
@@ -41,7 +44,8 @@ spec:
- openshift-lifecycle-agent
----

.Example `LcaSubscription.yaml` file
Example `LcaSubscription.yaml` file:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
@@ -61,7 +65,8 @@ status:
state: AtLatestKnown
----

.Example directory structure
Example directory structure:

[source,terminal]
----
├── kustomization.yaml
48 changes: 32 additions & 16 deletions modules/ztp-image-based-upgrade-installing-oadp.adoc
@@ -5,14 +5,16 @@
[id="ztp-image-based-upgrade-installing-oadp_{context}"]
= Installing and configuring the {oadp-short} Operator with {ztp}

[role="_abstract"]
Install and configure the {oadp-short} Operator with {ztp} before starting the upgrade.

.Procedure

. Extract the following CRs from the `ztp-site-generate` container image and push them to the `source-cr` directory:
+
--
.Example `OadpSubscriptionNS.yaml` file
Example `OadpSubscriptionNS.yaml` file:

[source,yaml]
----
apiVersion: v1
@@ -25,7 +27,8 @@ metadata:
kubernetes.io/metadata.name: openshift-adp
----

.Example `OadpSubscriptionOperGroup.yaml` file
Example `OadpSubscriptionOperGroup.yaml` file:

[source,yaml]
----
apiVersion: operators.coreos.com/v1
@@ -40,7 +43,8 @@ spec:
- openshift-adp
----

.Example `OadpSubscription.yaml` file
Example `OadpSubscription.yaml` file:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
@@ -60,7 +64,8 @@ status:
state: AtLatestKnown
----

.Example `OadpOperatorStatus.yaml` file
Example `OadpOperatorStatus.yaml` file:

[source,yaml]
----
apiVersion: operators.coreos.com/v1
@@ -90,7 +95,8 @@ status:
reason: InstallSucceeded
----

.Example directory structure
Example directory structure:

[source,terminal]
----
├── kustomization.yaml
@@ -138,7 +144,8 @@ spec:
.. Extract the following CRs from the `ztp-site-generate` container image and push them to the `source-cr` directory:
+
--
.Example `OadpDataProtectionApplication.yaml` file
Example `OadpDataProtectionApplication.yaml` file:

[source,yaml]
----
apiVersion: oadp.openshift.io/v1alpha1
@@ -179,10 +186,14 @@ status:
status: "True"
type: Reconciled
----
+
Where:
+
* `spec.configuration.restic.enable`: Must be set to `false` for an image-based upgrade because persistent volume contents are retained and reused after the upgrade.
* `bucket` and `prefix`: `bucket` defines the name of the bucket that is created in the S3 backend. `prefix` defines the name of the subdirectory that is automatically created in the bucket. The combination of `bucket` and `prefix` must be unique for each target cluster to avoid interference between clusters. To ensure a unique storage directory for each target cluster, you can use the {rh-rhacm-title} hub template function, for example, `prefix: {{hub .ManagedClusterName hub}}`.
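For example, several managed clusters can share one S3 bucket when the hub template resolves to a distinct prefix for each of them. The following sketch shows only the `objectStorage` portion of a `backupLocations` entry; the bucket name `backups` is an assumption:

[source,yaml]
----
# Sketch: objectStorage settings inside a spec.backupLocations entry of the
# DataProtectionApplication CR. The bucket name "backups" is an assumption.
objectStorage:
  bucket: backups # one bucket shared by all target clusters
  prefix: '{{hub .ManagedClusterName hub}}' # resolves to a unique subdirectory per cluster
----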

Example `OadpSecret.yaml` file:

[source,yaml]
----
apiVersion: v1
@@ -195,7 +206,8 @@ metadata:
type: Opaque
----

.Example `OadpBackupStorageLocationStatus.yaml` file
Example `OadpBackupStorageLocationStatus.yaml` file:

[source,yaml]
----
apiVersion: velero.io/v1
@@ -208,8 +220,9 @@ metadata:
status:
phase: Available
----

+
The `name` value in the `BackupStorageLocation` resource must follow the `<DataProtectionApplication.metadata.name>-<index>` pattern. The `<index>` represents the position of the corresponding `backupLocations` entry in the `spec.backupLocations` field in the `DataProtectionApplication` resource. The position starts from `1`. If the `metadata.name` value of the `DataProtectionApplication` resource is changed in the `OadpDataProtectionApplication.yaml` file, update the `metadata.name` field in the `BackupStorageLocation` resource accordingly.
+
The `OadpBackupStorageLocationStatus.yaml` CR verifies the availability of backup storage locations created by OADP.
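For example, if the `metadata.name` value of the `DataProtectionApplication` resource is `dataprotectionapplication` (an assumed name) and the resource defines two `backupLocations` entries, the status CR for the second entry can be sketched as follows:

[source,yaml]
----
# Sketch: status CR for the second backupLocations entry. The name follows
# the <DataProtectionApplication.metadata.name>-<index> pattern.
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: dataprotectionapplication-2 # index 2 = second backupLocations entry
  namespace: openshift-adp
status:
  phase: Available
----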
--

@@ -255,7 +268,10 @@ spec:
- fileName: OadpBackupStorageLocationStatus.yaml
policyName: "config-policy"
----
+
Where:
+
* `cloud`: Specify the credentials for your S3 storage backend.
* `OadpDataProtectionApplication.yaml`: If more than one `backupLocations` entry is defined in the `OadpDataProtectionApplication` CR, ensure that each location has a corresponding `OadpBackupStorageLocation` CR added for status tracking. Ensure that the name of each additional `OadpBackupStorageLocation` CR is overridden with the correct index as described in the example `OadpBackupStorageLocationStatus.yaml` file.
* `s3Url`: Specify the URL for your S3-compatible bucket.
* `bucket` and `prefix`: `bucket` defines the name of the bucket that is created in the S3 backend. `prefix` defines the name of the subdirectory that is automatically created in the bucket. The combination of `bucket` and `prefix` must be unique for each target cluster to avoid interference between clusters. To ensure a unique storage directory for each target cluster, you can use the {rh-rhacm-title} hub template function, for example, `prefix: {{hub .ManagedClusterName hub}}`.
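As a sketch of the index override described above, an additional status-tracking entry in the `PolicyGenTemplate` might look as follows. The CR name `dataprotectionapplication-2` assumes the default `DataProtectionApplication` name and a second `backupLocations` entry:

[source,yaml]
----
# Sketch: status tracking for a second backup storage location. The
# metadata.name override is an assumption based on the naming pattern.
- fileName: OadpBackupStorageLocationStatus.yaml
  policyName: "config-policy"
  metadata:
    name: dataprotectionapplication-2
----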