Merged
11 changes: 7 additions & 4 deletions modules/virt-add-custom-golden-image-heterogeneous-cluster.adoc
@@ -2,7 +2,7 @@
//
// * virt/virtual_machines/advanced_vm_management/virt-creating-vms-from-rh-images-overview.adoc

:_mod-docs-content-type: PROCEDURE
[id="virt-add-custom-golden-image-heterogeneous-cluster_{context}"]

= Adding a custom golden image in a heterogeneous cluster
@@ -39,7 +39,7 @@ spec:
- metadata:
name: custom-image1
annotations:
ssp.kubevirt.io/dict.architectures: "<architecture_list>" <1>
ssp.kubevirt.io/dict.architectures: "<architecture_list>"
spec:
schedule: "0 */12 * * *"
template:
@@ -48,10 +48,13 @@ spec:
registry:
url: docker://myprivateregistry/custom1
managedDataSource: custom1
retentionPolicy: "All"
#...
----
<1> The comma-separated list of supported architectures for this image. For example, if the image supports `amd64` and `arm64` architectures, the value would be `"amd64,arm64"`.
+
where:
+
`<architecture_list>`:: Specifies a comma-separated list of supported architectures for this image. For example, if the image supports `amd64` and `arm64` architectures, the value would be `"amd64,arm64"`.
+
[NOTE]
====
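Assembled from the fragments above, a complete `dataImportCronTemplates` entry for a multi-architecture image might look like the following sketch. The registry URL, image name, and storage size are illustrative placeholders, not values from the original procedure:

[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  dataImportCronTemplates:
  - metadata:
      name: custom-image1
      annotations:
        # List every architecture the image supports, comma-separated
        ssp.kubevirt.io/dict.architectures: "amd64,arm64"
    spec:
      schedule: "0 */12 * * *"
      managedDataSource: custom1
      retentionPolicy: "All"
      template:
        spec:
          source:
            registry:
              url: docker://myprivateregistry/custom1
          storage:
            resources:
              requests:
                storage: 30Gi
----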
5 changes: 3 additions & 2 deletions modules/virt-adding-container-disk-as-cd.adoc
@@ -34,15 +34,16 @@ spec:
devices:
disks:
- name: virtiocontainerdisk
bootOrder: 2 <1>
bootOrder: 2
cdrom:
bus: sata
volumes:
- containerDisk:
image: container-native-virtualization/virtio-win
name: virtiocontainerdisk
----
<1> {VirtProductName} boots the VM disks in the order defined in the `VirtualMachine` manifest. You can either define other VM disks that boot before the `container-native-virtualization/virtio-win` container disk or use the optional `bootOrder` parameter to ensure the VM boots from the correct disk. If you configure the boot order for a disk, you must configure the boot order for the other disks.
+
{VirtProductName} boots the VM disks in the order defined in the `VirtualMachine` manifest. You can either define other VM disks that boot before the `container-native-virtualization/virtio-win` container disk, or use the optional `bootOrder` parameter to ensure the VM boots from the correct disk. If you configure the boot order for a disk, you must configure the boot order for the other disks.
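For example, a VM with a root disk and the `virtio-win` CD-ROM could declare an explicit boot order on both disks so that the root disk always boots first. This fragment is a sketch; the `rootdisk` name is an assumption for illustration, not a value from the procedure:

[source,yaml]
----
spec:
  domain:
    devices:
      disks:
      - name: rootdisk            # assumed root disk; boots first
        bootOrder: 1
        disk:
          bus: virtio
      - name: virtiocontainerdisk # driver CD-ROM boots second
        bootOrder: 2
        cdrom:
          bus: sata
----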

. Apply the changes:
* If the VM is not running, run the following command:
3 changes: 2 additions & 1 deletion modules/virt-adding-public-key-vm-cli.adoc
@@ -42,9 +42,10 @@ $ oc create -f <manifest_file>.yaml
[source,terminal]
----
$ virtctl start vm example-vm -n example-namespace
----

.Verification

* Get the VM configuration:
+
[source,terminal]
38 changes: 22 additions & 16 deletions modules/virt-adding-vm-to-service-mesh.adoc
@@ -17,7 +17,7 @@ To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These
.Prerequisites

* You have installed the {oc-first}.
* You have installed the Service Mesh Operator.
* You have installed the {SMProductShortName} Operator.

.Procedure

@@ -39,15 +39,15 @@ spec:
metadata:
labels:
kubevirt.io/vm: vm-istio
app: vm-istio <1>
app: vm-istio
annotations:
sidecar.istio.io/inject: "true" <2>
sidecar.istio.io/inject: "true"
spec:
domain:
devices:
interfaces:
- name: default
masquerade: {} <3>
masquerade: {}
disks:
- disk:
bus: virtio
@@ -67,20 +67,23 @@ spec:
image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel
name: containerdisk
----
<1> The key/value pair (label) that must be matched to the service selector attribute.
<2> The annotation to enable automatic sidecar injection.
<3> The binding method (masquerade mode) for use with the default pod network.
** `spec.template.metadata.labels.app` specifies the key/value pair (label) that must be matched to the service selector attribute.
** `spec.template.metadata.annotations.sidecar.istio.io/inject` is the annotation to enable automatic sidecar injection.
** `spec.template.spec.domain.devices.interfaces.masquerade` is the binding method (masquerade mode) for use with the default pod network.

. Apply the VM configuration:
. Run the following command to apply the VM configuration:
+
[source,terminal]
----
$ oc apply -f <vm_name>.yaml <1>
$ oc apply -f <vm_name>.yaml
----
<1> The name of the virtual machine YAML file.
+
where:
+
`<vm_name>`:: Specifies the name of the virtual machine YAML file.


. Create a `Service` object to expose your VM to the service mesh.
. Create a `Service` object to expose your VM to the service mesh:
+
[source,yaml]
----
@@ -90,18 +93,21 @@ metadata:
name: vm-istio
spec:
selector:
app: vm-istio <1>
app: vm-istio
ports:
- port: 8080
name: http
protocol: TCP
----
<1> The service selector that determines the set of pods targeted by a service. This attribute corresponds to the `spec.metadata.labels` field in the VM configuration file. In the above example, the `Service` object named `vm-istio` targets TCP port 8080 on any pod with the label `app=vm-istio`.
** `spec.selector.app` specifies the service selector that determines the set of pods targeted by a service. This attribute corresponds to the `spec.template.metadata.labels` field in the VM configuration file. In this example, the `Service` object named `vm-istio` targets TCP port 8080 on any pod with the label `app=vm-istio`.

. Create the service:
. Run the following command to create the service:
+
[source,terminal]
----
$ oc create -f <service_name>.yaml <1>
$ oc create -f <service_name>.yaml
----
<1> The name of the service YAML file.
+
where:
+
`<service_name>`:: Specifies the name of the service YAML file.
11 changes: 5 additions & 6 deletions modules/virt-adding-vtpm-to-vm.adoc
@@ -7,17 +7,16 @@
= Adding a vTPM device to a virtual machine

[role="_abstract"]
Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine
(VM) allows you to run a VM created from a Windows 11 image without a physical
TPM device. A vTPM device also stores secrets for that VM.
Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also stores secrets for that VM.

[IMPORTANT]
====
When you add a virtual Trusted Platform Module (vTPM) device to a Windows VM, it is important to make the vTPM device persistent. BitLocker drive encryption succeeds and the encryption system check passes even if the vTPM device is not persistent. However, if the vTPM device is not persistent, it is discarded on shutdown.
====

.Prerequisites
* You have installed the OpenShift CLI (`oc`).

* You have installed the {oc-first}.

.Procedure

@@ -45,8 +44,8 @@ spec:
persistent: true <2>
# ...
----
<1> Adds the vTPM device to the VM.
<2> Specifies that the vTPM device state persists after the VM is shut down. The default value is `false`.
** `spec.template.spec.domain.devices.tpm` specifies the vTPM device to add to the VM.
** `spec.template.spec.domain.devices.tpm.persistent` specifies that the vTPM device state persists after the VM is shut down. The default value is `false`.
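In manifest form, the two fields described above amount to the following stanza, shown here in isolation as a sketch:

[source,yaml]
----
spec:
  template:
    spec:
      domain:
        devices:
          tpm:                # adds the vTPM device to the VM
            persistent: true  # keep vTPM state across shutdowns; defaults to false
----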

. To apply your changes, save and exit the editor.

11 changes: 9 additions & 2 deletions modules/virt-assigning-pci-device-virtual-machine.adoc
@@ -10,6 +10,7 @@
When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough.

.Procedure

* Assign the PCI device to a virtual machine as a host device.
+
Example:
@@ -22,16 +23,22 @@ spec:
domain:
devices:
hostDevices:
- deviceName: nvidia.com/TU104GL_Tesla_T4 <1>
- deviceName: nvidia.com/TU104GL_Tesla_T4
name: hostdevices1
----
<1> The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.
+
where:
+
`deviceName`:: Specifies the name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.
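The `deviceName` value must match a resource name that the cluster administrator has permitted in the `HyperConverged` custom resource. As a sketch, the entry that exposes the Tesla T4 as a host device might look like the following; the PCI vendor-device ID shown is the one commonly documented for this card, so verify it against your hardware:

[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  permittedHostDevices:
    pciHostDevices:
    - pciDeviceSelector: "10DE:1EB8"            # vendor:device ID of the Tesla T4
      resourceName: nvidia.com/TU104GL_Tesla_T4 # must match deviceName in the VM spec
----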

.Verification

* Use the following command to verify that the host device is available from the virtual machine:
+
[source,terminal]
----
$ lspci -nnk | grep NVIDIA
----
+
Example output:
+
12 changes: 6 additions & 6 deletions modules/virt-assigning-vgpu-vm-cli.adoc
@@ -12,11 +12,11 @@ Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs).
.Prerequisites

* The mediated device is configured in the `HyperConverged` custom resource.
* The VM is stopped.
* The virtual machine (VM) is stopped.

.Procedure

* Assign the mediated device to a virtual machine (VM) by editing the `spec.domain.devices.gpus` stanza of the `VirtualMachine` manifest.
* Assign the mediated device to a VM by editing the `spec.domain.devices.gpus` stanza of the `VirtualMachine` manifest.
+
Example virtual machine manifest:
+
@@ -28,13 +28,13 @@ spec:
domain:
devices:
gpus:
- deviceName: nvidia.com/TU104GL_Tesla_T4 <1>
name: gpu1 <2>
- deviceName: nvidia.com/TU104GL_Tesla_T4
name: gpu1
- deviceName: nvidia.com/GRID_T4-2Q
name: gpu2
----
<1> The resource name associated with the mediated device.
<2> A name to identify the device on the VM.
** `spec.template.spec.domain.devices.gpus.deviceName` specifies the resource name associated with the mediated device.
** `spec.template.spec.domain.devices.gpus.name` specifies a name to identify the device on the VM.
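For reference, the mediated device types named by `deviceName` are configured ahead of time in the `HyperConverged` custom resource, as the prerequisites note. A minimal sketch follows; the `nvidia-231` type is commonly documented as mapping to the `GRID_T4-2Q` profile, but confirm the type for your GPU, and note that older releases spell the field `mediatedDevicesTypes`:

[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  mediatedDevicesConfiguration:
    mediatedDeviceTypes:
    - nvidia-231  # exposed to VMs as nvidia.com/GRID_T4-2Q
----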

.Verification
