33 changes: 15 additions & 18 deletions modules/nw-sriov-configuring-device.adoc
@@ -69,15 +69,14 @@ spec:
deviceType: vfio-pci
isRdma: false
----
+
`metadata.name`:: Specify a name for the `SriovNetworkNodePolicy` object.
`metadata.namespace`:: Specify the namespace where the SR-IOV Network Operator is installed.
`spec.resourceName`:: Specify the resource name of the SR-IOV device plugin. You can create multiple `SriovNetworkNodePolicy` objects for a resource name.
`spec.nodeSelector.feature.node.kubernetes.io/network-sriov.capable`:: Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
`spec.priority`:: Optional: Specify an integer value between `0` and `99`. A smaller number gets higher priority, so a priority of `10` is higher than a priority of `99`. The default value is `99`.
`spec.mtu`:: Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
`spec.numVfs`:: Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `127`.
`spec.nicSelector`:: The `nicSelector` mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters.
** `metadata.name` specifies a name for the `SriovNetworkNodePolicy` object.
** `metadata.namespace` specifies the namespace where the SR-IOV Network Operator is installed.
** `spec.resourceName` specifies the resource name of the SR-IOV device plugin. You can create multiple `SriovNetworkNodePolicy` objects for a resource name.
** `spec.nodeSelector.feature.node.kubernetes.io/network-sriov.capable` specifies the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
** `spec.priority` is an optional field that specifies an integer value between `0` and `99`. A smaller number gets higher priority, so a priority of `10` is higher than a priority of `99`. The default value is `99`.
** `spec.mtu` is an optional field that specifies a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
** `spec.numVfs` specifies the number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `127`.
** `spec.nicSelector` selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters.
+
[NOTE]
====
@@ -86,12 +85,12 @@ If you specify `rootDevices`, you must also specify a value for `vendor`, `devic
====
+
If you specify both `pfNames` and `rootDevices` at the same time, ensure that they point to an identical device.
`spec.nicSelector.vendor`:: Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are either `8086` or `15b3`.
`spec.nicSelector.deviceID`:: Optional: Specify the device hex code of SR-IOV network device. The only allowed values are `158b`, `1015`, `1017`.
`spec.nicSelector.pfNames`:: Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device.
`spec.nicSelector.rootDevices`:: The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: `0000:02:00.1`.
`spec.deviceType`:: The `vfio-pci` driver type is required for virtual functions in {VirtProductName}.
`spec.isRdma`:: Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set `isRdma` to `false`. The default value is `false`.
** `spec.nicSelector.vendor` is an optional field that specifies the vendor hex code of the SR-IOV network device. The only allowed values are `8086` and `15b3`.
** `spec.nicSelector.deviceID` is an optional field that specifies the device hex code of the SR-IOV network device. The only allowed values are `158b`, `1015`, and `1017`.
** `spec.nicSelector.pfNames` is an optional field that specifies an array of one or more physical function (PF) names for the Ethernet device.
** `spec.nicSelector.rootDevices` is an optional field that specifies an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: `0000:02:00.1`.
** `spec.deviceType` specifies the driver type. The `vfio-pci` driver type is required for virtual functions in {VirtProductName}.
** `spec.isRdma` is an optional field that specifies whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set `isRdma` to `false`. The default value is `false`.
+
[NOTE]
====
@@ -103,15 +102,13 @@ endif::virt-sriov[]

. Optional: Label the SR-IOV capable cluster nodes with `SriovNetworkNodePolicy.Spec.NodeSelector` if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes".

. Create the `SriovNetworkNodePolicy` object:
. Create the `SriovNetworkNodePolicy` object. When running the following command, replace `<name>` with the name for this configuration:
+
[source,terminal]
----
$ oc create -f <name>-sriov-node-network.yaml
----
+
where `<name>` specifies the name for this configuration.
+
After applying the configuration update, all the pods in `sriov-network-operator` namespace transition to the `Running` status.
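
For reference, a complete `SriovNetworkNodePolicy` manifest that combines the fields described above might look like the following sketch. The policy name, resource name, NIC selector values, and VF count are illustrative assumptions, not values taken from this module:

[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-1                               # assumed policy name
  namespace: openshift-sriov-network-operator  # namespace where the SR-IOV Network Operator is installed
spec:
  resourceName: sriovnic                       # assumed resource name for the device plugin
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  mtu: 1500
  numVfs: 4
  nicSelector:                                 # any combination of vendor, deviceID, pfNames, rootDevices
    vendor: "15b3"
    deviceID: "1015"
    rootDevices:
      - "0000:02:00.0"
  deviceType: vfio-pci                         # required for virtual functions in {VirtProductName}
  isRdma: false
----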

. To verify that the SR-IOV network device is configured, enter the following command. Replace `<node_name>` with the name of a node with the SR-IOV network device that you just configured.
7 changes: 4 additions & 3 deletions modules/virt-adding-public-key-vm-cli.adoc
@@ -26,9 +26,10 @@ Example manifest:
----
include::snippets/virt-static-key.yaml[]
----
<1> Specify the `cloudInitNoCloud` data source.
<2> Specify the `Secret` object name.
<3> Paste the public SSH key.
+
* `spec.template.spec.volumes.cloudInitNoCloud` specifies the `cloudInitNoCloud` data source.
* `spec.template.spec.accessCredentials.sshPublicKey.source.secret.secretName` specifies the `Secret` object name.
* `data.key` specifies the public SSH key.
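
After you create the objects in the next step, you can check that the key was injected by opening an SSH session with `virtctl`. This is only a usage sketch: the namespace, user name, VM name, key path, and the `-i` identity-file option are assumptions, not values from this module:

[source,terminal]
----
$ virtctl -n <namespace> ssh <user_name>@<vm_name> -i <path_to_private_key>
----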

. Create the `VirtualMachine` and `Secret` objects by running the following command:
+
20 changes: 11 additions & 9 deletions modules/virt-attaching-vm-to-ovn-secondary-nw-cli.adoc
@@ -10,10 +10,12 @@
You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration.

.Prerequisites

* You have access to the cluster as a user with `cluster-admin` privileges.
* You have installed the OpenShift CLI (`oc`).
* You have installed the {oc-first}.

.Procedure

. Edit the `VirtualMachine` manifest to add the OVN-Kubernetes secondary network interface details, as in the following example:
+
[source,yaml]
@@ -29,23 +31,23 @@ spec:
domain:
devices:
interfaces:
- name: secondary # <1>
- name: secondary
bridge: {}
resources:
requests:
memory: 1024Mi
networks:
- name: secondary # <2>
- name: secondary
multus:
networkName: <nad_name> # <3>
networkName: <nad_name>
nodeSelector:
node-role.kubernetes.io/worker: '' # <4>
node-role.kubernetes.io/worker: ''
# ...
----
<1> The name of the OVN-Kubernetes secondary interface.
<2> The name of the network. This must match the value of the `spec.template.spec.domain.devices.interfaces.name` field.
<3> The name of the `NetworkAttachmentDefinition` object.
<4> Specifies the nodes on which the VM can be scheduled. The recommended node selector value is `node-role.kubernetes.io/worker: ''`.
** `spec.template.spec.domain.devices.interfaces.name` specifies the name of the OVN-Kubernetes secondary interface.
** `spec.template.spec.networks.name` specifies the name of the network. This must match the value of the `spec.template.spec.domain.devices.interfaces.name` field.
** `spec.template.spec.networks.multus.networkName` specifies the name of the `NetworkAttachmentDefinition` object.
** `spec.template.spec.nodeSelector` specifies the nodes on which the VM can be scheduled. The recommended node selector value is `node-role.kubernetes.io/worker: ''`.
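
The `<nad_name>` value must match an existing `NetworkAttachmentDefinition` for the OVN-Kubernetes secondary network. A minimal layer 2 topology sketch is shown below; the object name, namespace, and topology are assumptions used only for illustration:

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network            # referenced as <nad_name> in the VM manifest
  namespace: my-namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "my-namespace-l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "my-namespace/l2-network"
    }
----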

. Apply the `VirtualMachine` manifest:
+
23 changes: 12 additions & 11 deletions modules/virt-cluster-resource-requirements.adoc
@@ -37,18 +37,19 @@ Virtual machine memory overhead::
+
----
Memory overhead per virtual machine ≈ (0.002 × requested memory) \
+ 218 MiB \ <1>
+ 8 MiB × (number of vCPUs) \ <2>
+ 16 MiB × (number of graphics devices) \ <3>
+ (additional memory overhead) <4>
+ 218 MiB \
+ 8 MiB × (number of vCPUs) \
+ 16 MiB × (number of graphics devices) \
+ (additional memory overhead)
----
<1> Required for the processes that run in the `virt-launcher` pod.
<2> Number of virtual CPUs requested by the virtual machine.
<3> Number of virtual graphics cards requested by the virtual machine.
<4> Additional memory overhead:
* If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.
* If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB.
* If Trusted Platform Module (TPM) is enabled, add 53 MiB.
+
* `218 MiB` is required for the processes that run in the `virt-launcher` pod.
* `8 MiB × (number of vCPUs)`, where the number of vCPUs is the number of virtual CPUs requested by the virtual machine.
* `16 MiB × (number of graphics devices)`, where the number of graphics devices is the number of virtual graphics cards requested by the virtual machine.
* Additional memory overhead:
** If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.
** If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB.
** If Trusted Platform Module (TPM) is enabled, add 53 MiB.
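
As an illustrative calculation (the numbers are assumptions, not values from this module), a VM that requests 8 GiB of memory, 4 vCPUs, one graphics device, and one SR-IOV network device has an approximate overhead of:

----
(0.002 × 8192 MiB) + 218 MiB + (8 MiB × 4) + (16 MiB × 1) + 1 GiB
≈ 16.4 MiB + 218 MiB + 32 MiB + 16 MiB + 1024 MiB
≈ 1.3 GiB
----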

[id="CPU-overhead_{context}"]
== CPU overhead
16 changes: 5 additions & 11 deletions modules/virt-configuring-secondary-network-vm-live-migration.adoc
@@ -41,13 +41,10 @@ spec:
}
}'
----
+
where:
+
`metadata.name`:: Specify the name of the `NetworkAttachmentDefinition` object.
`config.master`:: Specify the name of the NIC to use for live migration.
`config.type`:: Specify the name of the CNI plugin that provides the network for the NAD.
`config.range`:: Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
** `metadata.name` specifies the name of the `NetworkAttachmentDefinition` object.
** `config.master` specifies the name of the NIC to be used for live migration.
** `config.type` specifies the name of the CNI plugin that provides the network for the NAD.
** `config.range` specifies an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
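
Put together, a `NetworkAttachmentDefinition` for a dedicated migration network might look like the following sketch. The NIC name, CNI plugin type, IP range, namespace, and object names are assumptions used only for illustration:

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network   # name to reference in spec.liveMigrationConfig.network
  namespace: openshift-cnv
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "migration-bridge",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "10.200.5.0/24"
    }
  }'
----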

. Open the `HyperConverged` CR in your default editor by running the following command:
+
@@ -76,10 +73,7 @@ spec:
progressTimeout: 150
# ...
----
+
where:
+
`network`:: Specify the name of the Multus `NetworkAttachmentDefinition` object to use for live migrations.
** `spec.liveMigrationConfig.network` specifies the name of the Multus `NetworkAttachmentDefinition` object to be used for live migrations.

. Save your changes and exit the editor. The `virt-handler` pods restart and connect to the secondary network.

8 changes: 6 additions & 2 deletions modules/virt-deploying-libguestfs-with-virtctl.adoc
@@ -15,6 +15,10 @@ You can use the `virtctl guestfs` command to deploy an interactive container wit
+
[source,terminal]
----
$ virtctl guestfs -n <namespace> <pvc_name> <1>
$ virtctl guestfs -n <namespace> <pvc_name>
----
<1> The PVC name is a required argument. If you do not include it, an error message appears.
+
[IMPORTANT]
====
The PVC name is a required argument. If you do not include it, an error message appears.
====
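
A usage sketch with an assumed namespace and PVC name:

[source,terminal]
----
$ virtctl guestfs -n default my-vm-disk-pvc
----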
22 changes: 12 additions & 10 deletions modules/virt-exposing-pci-device-in-cluster-cli.adoc
@@ -14,6 +14,7 @@ To expose PCI host devices in the cluster, add details about the PCI devices to
* You have installed the {oc-first}.

.Procedure

. Edit the `HyperConverged` CR in your default editor by running the following command:
+
[source,terminal,subs="attributes+"]
@@ -33,22 +34,22 @@ metadata:
name: kubevirt-hyperconverged
namespace: {CNVNamespace}
spec:
permittedHostDevices: <1>
pciHostDevices: <2>
- pciDeviceSelector: "10DE:1DB6" <3>
resourceName: "nvidia.com/GV100GL_Tesla_V100" <4>
permittedHostDevices:
pciHostDevices:
- pciDeviceSelector: "10DE:1DB6"
resourceName: "nvidia.com/GV100GL_Tesla_V100"
- pciDeviceSelector: "10DE:1EB8"
resourceName: "nvidia.com/TU104GL_Tesla_T4"
- pciDeviceSelector: "8086:6F54"
resourceName: "intel.com/qat"
externalResourceProvider: true <5>
externalResourceProvider: true
# ...
----
<1> The host devices that are permitted to be used in the cluster.
<2> The list of PCI devices available on the node.
<3> The `vendor-ID` and the `device-ID` required to identify the PCI device.
<4> The name of a PCI host device.
<5> Optional: Setting this field to `true` indicates that the resource is provided by an external device plugin. {VirtProductName} allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin.
** `spec.permittedHostDevices` specifies the host devices that are permitted to be used in the cluster.
** `spec.permittedHostDevices.pciHostDevices` specifies the list of PCI devices available on the node.
** `spec.permittedHostDevices.pciHostDevices.pciDeviceSelector` specifies the `vendor-ID` and the `device-ID` required to identify the PCI device.
** `spec.permittedHostDevices.pciHostDevices.resourceName` specifies the name of a PCI host device.
** `spec.permittedHostDevices.pciHostDevices.externalResourceProvider` is an optional field. Setting this field to `true` indicates that the resource is provided by an external device plugin. {VirtProductName} allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin.
+
[NOTE]
====
@@ -58,6 +59,7 @@ The above example snippet shows two PCI host devices that are named `nvidia.com/
. Save your changes and exit the editor.
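
After the devices are exposed, a virtual machine can request one by its resource name through the `hostDevices` list in the VM spec. A minimal sketch, assuming the `nvidia.com/TU104GL_Tesla_T4` resource from the example and an arbitrary device name:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        devices:
          hostDevices:
          - deviceName: nvidia.com/TU104GL_Tesla_T4  # a resourceName permitted in the HyperConverged CR
            name: hostdevice-1                       # arbitrary name for this device in the VM
----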

.Verification

* Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the `nvidia.com/GV100GL_Tesla_V100`, `nvidia.com/TU104GL_Tesla_T4`, and `intel.com/qat` resource names.
+
[source,terminal]
19 changes: 10 additions & 9 deletions modules/virt-hot-plugging-bridge-network-interface-cli.adoc
@@ -33,20 +33,20 @@ template:
- name: defaultnetwork
masquerade: {}
# new interface
- name: <secondary_nic> # <1>
- name: <secondary_nic>
bridge: {}
networks:
- name: defaultnetwork
pod: {}
# new network
- name: <secondary_nic> # <2>
- name: <secondary_nic>
multus:
networkName: <nad_name> # <3>
networkName: <nad_name>
# ...
----
<1> Specifies the name of the new network interface.
<2> Specifies the name of the network. This must be the same as the `name` of the new network interface that you defined in the `template.spec.domain.devices.interfaces` list.
<3> Specifies the name of the `NetworkAttachmentDefinition` object.
** `spec.template.spec.domain.devices.interfaces.name` specifies the name of the new network interface.
** `spec.template.spec.networks.name` specifies the name of the network. This must be the same as the `name` of the new network interface that you defined in the `spec.template.spec.domain.devices.interfaces` list.
** `spec.template.spec.networks.multus.networkName` specifies the name of the `NetworkAttachmentDefinition` object.

. Save your changes and exit the editor.

Expand All @@ -58,7 +58,7 @@ $ oc apply -f <filename>.yaml
----
+
where:

+
<filename>:: Specifies the name of your `VirtualMachine` manifest YAML file.

.Verification
@@ -111,9 +111,10 @@ Example output:
"infoSource": "domain, guest-agent, multus-status",
"interfaceName": "eth1",
"mac": "02:d8:b8:00:00:2a",
"name": "bridge-interface", <1>
"name": "bridge-interface",
"queueCount": 1
}
]
----
<1> The hot plugged interface appears in the VMI status.
+
The hot plugged interface appears in the VMI status.
9 changes: 5 additions & 4 deletions modules/virt-hot-unplugging-bridge-network-interface-cli.adoc
@@ -40,9 +40,9 @@ template:
interfaces:
- name: defaultnetwork
masquerade: {}
# set the interface state to absent
# set the interface state to absent
- name: <secondary_nic>
state: absent # <1>
state: absent
bridge: {}
networks:
- name: defaultnetwork
Expand All @@ -52,7 +52,8 @@ template:
networkName: <nad_name>
# ...
----
<1> Set the interface state to `absent` to detach it from the running VM. Removing the interface details from the VM specification does not hot unplug the secondary network interface.
+
Set the interface state to `absent` to detach it from the running VM. Removing the interface details from the VM specification does not hot unplug the secondary network interface.

. Save your changes and exit the editor.

@@ -64,5 +65,5 @@ $ oc apply -f <filename>.yaml
----
+
where:

+
<filename>:: Specifies the name of your `VirtualMachine` manifest YAML file.
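
As an optional follow-up check (not part of the original procedure), you might confirm that the interface was detached by inspecting the VMI status; the VMI name is a placeholder:

[source,terminal]
----
$ oc get vmi <vmi_name> -o jsonpath='{.status.interfaces}'
----

The unplugged interface should no longer appear in the output.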
16 changes: 8 additions & 8 deletions modules/virt-linux-bridge-nad-port-isolation.adoc
@@ -35,20 +35,20 @@ spec:
config: |
{
"cniVersion": "0.3.1",
"name": "bridge-network", <1>
"type": "bridge", <2>
"bridge": "br1", <3>
"name": "bridge-network",
"type": "bridge",
"bridge": "br1",
"preserveDefaultVlan": false,
"vlan": 100,
"disableContainerInterface": false,
"portIsolation": true <4>
"portIsolation": true
}
# ...
----
<1> The name for the configuration. The name must match the value in the `metadata.name` of the NAD.
<2> The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI.
<3> The name of the Linux bridge that is configured on the node. The name must match the interface bridge name defined in the NodeNetworkConfigurationPolicy manifest.
<4> Enables or disables port isolation on the virtual bridge. Default value is `false`. When set to `true`, each VM or pod is assigned to an isolated port. The virtual bridge prevents traffic from one isolated port from reaching another isolated port.
** `spec.config.name` specifies the name for the configuration. The name must match the value in the `metadata.name` of the NAD.
** `spec.config.type` specifies the actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI.
** `spec.config.bridge` specifies the name of the Linux bridge that is configured on the node. The name must match the interface bridge name defined in the `NodeNetworkConfigurationPolicy` manifest.
** `spec.config.portIsolation` specifies whether port isolation on the virtual bridge is enabled or disabled. The default value is `false`. When set to `true`, each VM or pod is assigned to an isolated port. The virtual bridge prevents traffic from one isolated port from reaching another isolated port.
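
The `bridge` value must refer to a Linux bridge that already exists on the node, typically created with a `NodeNetworkConfigurationPolicy`. A minimal sketch is shown below; the policy name, bridge name `br1`, and uplink NIC `eth1` are assumptions:

[source,yaml]
----
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-policy
spec:
  desiredState:
    interfaces:
    - name: br1              # must match the "bridge" value in the NAD config
      type: linux-bridge
      state: up
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1         # assumed uplink interface on the node
----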

. Apply the configuration:
+