Commit 3de9707 (parent 0fc6eed)

OSDOCS-16871-1-abstracts: SCALE-1: Core Scalability Planning and Resource Management

12 files changed: 14 additions & 12 deletions

modules/admin-quota-limits.adoc (1 addition & 1 deletion)

@@ -7,7 +7,7 @@
 = Limit ranges in a LimitRange object
 
 [role="_abstract"]
-To define compute resource constraints at the object level, create a `LimitRange` object. By creating this object, you can specify the exact amount of resources that an individual pod, container, image, or persistent volume claim can consume.
+To define compute resource constraints at the object level, create a `LimitRange` object. By creating this object, you can specify the exact amount of resources that an individual pod, container, image, image stream, or persistent volume claim can consume.
 
 All requests to create and modify resources are evaluated against each `LimitRange` object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. If the resource does not set an explicit value, and if the constraint supports a default value, the default value is applied to the resource.
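For context, a minimal `LimitRange` of the kind the abstract describes might look like the following sketch; the object name and all values are illustrative assumptions, not taken from the module:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits      # hypothetical name
spec:
  limits:
  - type: Container
    default:                 # applied when a container sets no limit
      cpu: 500m
      memory: 512Mi
    defaultRequest:          # applied when a container sets no request
      cpu: 100m
      memory: 256Mi
    max:                     # requests above this are rejected
      cpu: "2"
      memory: 1Gi
```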

modules/admin-quota-usage.adoc (1 addition & 1 deletion)

@@ -7,7 +7,7 @@
 = Admin quota usage
 
 [role="_abstract"]
-To ensure projects remain within defined constraints, monitor admin quota usage. By tracking the aggregate consumption of compute resources and storage, you can identify when `ResourceQuota` limits are reached or approached.
+To ensure projects remain within defined constraints, monitor admin quota usage. After a resource quota for a project is first created, the project restricts the ability to create any new resources that can violate a quota constraint until it has calculated updated usage statistics.
 
 Quota enforcement::
 +
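As illustration of the quota objects this abstract refers to, a minimal `ResourceQuota` might look like the following; the name, namespace, and limits are illustrative assumptions, not part of the commit:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota        # hypothetical name
  namespace: demo-project    # hypothetical project
spec:
  hard:
    pods: "10"               # at most 10 pods in the project
    requests.cpu: "4"        # aggregate CPU requests capped at 4 cores
    requests.memory: 8Gi     # aggregate memory requests capped at 8 GiB
```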

modules/configure-guest-caching-for-disk.adoc (1 addition & 1 deletion)

@@ -7,7 +7,7 @@
 = Configure guest caching for disk
 
 [role="_abstract"]
-To ensure that the guest manages caching instead of the host, configure your disk devices. This setting shifts caching responsibility to the guest operating system, preventing the host from caching disk operations.
+To ensure that the guest manages caching instead of the host, configure your disk devices.
 
 Ensure that the driver element of the disk device includes the `cache="none"` and `io="native"` parameters.
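The context line spells out the required attributes; in libvirt domain XML, the driver element of a disk device would carry them as below. The device path and target name are illustrative assumptions:

```xml
<disk type='block' device='disk'>
  <!-- cache='none' bypasses the host page cache; io='native' uses kernel AIO -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg0/guest-disk'/>   <!-- hypothetical device path -->
  <target dev='vda' bus='virtio'/>
</disk>
```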

modules/configuring-quota-synchronization-period.adoc (1 addition & 1 deletion)

@@ -7,7 +7,7 @@
 = Configuring quota synchronization period
 
 [role="_abstract"]
-To control the synchronization time frame when resources are deleted, configure the `resource-quota-sync-period` setting. This parameter in the `/etc/origin/master/master-config.yaml` file determines how frequently the system updates usage statistics to reflect deleted resources.
+When a set of resources are deleted, the synchronization time frame of resources is determined by the `resource-quota-sync-period` setting in the `/etc/origin/master/master-config.yaml` file. You can change the `resource-quota-sync-period` setting to have the set of resources regenerate in the needed amount of time (in seconds) for the resources to be once again available.
 
 [NOTE]
 ====
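As a rough sketch of where this setting lives, the relevant fragment of `/etc/origin/master/master-config.yaml` might look like the following; the exact nesting and the 10s value are assumptions for illustration, not taken from the module:

```yaml
kubernetesMasterConfig:
  controllerArguments:
    resource-quota-sync-period:   # how often usage statistics are recalculated
    - "10s"                       # illustrative value
```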

modules/create-perf-profile-workload-partitioning.adoc (1 addition & 1 deletion)

@@ -7,7 +7,7 @@
 = Performance profiles and workload partitioning
 
 [role="_abstract"]
-To enable workload partitioning, apply a performance profile. This configuration specifies the isolated and reserved CPUs, ensuring that customer workloads run on dedicated cores without interruption from platform processes.
+To enable workload partitioning, apply a performance profile.
 
 An appropriately configured performance profile specifies the `isolated` and `reserved` CPUs. Create a performance profile by using the Performance Profile Creator (PPC) tool.
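The context line mentions the `isolated` and `reserved` CPU fields; a minimal `PerformanceProfile` showing where they sit might look like this. The name and CPU ranges are illustrative assumptions, and in practice the PPC tool generates the real values:

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: workload-partitioning-profile   # hypothetical name
spec:
  cpu:
    reserved: "0-3"    # illustrative: cores kept for platform/management pods
    isolated: "4-15"   # illustrative: cores dedicated to customer workloads
  nodeSelector:
    node-role.kubernetes.io/worker: ""
```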

modules/disabling-the-cpuset-cgroup-controller.adoc (1 addition & 1 deletion)

@@ -7,7 +7,7 @@
 = Disabling the cpuset cgroup controller
 
 [role="_abstract"]
-To allow the kernel scheduler to freely distribute processes across all available resources, disable the `cpuset` cgroup controller. This configuration prevents the system from enforcing processor affinity constraints, ensuring that tasks can use any available CPU or memory node.
+You can disable the cpuset cgroup controller. Disabling the controller requires a restart of the libvirtd daemon.
 
 [NOTE]
 ====
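The diff truncates the actual procedure, but on a libvirt host one common way to keep the `cpuset` controller out of play is the `cgroup_controllers` setting in `/etc/libvirt/qemu.conf`; treating that as the mechanism here is an assumption on my part, not something the module states:

```
# /etc/libvirt/qemu.conf (illustrative): enumerate every controller except "cpuset"
cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuacct" ]
```

As the abstract notes, restart the libvirtd daemon afterward for the change to take effect.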

modules/enabling-workload-partitioning.adoc (1 addition & 1 deletion)

@@ -7,7 +7,7 @@
 = Enabling workload partitioning
 
 [role="_abstract"]
-To partition cluster management pods into a specified CPU affinity, enable workload partitioning. This configuration ensures that management pods operate within the reserved CPU limits defined in your Performance Profile, preventing them from consuming resources intended for customer workloads.
+To partition cluster management pods into a specified CPU affinity, enable workload partitioning. This configuration ensures that management pods operate within the reserved CPU limits defined in your Performance Profile.
 
 Consider additional post-installation Operators that use workload partitioning when calculating how many reserved CPU cores to set aside for the platform.
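For orientation, cluster-wide workload partitioning in recent {product-title} releases is switched on at install time with the `cpuPartitioningMode` field in `install-config.yaml`; whether that is the exact mechanism this module documents is an assumption, and the cluster name is hypothetical:

```yaml
# install-config.yaml excerpt (illustrative)
apiVersion: v1
metadata:
  name: example-cluster          # hypothetical cluster name
cpuPartitioningMode: AllNodes    # partition management pods onto reserved CPUs
```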

modules/ibm-z-boost-networking-performance-with-rfs.adoc (1 addition & 1 deletion)

@@ -7,7 +7,7 @@
 = Boosting networking performance with RFS
 
 [role="_abstract"]
-To boost networking performance, activate Receive Flow Steering (RFS) by using the Machine Config Operator (MCO). This configuration improves packet processing efficiency by directing network traffic to specific CPUs.
+To boost networking performance, activate Receive Flow Steering (RFS) by using the Machine Config Operator (MCO). This configuration improves packet processing efficiency.
 
 RFS extends Receive Packet Steering (RPS) by further reducing network latency. RFS is technically based on RPS, and improves the efficiency of packet processing by increasing the CPU cache hit rate. RFS achieves this, while considering queue length, by determining the most convenient CPU for computation so that cache hits are more likely to occur within the CPU. This means that the CPU cache is invalidated less and requires fewer cycles to rebuild the cache, which reduces packet processing run time.
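An MCO-based activation of RFS would typically drop a sysctl file onto the nodes via a `MachineConfig`; the sketch below shows that shape, with the object name, value, and file path as illustrative assumptions (a full setup also sets the per-queue `rps_flow_cnt`, which the diff does not show):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-enable-rfs            # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/sysctl.d/50-rfs.conf
        mode: 0644
        contents:
          # decodes to: net.core.rps_sock_flow_entries=8192 (illustrative value)
          source: data:,net.core.rps_sock_flow_entries%3D8192
```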

modules/ibm-z-choose-networking-setup.adoc (1 addition & 1 deletion)

@@ -7,7 +7,7 @@
 = Choose your networking setup
 
 [role="_abstract"]
-To optimize performance for specific workloads and traffic patterns, select a networking setup based on your chosen hypervisor. This configuration ensures the networking stack meets the operational requirements of {product-title} clusters on IBM Z infrastructure.
+For {ibm-z-name} setups, the networking setup depends on the hypervisor of your choice. Depending on the workload and the application, the best fit usually changes with the use case and the traffic pattern.
 
 The networking stack is one of the most important components for a Kubernetes-based product like {product-title}.

modules/ibm-z-rhel-kvm-host-recommendations.adoc (3 additions & 1 deletion)

@@ -7,6 +7,8 @@
 = {op-system-base} KVM on {ibm-z-title} host recommendations
 
 [role="_abstract"]
-To optimize Kernel-based Virtual Machine (KVM) performance on {ibm-z-title}, apply host recommendations. Because optimal settings depend strongly on specific workloads and available resources, finding the best balance for your {op-system-base} environment often requires experimentation to avoid adverse effects.
+To optimize Kernel-based Virtual Machine (KVM) performance on {ibm-z-title}, apply host recommendations.
+
+Optimizing a KVM virtual server environment strongly depends on the workloads of the virtual servers and on the available resources. The same action that enhances performance in one environment can have adverse effects in another. Finding the best balance for a particular setting can be a challenge and often involves experimentation.
 
 The following sections introduce some best practices when using {product-title} with {op-system-base} KVM on {ibm-z-name} and {ibm-linuxone-name} environments.
