mysql> SELECT * FROM cloud.host_details WHERE name='password' AND host_id={previous step ID};

#. Update the passwords for the host in the database. In this example,
   we change the passwords for hosts with host IDs 5 and 12 and host_details IDs 8 and 22 to
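   As an illustrative sketch only (this assumes the host_details rows
   store the password in a value column, and 'new_password' is a
   placeholder, not the value elided from this example), such an update
   might look like:

   mysql> UPDATE cloud.host_details SET value='new_password' WHERE name='password' AND id IN (8, 22);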
Over-Provisioning and Service Offering Limits
---------------------------------------------

CPU and memory (RAM) over-provisioning factors can be set for each
cluster to change the number of VMs that can run on each host in the
cluster. This helps optimize the use of resources. By increasing the
over-provisioning factor, more resource capacity will be used. If the
factor is set to 1, no over-provisioning is done.

The administrator can also set global default over-provisioning factors
in the cpu.overprovisioning.factor and mem.overprovisioning.factor
global configuration variables. The default value of these variables is
1: over-provisioning is turned off by default.

Over-provisioning factors are dynamically substituted in CloudStack's
capacity calculations. For example:

Capacity = 2 GB
Over-provisioning factor = 2
Capacity after over-provisioning = 4 GB

With this configuration, suppose you deploy 3 VMs of 1 GB each:

Used = 3 GB
Free = 1 GB
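To make the arithmetic explicit, here is a minimal Python sketch of the
calculation above (the function is illustrative, not CloudStack code):

.. code:: python

   def free_capacity_gb(physical_gb, factor, used_gb):
       # Effective capacity is the physical capacity scaled by the
       # over-provisioning factor.
       effective_gb = physical_gb * factor  # 2 GB * 2 = 4 GB
       return effective_gb - used_gb        # 4 GB - 3 GB = 1 GB

   print(free_capacity_gb(2, 2, 3))  # prints 1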
The administrator can specify a memory over-provisioning factor, and can
specify both CPU and memory over-provisioning factors on a per-cluster
basis.

In any given cloud, the optimum number of VMs for each host is affected
The overprovisioning settings can be used along with dedicated resources
(assigning a specific cluster to an account) to effectively offer
different levels of service to different accounts. For example, an
account paying for a more expensive level of service could be assigned
to a dedicated cluster with an over-provisioning factor of 1, and a
lower-paying account to a cluster with a factor of 2.

When a new host is added to a cluster, CloudStack will assume the host
has the capability to perform the CPU and RAM over-provisioning which is
VMware, KVM

Memory ballooning is supported by default.

Setting Over-Provisioning Factors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are two ways the root admin can set CPU and RAM over-provisioning
factors. First, the global configuration settings
cpu.overprovisioning.factor and mem.overprovisioning.factor will be
applied when a new cluster is created. Later, the factors can be modified
for an existing cluster.
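For example, the root admin could raise the global defaults through the
updateConfiguration API. A sketch using the CloudMonkey CLI (the value 2
is illustrative, and the exact syntax may vary by version):

::

   update configuration name=cpu.overprovisioning.factor value=2
   update configuration name=mem.overprovisioning.factor value=2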
Only VMs deployed after the change are affected by the new setting. If
you want VMs deployed before the change to adopt the new
over-provisioning factor, you must stop and restart the VMs. When this is
done, CloudStack recalculates or scales the used and reserved capacities
based on the new over-provisioning factors, to ensure that CloudStack is
correctly tracking the amount of free capacity.

.. note::
   It is safer not to deploy additional new VMs while the capacity
   recalculation is underway, in case the new values for available
   capacity are not high enough to accommodate the new VMs. Just wait
   for the new used/available values to become available, to be sure
   there is room for all the new VMs you want.

To change the over-provisioning factors for an existing cluster:

#. Log in as administrator to the CloudStack UI.

#. In the left navigation bar, click Infrastructure.

#. Select Clusters.

#. Select the cluster you want to work with, and click the Settings button.

#. Search for overprovisioning.

#. Fill in your desired over-provisioning multipliers in the fields CPU
   overcommit factor and RAM overcommit factor. The value which is
   initially shown in these fields is the default value inherited from
   the global configuration settings.

   .. note::
      In XenServer, due to a constraint of this hypervisor, you can not
      use an over-provisioning factor greater than 4.

#. In the left navigation, choose Infrastructure.

#. Click Zones and select the zone you'd like to modify.

#. Click Physical Network.

network and the state is changed to Setup. In this state, the network
will not be garbage collected.

.. note::
   You cannot change a VLAN once it's assigned to the network. The VLAN
   remains with the network for its entire life cycle.

Feature Overview
~~~~~~~~~~~~~~~~

- This feature applies to KVM hosts.
- KVM utilised under CloudStack uses the standard Libvirt hook script behaviour as outlined in the Libvirt documentation page `hooks`_.
- During the install of the KVM CloudStack agent, the Libvirt hook script "/etc/libvirt/hooks/qemu", referred to as the qemu script hereafter, is installed.
- This is a Python script that carries out network management tasks every time a VM is started, stopped or migrated, as per the Libvirt hooks specification.
- Custom network configuration tasks can be done at the same time as the qemu script is called.
- Since the tasks in question are user-specific, they cannot be included in the CloudStack-provided qemu script.
Usage
~~~~~~

- The cloudstack-agent package will install the qemu script in the /etc/libvirt/hooks directory of Libvirt.
- The Libvirt documentation page `arguments`_ describes the arguments that can be passed to the qemu script.
- The input arguments are:

#. Name of the object involved in the operation, or '-' if there is none. For example, the name of a guest being started.
#. Name of the operation being performed. For example, 'start' if a guest is being started.
- If an invalid operation argument is received, the qemu script will log the fact, not execute any custom scripts, and exit.
- All input arguments that are passed to the qemu script will also be passed to each custom script.
- For each of the above actions, the qemu script will find and run scripts by the name "<action>_<custom script name>" in a custom include path /etc/libvirt/hooks/custom/. Custom scripts that do not follow this naming convention will be ignored and not be executed. This dispatch logic is sketched below.
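The following is a minimal illustrative sketch of that dispatch
behaviour; it is not the actual qemu script shipped with CloudStack, and
the set of recognised operations is an assumption:

.. code:: python

   #!/usr/bin/env python3
   # Illustrative sketch of the qemu hook dispatch described above
   # (not the real CloudStack-provided script).
   import os
   import subprocess
   import sys

   CUSTOM_DIR = "/etc/libvirt/hooks/custom"  # custom include path from the docs
   # Assumed set of valid operations; the real script may differ.
   VALID_OPS = {"prepare", "start", "started", "stopped", "release", "migrate"}

   def main():
       args = sys.argv[1:]  # object name, operation, sub-operation, extra argument
       op = args[1] if len(args) > 1 else ""
       if op not in VALID_OPS:
           print("qemu hook: invalid operation '%s', exiting" % op)  # log and exit
           return
       if not os.path.isdir(CUSTOM_DIR):
           return
       for name in sorted(os.listdir(CUSTOM_DIR)):
           # Run "<action>_<name>" scripts for this action, plus "all_<name>"
           # scripts, passing along all of the qemu script's input arguments.
           if name.startswith(op + "_") or name.startswith("all_"):
               subprocess.call([os.path.join(CUSTOM_DIR, name)] + args)

   if __name__ == "__main__":
       main()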
Custom Script Naming for a Specific VM Action
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- For a custom script that needs to be executed at the end of a specific VM action, do the following:

#. Navigate to the custom script that needs to be executed for a specific action.
#. Rename the file by prefixing to the filename the specific action name followed by an underscore. For example, if a custom script is named abc.sh, then prefix 'migrate' and an underscore to the name to become migrate_abc.sh.
Custom Script Naming for All VM Actions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Rename the file by prefixing 'all' to the filename, followed by an underscore. For example, if a custom script is named def.py, then prefix 'all' and an underscore to the name to become all_def.py. The resulting directory layout is sketched below.
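With both naming conventions in place, the custom hooks directory might
look like this (filenames are illustrative):

::

   /etc/libvirt/hooks/custom/
   ├── migrate_abc.sh    # runs only after the 'migrate' action
   └── all_def.py        # runs after every action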
Custom Script Execution Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Grant each custom script execute permissions so that the underlying host operating system can execute them:

#. Navigate to the custom script that needs to be executable.
#. Post-Maintenance stage: Post-maintenance script (``PostMaintenance`` or ``PostMaintenance.sh`` or ``PostMaintenance.py``) is expected to perform validation after the host exits maintenance. These scripts will help to detect any problems during the maintenance process, including reboots or restarts within scripts.

.. note::
   Pre-flight and pre-maintenance scripts' execution can determine if the maintenance stage is not required for a host. The special exit code = 70 on a pre-flight or pre-maintenance script will let CloudStack know that the maintenance stage is not required for a host. An illustrative example follows this note.
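As a hypothetical illustration (the script body and the check are made
up; only the exit code 70 convention comes from the documentation), a
pre-flight script that decides maintenance is unnecessary could look
like:

.. code:: python

   #!/usr/bin/env python3
   # Hypothetical PreFlight.py: tell CloudStack maintenance is not required.
   import sys

   NO_MAINTENANCE_NEEDED = 70  # special exit code recognised by CloudStack

   def host_already_patched():
       # Placeholder check, e.g. compare running kernel vs latest installed one.
       return True

   if host_already_patched():
       sys.exit(NO_MAINTENANCE_NEEDED)  # skip the maintenance stage for this host
   sys.exit(0)  # proceed with maintenance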
Administrators must define only one script per stage. In case a stage does not contain a script, it is skipped, continuing with the next stage. Administrators are responsible for defining and copying scripts into the hosts
The pre-flight script may signal that no maintenance is needed on the host. In that case, the host is skipped from the rolling maintenance hosts iteration.

Once pre-flight checks pass, the management server iterates through each host in the selected scope and sends a command to execute each of the remaining stages in order. The hosts in the selected scope are grouped by cluster, so all the hosts in one cluster are processed before the hosts of a different cluster.

The management server iterates through the hosts in each cluster in the selected scope and, for each host, does the following (a rough sketch follows this list):

- Disables the cluster (if it has not been disabled previously)
- Checks that the maintenance script exists on the host (this check is performed only for the maintenance script, not for the rest of the stages)
- If the host does not contain a maintenance script, then the host is skipped and the iteration continues with the next host in the cluster.
- Executes the pre-maintenance script (if any) before entering maintenance mode.
- The pre-maintenance script may signal that no maintenance is needed on the host. In that case, the host is skipped and the iteration continues with the next host in the cluster.
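A rough Python sketch of that per-cluster iteration (illustrative only;
the object model and method names are assumptions, not CloudStack's
implementation):

.. code:: python

   # Illustrative sketch of the rolling maintenance iteration described above.
   NO_MAINTENANCE_NEEDED = 70  # special exit code from pre-flight/pre-maintenance

   def process_scope(clusters):
       for cluster in clusters:
           cluster.disable()  # disable the cluster once, if not already disabled
           for host in cluster.hosts:
               if not host.has_script("Maintenance"):
                   continue  # no maintenance script: skip this host
               result = host.run_script("PreMaintenance")
               if result.exit_code == NO_MAINTENANCE_NEEDED:
                   continue  # pre-maintenance says maintenance is not needed
               # ... enter maintenance mode, then run the Maintenance and
               # PostMaintenance stages for this host.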