modules/cnf-image-based-upgrade-seed-image-config.adoc (1 addition, 1 deletion)
@@ -20,7 +20,7 @@ Therefore, the seed image generated from the seed cluster cannot contain any clu
 endif::[]

 ifdef::ibi[]
-You can create a seed image from a {sno} cluster with with the same hardware as your bare-metal host, and with a similar target cluster configuration. However, the seed image generated from the seed cluster cannot contain any cluster-specific configuration.
+You can create a seed image from a {sno} cluster with the same hardware as your bare-metal host, and with a similar target cluster configuration. However, the seed image generated from the seed cluster cannot contain any cluster-specific configuration.
 endif::[]

 The following table lists the components, resources, and configurations that you must and must not include in your seed image:
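For orientation, seed image generation is typically driven from the seed cluster through a `SeedGenerator` custom resource. The following is only a rough sketch; the API version, resource name, and image pull spec are assumptions that can differ by release:

----
apiVersion: lca.openshift.io/v1
kind: SeedGenerator
metadata:
  name: seedgen  # the operator conventionally expects this fixed name; verify for your release
spec:
  seedImage: registry.example.com/seed-images/sno-seed:latest  # illustrative registry and tag
----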
= Running the Performance Profile Creator wrapper script

-The wrapper script simplifies the process of creating a performance profile with the Performance Profile Creator (PPC) tool. The script handles tasks such as pulling and running the required container image, mounting directories into the container, and providing parameters directly to the container through Podman.
+The wrapper script simplifies the process of creating a performance profile with the Performance Profile Creator (PPC) tool. The script handles tasks such as pulling and running the required container image, mounting directories into the container, and providing parameters directly to the container through Podman.

 For more information about the Performance Profile Creator arguments, see the section _"Performance Profile Creator arguments"_.
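Under the hood, the wrapper automates roughly what a manual Podman invocation of the PPC does. The following sketch shows such a direct invocation rather than the wrapper itself; the must-gather path, MCP name, and CPU count are illustrative assumptions:

----
$ podman run --rm --entrypoint performance-profile-creator \
    -v /path/to/must-gather:/must-gather:z \
    registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v{product-version} \
    --must-gather-dir-path /must-gather \
    --mcp-name worker-cnf \
    --reserved-cpu-count 4 \
    --rt-kernel true \
    > my-performance-profile.yaml
----

The generated performance profile is captured in the redirected file.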
@@ -183,7 +183,7 @@ Flags:
 +
 [NOTE]
 ====
-You can optionally set a path for the Node Tuning Operator image using the `-p` option. If you do not set a path, the wrapper script uses the default image: `registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:vBranch Build`.
+You can optionally set a path for the Node Tuning Operator image using the `-p` option. If you do not set a path, the wrapper script uses the default image: `registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v{product-version}`.
 ====

 . To display information about the cluster, run the PPC tool with the `log` argument by running the following command:
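A sketch of how the `-p` option could be combined with the `log` step described above, assuming the wrapper has been saved locally as `run-perf-profile-creator.sh` and made executable; the script name, mirror registry path, and argument passing are assumptions to check against the full module:

----
$ ./run-perf-profile-creator.sh \
    -p mirror.example.com/openshift4/ose-cluster-node-tuning-rhel9-operator:v{product-version} \
    log
----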
@@ -216,7 +216,7 @@ level=info msg="CPU(s): 4"
 level=info msg=---
 ----

-. Create a performance profile by running the following command.
+. Create a performance profile by running the following command.
modules/deployment-plug-in-cluster.adoc (1 addition, 1 deletion)
@@ -13,7 +13,7 @@ After pushing an image with your changes to a registry, you can deploy the plugi
 +
 [NOTE]
 ====
-You can specify additional parameters based on the needs of your plugin. The link:https://github.com/openshift/console-plugin-template/blob/main/charts/openshift-console-plugin/values.yaml[`values.yaml`] file provides a full set of suported parameters.
+You can specify additional parameters based on the needs of your plugin. The link:https://github.com/openshift/console-plugin-template/blob/main/charts/openshift-console-plugin/values.yaml[`values.yaml`] file provides a full set of supported parameters.
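For context, deploying the plugin with the template's Helm chart and overriding one of the `values.yaml` parameters usually looks something like the following sketch; the release name, namespace, image reference, and the `plugin.image` key are illustrative assumptions:

----
$ helm upgrade -i my-plugin charts/openshift-console-plugin \
    -n my-plugin-namespace --create-namespace \
    --set plugin.image=quay.io/example/my-console-plugin:latest
----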
-You can optionally define additional resources in an image-based deployment for {sno} clusters.
+You can optionally define additional resources in an image-based deployment for {sno} clusters.

 Create the additional resources in an `extra-manifests` folder in the same working directory that has the `install-config.yaml` and `image-based-config.yaml` manifests.

 == Creating a resource in the extra-manifests folder

-You can create a resource in the `extra-manifests` folder of your working directory to add extra manifests to the image-based deployment for {sno} clusters.
+You can create a resource in the `extra-manifests` folder of your working directory to add extra manifests to the image-based deployment for {sno} clusters.

 The following example adds an single-root I/O virtualization (SR-IOV) network to the deployment.
@@ -22,7 +22,7 @@ The following example adds an single-root I/O virtualization (SR-IOV) network to

 .Procedure

-. Go to your working directory and create the `extra-manifests` folder by running the follow command:
+. Go to your working directory and create the `extra-manifests` folder by running the following command:
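A rough sketch of the overall pattern this module describes, creating the folder and placing an SR-IOV network manifest inside it; the file name, resource names, and namespaces are illustrative assumptions:

----
$ mkdir -p extra-manifests
$ cat <<EOF > extra-manifests/sriov-network.yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: example-sriov-network                  # illustrative name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: example_resource               # must match an SriovNetworkNodePolicy resource name
  networkNamespace: example-namespace          # namespace where the network attachment is created
EOF
----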
modules/installation-nutanix-config-yaml.adoc (1 addition, 1 deletion)
@@ -256,7 +256,7 @@ If you disable simultaneous multithreading, ensure that your capacity planning a
 ====
 <4> Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines.
 <5> Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines.
-<6> TThe cluster network plugin to install. The default value `OVNKubernetes` is the only supported value.
+<6> The cluster network plugin to install. The default value `OVNKubernetes` is the only supported value.
 <7> Optional: Specify a project with which VMs are associated. Specify either `name` or `uuid` for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines.
 <8> Optional: By default, the installation program downloads and installs the {op-system-first} image. If Prism Central does not have internet access, you can override the default behavior by hosting the {op-system} image on any HTTP server or Nutanix Objects and pointing the installation program to the image.
 <9> For `<local_registry>`, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example `registry.example.com` or `registry.example.com:5000`. For `<credentials>`,
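Callout <6> corresponds to the `networkType` field in the `networking` stanza of `install-config.yaml`; a minimal excerpt, with the CIDR values shown only as common defaults, looks roughly like this:

----
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OVNKubernetes
----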
modules/installing-ocp-agent-ibm-z.adoc (24 additions, 24 deletions)
@@ -17,49 +17,49 @@ Depending on your {ibm-z-name} environment, you can choose from the following op

 [NOTE]
 ====
-Currently, ISO boot support on {ibm-z-name} (`s390x``) is available only for {op-system-base-full} KVM, which provides the flexibility to choose either PXE or ISO-based installation. For installations with z/VM and Logical Partition (LPAR), only PXE boot is supported.
+Currently, ISO boot support on {ibm-z-name} (`s390x`) is available only for {op-system-base-full} KVM, which provides the flexibility to choose either PXE or ISO-based installation. For installations with z/VM and Logical Partition (LPAR), only PXE boot is supported.
 ====

 [id="networking-reqs-ibmz_{context}"]
 == Networking requirements for {ibm-z-title}

-In {ibm-z-title} environments, advanced networking technologies such as Open Systems Adapter (OSA), HiperSockets, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) require specific configurations that deviate from the standard network settings and those needs to be persisted for multiple boot scenarios that occur in the Agent-based Installation.
+In {ibm-z-title} environments, advanced networking technologies such as Open Systems Adapter (OSA), HiperSockets, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) require specific configurations that deviate from the standard network settings and those needs to be persisted for multiple boot scenarios that occur in the Agent-based Installation.

 To persist these parameters during boot, the `ai.ip_cfg_override=1` parameter is required in the `paramline`. This parameter is used with the configured network cards to ensure a successful and efficient deployment on {ibm-z-title}.

 The following table lists the network devices that are supported on each hypervisor for the network configuration override functionality :

 [cols="3,2,2,2,2", options="header"]
 |====
-| Network device
-| z/VM
-| KVM
-| LPAR Classic
+| Network device
+| z/VM
+| KVM
+| LPAR Classic
 | LPAR Dynamic Partition Manager (DPM)

-| Virtual Switch
+| Virtual Switch
 | Supported ^[1]^
-| Not applicable ^[2]^
-| Not applicable
-| Not applicable
+| Not applicable ^[2]^
+| Not applicable
+| Not applicable

-| Direct attached Open Systems Adapter (OSA)
-| Supported
+| Direct attached Open Systems Adapter (OSA)
+| Supported
 | Not required ^[3]^
-| Supported
-| Not required
+| Supported
+| Not required

-| RDMA over Converged Ethernet (RoCE)
-| Not required
-| Not required
-| Not required
-| Not required
+| RDMA over Converged Ethernet (RoCE)
+| Not required
+| Not required
+| Not required
+| Not required

-| HiperSockets
-| Supported
-| Not required
-| Supported
-| Not required
+| HiperSockets
+| Supported
+| Not required
+| Supported
+| Not required
 |====
 . Supported: When the `ai.ip_cfg_override` parameter is required for the installation procedure.
 . Not Applicable: When a network card is not applicable to be used on the hypervisor.
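To show where `ai.ip_cfg_override=1` fits, the following is a hedged sketch of a kernel parameter (`paramline`) fragment for a guest with a direct-attached OSA device; the IP addressing, device bus IDs, and interface name are illustrative assumptions:

----
rd.neednet=1 ai.ip_cfg_override=1 \
ip=10.14.6.3::10.14.6.1:255.255.255.0:master-0.example.com:enc1a00:none nameserver=10.14.6.1 \
rd.znet=qeth,0.0.1a00,0.0.1a01,0.0.1a02,layer2=1,portno=0
----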
-Cluster administrators might want to disable the VMWare vSphere Container Storage Interface (CSI) Driver as a Day 2 operation, so the vSphere CSI Driver does not interface with your vSphere setup.
+Cluster administrators might want to disable the VMware vSphere Container Storage Interface (CSI) Driver as a Day 2 operation, so the vSphere CSI Driver does not interface with your vSphere setup.
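A hedged sketch of one way such a Day 2 disable is commonly performed, by setting the driver's management state on its `ClusterCSIDriver` object; the object name and field are assumptions to verify against the current storage documentation:

----
$ oc patch clustercsidriver csi.vsphere.vmware.com \
    --type=merge -p '{"spec":{"managementState":"Removed"}}'
----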