Changed file: `hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc` (8 additions, 15 deletions)
You can deploy {hcp} by configuring a cluster to function as a management cluster.

[NOTE]
====
The _management_ cluster is not the _managed_ cluster. A managed cluster is a cluster that the hub cluster manages. The _management_ cluster can run on either the x86_64 architecture, supported beginning with {product-title} 4.17 and {mce} 2.7, or the s390x architecture, supported beginning with {product-title} 4.20 and {mce} 2.10.
====

You can convert a managed cluster to a management cluster by using the `hypershift` add-on to deploy the HyperShift Operator on that cluster. Then, you can start to create the hosted cluster.
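In {mce-short} environments, the `hypershift` add-on is typically enabled by creating a `ManagedClusterAddOn` resource on the hub cluster. The following fragment is a minimal sketch, assuming the add-on name `hypershift-addon` and a placeholder managed cluster name; check your {mce-short} version for the exact resource shape:

[source,yaml]
----
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: hypershift-addon # assumed add-on name
  namespace: <managed_cluster_name> # namespace that matches the managed cluster name
spec:
  installNamespace: open-cluster-management-agent-addon
----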
The {mce-short} supports only the default `local-cluster`, which is a hub cluster.

To provision {hcp} on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see "Enabling the central infrastructure management service".

Each {ibm-z-title} system host must be started with the PXE or ISO images that are provided by the central infrastructure management. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.

When you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.

* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.14/html/clusters/cluster_mce_overview#enable-cim[Enabling the central infrastructure management service]
* xref:../../hosted_control_planes/hcp-prepare/hcp-cli.adoc#hcp-cli-terminal_hcp-cli[Installing the {hcp} command-line interface]

== Creating a hosted cluster on bare metal for {ibm-z-title}

You can create a hosted cluster or import one. When the Assisted Installer is enabled as an add-on to {mce-short} and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.
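For orientation, a hosted cluster that uses the Agent platform is represented by a `HostedCluster` resource whose platform type is `Agent`. The following fragment is a minimal sketch with placeholder names, not a complete resource:

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
  platform:
    type: Agent
    agent:
      agentNamespace: <hosted_control_plane_namespace> # namespace where Agent resources live
# ...
----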
Changed file: `modules/hcp-bm-hc.adoc` (0 additions, 231 deletions)
On bare-metal infrastructure, you can create or import a hosted cluster.

* Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs).

* By default, when you use the `hcp create cluster agent` command, the command creates a hosted cluster with configured node ports. The preferred publishing strategy for hosted clusters on bare metal exposes services through a load balancer. If you create a hosted cluster by using the web console or by using {rh-rhacm-title}, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the `servicePublishingStrategy` information in the `HostedCluster` custom resource.

* Ensure that you meet the requirements described in "Requirements for {hcp} on bare metal", which include requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster.

* Ensure that you have added bare-metal nodes to a hardware inventory.

.Procedure

. Create a namespace by entering the following command:
[source,terminal]
----
$ hcp create cluster agent \
...
----
<11> Specify the node pool replica count, such as `3`. You must specify the replica count as `0` or greater to create the same number of replicas. Otherwise, you do not create node pools.
<12> After the `--ssh-key` flag, specify the path to the SSH key, such as `user/.ssh/id_rsa`.

. Configure the service publishing strategy. By default, hosted clusters use the `NodePort` service publishing strategy because node ports are always available without additional infrastructure. However, you can configure the service publishing strategy to use a load balancer.
+
** If you are using the default `NodePort` strategy, configure the DNS to point to the hosted cluster compute nodes, not the management cluster nodes. For more information, see "DNS configurations on bare metal".
** For production environments, use the `LoadBalancer` strategy because this strategy provides certificate handling and automatic DNS resolution. The following example demonstrates changing the service publishing strategy to `LoadBalancer` in your hosted cluster configuration file:
+
[source,yaml]
----
# ...
spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer #<1>
  - service: Ignition
    servicePublishingStrategy:
      type: Route
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route
  - service: OIDC
    servicePublishingStrategy:
      type: Route
  sshKey:
    name: <ssh_key>
# ...
----
<1> Specify `LoadBalancer` as the API Server type. For all other services, specify `Route` as the type.

. Apply the changes to the hosted cluster configuration file by entering the following command:
[source,terminal]
----
$ oc get pods -n <hosted_cluster_namespace>
----

. Confirm that the hosted cluster is ready. A status of `Available: True` indicates the readiness of the cluster, and a node pool status of `AllMachinesReady: True` indicates that all cluster Operators are healthy.

. Install MetalLB in the hosted cluster:
.. Extract the `kubeconfig` file from the hosted cluster and set the environment variable for hosted cluster access by entering the following commands:
.. Install the MetalLB Operator by creating the `install-metallb-operator.yaml` file:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator
  namespace: metallb-system
spec:
  channel: "stable"
  name: metallb-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
# ...
----
.. Apply the file by entering the following command:
+
[source,terminal]
----
$ oc apply -f install-metallb-operator.yaml
----
.. Configure the MetalLB IP address pool by creating the `deploy-metallb-ipaddresspool.yaml` file:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  autoAssign: true
  addresses:
  - 10.11.176.71-10.11.176.75
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb
# ...
----
.. Apply the configuration by entering the following command:
+
[source,terminal]
----
$ oc apply -f deploy-metallb-ipaddresspool.yaml
----
.. Verify the installation of MetalLB by checking the Operator status, the IP address pool, and the `L2Advertisement` resource by entering the following commands:

. Configure the DNS to work with the load balancer:
.. Configure the DNS for the `apps` domain by pointing the `*.apps.<hosted_cluster_namespace>.<base_domain>` wildcard DNS record to the load balancer IP address.
.. Verify the DNS resolution by entering the following command:
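To make the wildcard record shape concrete, the following sketch builds the record name from hypothetical values; it only illustrates the naming pattern that the DNS steps above configure, not a command from this procedure:

```shell
# Hypothetical values for illustration; substitute your own.
hosted_cluster_namespace="clusters"
base_domain="example.com"

# Wildcard record that must point to the MetalLB load balancer IP address.
apps_record="*.apps.${hosted_cluster_namespace}.${base_domain}"
echo "${apps_record}"
```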
Changed file: `modules/hcp-cli-gateway.adoc` (1 addition, 1 deletion)
You can install the {hcp} command-line interface (CLI), `hcp`, by using the content gateway.

.Prerequisites

* On an {product-title} cluster, you have installed {mce} 2.7 or later. The {mce-short} is automatically installed when you install Red{nbsp}Hat Advanced Cluster Management. You can also install {mce-short} without Red{nbsp}Hat Advanced Cluster Management as an Operator from the {product-title} OperatorHub.
Changed file: `modules/hcp-ibm-z-infraenv.adoc` (1 addition, 1 deletion)
[source,terminal]
----
$ oc apply -f infraenv-config.yaml
----

. To fetch the URL to download the PXE or ISO images, such as `initrd.img`, `kernel.img`, or `rootfs.img`, which allow {ibm-z-title} machines to join as agents, enter the following command:
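The `infraenv-config.yaml` file referenced above defines an `InfraEnv` resource. The following is a minimal sketch with placeholder names, assuming the `s390x` CPU architecture; the field values are illustrative, not prescriptive:

[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_control_plane_namespace>
spec:
  cpuArchitecture: s390x
  pullSecretRef:
    name: pull-secret # secret that contains your pull secret
  sshAuthorizedKey: <ssh_public_key>
----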
Changed file: `modules/hcp-ibm-z-prereqs.adoc` (7 additions, 2 deletions)
[id="hcp-ibm-z-prereqs_{context}"]
= Prerequisites to configure {hcp} on {ibm-z-title}

* The {mce} version 2.7 or later must be installed on an {product-title} cluster. You can install {mce-short} as an Operator from the {product-title} OperatorHub.

* The {mce-short} must have at least one managed {product-title} cluster. The `local-cluster` is automatically imported in {mce-short} 2.7 and later. For more information about the `local-cluster`, see _Advanced configuration_ in the Red{nbsp}Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command:
+
[source,terminal]
----
$ oc get managedclusters local-cluster
----

* You need to enable the central infrastructure management service. For more information, see _Enabling the central infrastructure management service_.

* You need to install the hosted control plane command-line interface. For more information, see _Installing the hosted control plane command-line interface_.

[NOTE]
====
The _management_ cluster can run on either the x86_64 architecture, supported beginning with {product-title} 4.17 and {mce} 2.7, or the s390x architecture, supported beginning with {product-title} 4.20 and {mce} 2.10.
====