Commit 28f7e88

Merge pull request #100032 from lg-rh/hcp_4.20_ibmz
[OCPBUGS-62751] IBM Z docs changes for HCP 4.20 Release
2 parents e5764b4 + 14634df

7 files changed: +19 additions, -252 deletions

7 files changed

+19
-252
lines changed

hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc

Lines changed: 8 additions & 15 deletions
@@ -10,7 +10,7 @@ You can deploy {hcp} by configuring a cluster to function as a management cluste
 
 [NOTE]
 ====
-The _management_ cluster is not the _managed_ cluster. A managed cluster is a cluster that the hub cluster manages.
+The _management_ cluster is not the _managed_ cluster. A managed cluster is a cluster that the hub cluster manages. The _management_ cluster can run on either the x86_64 architecture, supported beginning with {product-title} 4.17 and {mce} 2.7, or the s390x architecture, supported beginning with {product-title} 4.20 and {mce} 2.10.
 ====
 
 You can convert a managed cluster to a management cluster by using the `hypershift` add-on to deploy the HyperShift Operator on that cluster. Then, you can start to create the hosted cluster.
@@ -19,7 +19,7 @@ The {mce-short} supports only the default `local-cluster`, which is a hub cluste
 
 To provision {hcp} on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see "Enabling the central infrastructure management service".
 
-Each {ibm-z-title} system host must be started with the PXE images provided by the central infrastructure management. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.
+Each {ibm-z-title} system host must be started with the PXE or ISO images that are provided by the central infrastructure management. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.
 
 When you create a hosted cluster with the Agent platform, HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.
 
@@ -31,20 +31,18 @@ include::modules/hcp-ibm-z-prereqs.adoc[leveloffset=+1]
 * link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.14/html/clusters/cluster_mce_overview#advanced-config-engine[Advanced configuration]
 * link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.14/html/clusters/cluster_mce_overview#enable-cim[Enabling the central infrastructure management service]
 * xref:../../hosted_control_planes/hcp-prepare/hcp-cli.adoc#hcp-cli-terminal_hcp-cli[Installing the {hcp} command-line interface]
-* xref:../../hosted_control_planes/hcp-prepare/hcp-enable-disable.adoc[Enabling or disabling the {hcp} feature]
 
 include::modules/hcp-ibm-z-infra-reqs.adoc[leveloffset=+1]
-
-[role="_additional-resources"]
-.Additional resources
-
-* xref:../../hosted_control_planes/hcp-prepare/hcp-enable-disable.adoc[Enabling or disabling the {hcp} feature]
-
 include::modules/hcp-ibm-z-dns.adoc[leveloffset=+1]
 
 include::modules/hcp-custom-dns.adoc[leveloffset=+2]
 
-include::modules/hcp-bm-hc.adoc[leveloffset=+1]
+[id="hcp-bm-create-hc-ibm-z"]
+== Creating a hosted cluster on bare metal for {ibm-z-title}
+
+You can create a hosted cluster or import one. When the Assisted Installer is enabled as an add-on to {mce-short} and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.
+
+include::modules/hcp-bm-hc.adoc[leveloffset=+2]
 
 [role="_additional-resources"]
 .Additional resources
@@ -73,8 +71,3 @@ include::modules/hcp-ibm-z-lpar-agents.adoc[leveloffset=+2]
 include::modules/hcp-ibm-z-zvm-agents.adoc[leveloffset=+2]
 
 include::modules/hcp-ibm-z-scale-np.adoc[leveloffset=+1]
-
-[role="_additional-resources"]
-.Additional resources
-
-* xref:../../installing/installing_ibm_z/upi/installing-ibm-z.adoc#installation-operators-config[Initial Operator configuration]
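For context on the new "Creating a hosted cluster on bare metal for {ibm-z-title}" section introduced above, a minimal `hcp create cluster agent` invocation looks roughly like the following sketch. The flags come from the `hcp` CLI as documented for the Agent platform, but all values are placeholders and the exact set of options can vary by release:

[source,terminal]
----
$ hcp create cluster agent \
    --name=<hosted_cluster_name> \
    --pull-secret=<path_to_pull_secret> \
    --agent-namespace=<hosted_control_plane_namespace> \
    --base-domain=<base_domain> \
    --api-server-address=api.<hosted_cluster_name>.<base_domain> \
    --ssh-key=<path_to_ssh_public_key> \
    --node-pool-replicas=<node_pool_replica_count>
----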

modules/hcp-bm-hc.adoc

Lines changed: 0 additions & 231 deletions
@@ -22,27 +22,6 @@ On bare-metal infrastructure, you can create or import a hosted cluster. After y
 
 - Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs).
 
-- By default when you use the `hcp create cluster agent` command, the command creates a hosted cluster with configured node ports. The preferred publishing strategy for hosted clusters on bare metal exposes services through a load balancer. If you create a hosted cluster by using the web console or by using {rh-rhacm-title}, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the `servicePublishingStrategy` information in the `HostedCluster` custom resource.
-
-- Ensure that you meet the requirements described in "Requirements for {hcp} on bare metal", which includes requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster, as shown in the following example commands:
-+
-[source,terminal]
-----
-$ oc label node [compute-node-1] topology.kubernetes.io/zone=zone1
-----
-+
-[source,terminal]
-----
-$ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2
-----
-+
-[source,terminal]
-----
-$ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
-----
-
-- Ensure that you have added bare-metal nodes to a hardware inventory.
-
 .Procedure
 
 . Create a namespace by entering the following command:
@@ -88,39 +67,6 @@ $ hcp create cluster agent \
 <11> Specify the node pool replica count, such as `3`. You must specify the replica count as `0` or greater to create the same number of replicas. Otherwise, you do not create node pools.
 <12> After the `--ssh-key` flag, specify the path to the SSH key, such as `user/.ssh/id_rsa`.
 
-. Configure the service publishing strategy. By default, hosted clusters use the `NodePort` service publishing strategy because node ports are always available without additional infrastructure. However, you can configure the service publishing strategy to use a load balancer.
-
-** If you are using the default `NodePort` strategy, configure the DNS to point to the hosted cluster compute nodes, not the management cluster nodes. For more information, see "DNS configurations on bare metal".
-
-** For production environments, use the `LoadBalancer` strategy because this strategy provides certificate handling and automatic DNS resolution. The following example demonstrates changing the service publishing `LoadBalancer` strategy in your hosted cluster configuration file:
-+
-[source,yaml]
-----
-# ...
-spec:
-  services:
-  - service: APIServer
-    servicePublishingStrategy:
-      type: LoadBalancer #<1>
-  - service: Ignition
-    servicePublishingStrategy:
-      type: Route
-  - service: Konnectivity
-    servicePublishingStrategy:
-      type: Route
-  - service: OAuthServer
-    servicePublishingStrategy:
-      type: Route
-  - service: OIDC
-    servicePublishingStrategy:
-      type: Route
-  sshKey:
-    name: <ssh_key>
-# ...
-----
-+
-<1> Specify `LoadBalancer` as the API Server type. For all other services, specify `Route` as the type.
-
 . Apply the changes to the hosted cluster configuration file by entering the following command:
 +
 [source,terminal]
@@ -153,183 +99,6 @@ $ oc get pods -n <hosted_cluster_namespace>
 
 . Confirm that the hosted cluster is ready. The status of `Available: True` indicates the readiness of the cluster and the node pool status shows `AllMachinesReady: True`. These statuses indicate the healthiness of all cluster Operators.
 
-. Install MetalLB in the hosted cluster:
-+
-.. Extract the `kubeconfig` file from the hosted cluster and set the environment variable for hosted cluster access by entering the following commands:
-+
-[source,terminal]
-----
-$ oc get secret \
-  <hosted_cluster_namespace>-admin-kubeconfig \
-  -n <hosted_cluster_namespace> \
-  -o jsonpath='{.data.kubeconfig}' \
-  | base64 -d > \
-  kubeconfig-<hosted_cluster_namespace>.yaml
-----
-+
-[source,terminal]
-----
-$ export KUBECONFIG="/path/to/kubeconfig-<hosted_cluster_namespace>.yaml"
-----
-+
-.. Install the MetalLB Operator by creating the `install-metallb-operator.yaml` file:
-+
-[source,yaml]
-----
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: metallb-system
----
-apiVersion: operators.coreos.com/v1
-kind: OperatorGroup
-metadata:
-  name: metallb-operator
-  namespace: metallb-system
----
-apiVersion: operators.coreos.com/v1alpha1
-kind: Subscription
-metadata:
-  name: metallb-operator
-  namespace: metallb-system
-spec:
-  channel: "stable"
-  name: metallb-operator
-  source: redhat-operators
-  sourceNamespace: openshift-marketplace
-  installPlanApproval: Automatic
-# ...
-----
-+
-.. Apply the file by entering the following command:
-+
-[source,terminal]
-----
-$ oc apply -f install-metallb-operator.yaml
-----
-+
-.. Configure the MetalLB IP address pool by creating the `deploy-metallb-ipaddresspool.yaml` file:
-+
-[source,yaml]
-----
-apiVersion: metallb.io/v1beta1
-kind: IPAddressPool
-metadata:
-  name: metallb
-  namespace: metallb-system
-spec:
-  autoAssign: true
-  addresses:
-  - 10.11.176.71-10.11.176.75
----
-apiVersion: metallb.io/v1beta1
-kind: L2Advertisement
-metadata:
-  name: l2advertisement
-  namespace: metallb-system
-spec:
-  ipAddressPools:
-  - metallb
-# ...
-----
-+
-.. Apply the configuration by entering the following command:
-+
-[source,terminal]
-----
-$ oc apply -f deploy-metallb-ipaddresspool.yaml
-----
-+
-.. Verify the installation of MetalLB by checking the Operator status, the IP address pool, and the `L2Advertisement` resource by entering the following commands:
-+
-[source,terminal]
-----
-$ oc get pods -n metallb-system
-----
-+
-[source,terminal]
-----
-$ oc get ipaddresspool -n metallb-system
-----
-+
-[source,terminal]
-----
-$ oc get l2advertisement -n metallb-system
-----
-
-. Configure the load balancer for ingress:
-+
-.. Create the `ingress-loadbalancer.yaml` file:
-+
-[source,yaml]
-----
-apiVersion: v1
-kind: Service
-metadata:
-  annotations:
-    metallb.universe.tf/address-pool: metallb
-  name: metallb-ingress
-  namespace: openshift-ingress
-spec:
-  ports:
-  - name: http
-    protocol: TCP
-    port: 80
-    targetPort: 80
-  - name: https
-    protocol: TCP
-    port: 443
-    targetPort: 443
-  selector:
-    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
-  type: LoadBalancer
-# ...
-----
-+
-.. Apply the configuration by entering the following command:
-+
-[source,terminal]
-----
-$ oc apply -f ingress-loadbalancer.yaml
-----
-+
-.. Verify that the load balancer service works as expected by entering the following command:
-+
-[source,terminal]
-----
-$ oc get svc metallb-ingress -n openshift-ingress
-----
-+
-.Example output
-+
-[source,text]
-----
-NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
-metallb-ingress   LoadBalancer   172.31.127.129   10.11.176.71   80:30961/TCP,443:32090/TCP   16h
-----
-
-. Configure the DNS to work with the load balancer:
-+
-.. Configure the DNS for the `apps` domain by pointing the `*.apps.<hosted_cluster_namespace>.<base_domain>` wildcard DNS record to the load balancer IP address.
-+
-.. Verify the DNS resolution by entering the following command:
-+
-[source,terminal]
-----
-$ nslookup console-openshift-console.apps.<hosted_cluster_namespace>.<base_domain> <load_balancer_ip_address>
-----
-+
-.Example output
-+
-[source,text]
-----
-Server:    10.11.176.1
-Address:   10.11.176.1#53
-
-Name:    console-openshift-console.apps.my-hosted-cluster.sample-base-domain.com
-Address: 10.11.176.71
-----
-
 .Verification
 
 . Check the cluster Operators by entering the following command:
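With the MetalLB, ingress load balancer, and DNS steps removed by this commit, the surviving procedure ends at the readiness check described above. As a hedged illustration of that check (namespace and resource names are placeholders), the `Available` and `AllMachinesReady` conditions can be read from the `HostedCluster` and `NodePool` resources on the management cluster:

[source,terminal]
----
$ oc get hostedcluster <hosted_cluster_name> -n <clusters_namespace>
----

[source,terminal]
----
$ oc get nodepool -n <clusters_namespace>
----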

modules/hcp-cli-gateway.adoc

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ You can install the {hcp} command-line interface (CLI), `hcp`, by using the cont
 
 .Prerequisites
 
-* On an {product-title} cluster, you have installed {mce} 2.5 or later. The {mce-short} is automatically installed when you install Red{nbsp}Hat Advanced Cluster Management. You can also install {mce-short} without Red{nbsp}Hat Advanced Management as an Operator from the {product-title} software catalog.
+* On an {product-title} cluster, you have installed {mce} 2.7 or later. The {mce-short} is automatically installed when you install Red{nbsp}Hat Advanced Cluster Management. You can also install {mce-short} without Red{nbsp}Hat Advanced Management as an Operator from {product-title} OperatorHub.
 
 .Procedure
 
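Because the prerequisite now calls for {mce} 2.7 or later, it can help to confirm the installed version before continuing. A minimal sketch, assuming the `MultiClusterEngine` resource reports its version in `status.currentVersion`:

[source,terminal]
----
$ oc get multiclusterengine -o jsonpath='{.items[0].status.currentVersion}'
----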

modules/hcp-ibm-z-infraenv.adoc

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ spec:
 $ oc apply -f infraenv-config.yaml
 ----
 
-. To fetch the URL to download the PXE images, such as, `initrd.img`, `kernel.img`, or `rootfs.img`, which allows {ibm-z-title} machines to join as agents, enter the following command:
+. To fetch the URL to download the PXE or ISO images, such as, `initrd.img`, `kernel.img`, or `rootfs.img`, which allows {ibm-z-title} machines to join as agents, enter the following command:
 +
 [source,terminal]
 ----
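The step changed here reads the generated boot artifact URLs from the `InfraEnv` resource. As a hedged sketch (namespace and resource name are placeholders, and the exact status fields depend on the installed assisted-service version), the PXE artifact URLs are typically exposed under `status.bootArtifacts`, with the ISO URL under `status.isoDownloadURL`:

[source,terminal]
----
$ oc get infraenv <infraenv_name> -n <hosted_control_plane_namespace> \
    -o jsonpath='{.status.bootArtifacts}'
----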

modules/hcp-ibm-z-lpar-agents.adoc

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ console=ttysclp0 \
 ignition.firstboot ignition.platform.id=metal
 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \// <1>
 coreos.inst.persistent-kargs=console=ttysclp0
-ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \// <2>
+ip=<ip>::<gateway>:<netmask>::<interface>:none nameserver=<dns> \// <2>
 rd.znet=qeth,<network_adaptor_range>,layer2=1
 rd.<disk_type>=<adapter> \// <3>
 zfcp.allow_lun_scan=0
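The updated `ip=` parameter drops the host name field and names the network interface explicitly. As an illustration only, with made-up addresses and an example qeth interface name, a filled-in form of the new syntax might read:

[source,text]
----
ip=10.19.32.20::10.19.32.1:255.255.255.0::encbdd0:none nameserver=10.19.32.1
----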

modules/hcp-ibm-z-prereqs.adoc

Lines changed: 7 additions & 2 deletions
@@ -6,9 +6,9 @@
 [id="hcp-ibm-z-prereqs_{context}"]
 = Prerequisites to configure {hcp} on {ibm-z-title}
 
-* The {mce} version 2.5 or later must be installed on an {product-title} cluster. You can install {mce-short} as an Operator from the {product-title} software catalog.
+* The {mce} version 2.7 or later must be installed on an {product-title} cluster. You can install {mce-short} as an Operator from the {product-title} OperatorHub.
 
-* The {mce-short} must have at least one managed {product-title} cluster. The `local-cluster` is automatically imported in {mce-short} 2.5 and later. For more information about the `local-cluster`, see _Advanced configuration_ in the Red{nbsp}Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command:
+* The {mce-short} must have at least one managed {product-title} cluster. The `local-cluster` is automatically imported in {mce-short} 2.7 and later. For more information about the `local-cluster`, see _Advanced configuration_ in the Red{nbsp}Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command:
 +
 [source,terminal]
 ----
@@ -20,3 +20,8 @@ $ oc get managedclusters local-cluster
 * You need to enable the central infrastructure management service. For more information, see _Enabling the central infrastructure management service_.
 
 * You need to install the hosted control plane command-line interface. For more information, see _Installing the hosted control plane command-line interface_.
+
+[NOTE]
+====
+The _management_ cluster can run on either the x86_64 architecture, supported beginning with {product-title} 4.17 and {mce} 2.7, or the s390x architecture, supported beginning with {product-title} 4.20 and {mce} 2.10.
+====
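For reference, the hub cluster check named in this prerequisite (`oc get managedclusters local-cluster`) typically reports the `local-cluster` as joined and available. The following output is illustrative only, with placeholder values:

[source,text]
----
NAME            HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
local-cluster   true                                  True     True        4h
----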

modules/hcp-ibm-z-zvm-agents.adoc

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ console=ttysclp0 \
 ignition.firstboot ignition.platform.id=metal \
 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \// <1>
 coreos.inst.persistent-kargs=console=ttysclp0
-ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \// <2>
+ip=<ip>::<gateway>:<netmask>::<interface>:none nameserver=<dns> \// <2>
 rd.znet=qeth,<network_adaptor_range>,layer2=1
 rd.<disk_type>=<adapter> \// <3>
 zfcp.allow_lun_scan=0
