
Commit bfe6f7f (2 parents: 6dfeee4 + 57b9f8a)

Merge pull request #97733 from AedinC/OSDOCS-14506

OSDOCS-14506: Pruning Support book.

18 files changed (+83, -86 lines)

_topic_maps/_topic_map_rosa.yml (3 additions, 3 deletions)

@@ -392,10 +392,10 @@ Topics:
 # File: troubleshooting-installations
 - Name: Review your cluster notifications
   File: mos-tshoot-cluster-notifications
-- Name: Troubleshooting ROSA installations
+- Name: Troubleshooting Red Hat OpenShift Service on AWS classic architecture installations
   File: rosa-troubleshooting-installations
-- Name: Troubleshooting ROSA with HCP installations
-  File: rosa-troubleshooting-installations-hcp
+# - Name: Troubleshooting Red Hat OpenShift Service on AWS installations
+# File: rosa-troubleshooting-installations-hcp
 - Name: Troubleshooting networking
   File: rosa-troubleshooting-networking
 - Name: Verifying node health

_topic_maps/_topic_map_rosa_hcp.yml (3 additions, 3 deletions)

@@ -402,9 +402,9 @@ Topics:
 # File: troubleshooting-installations
 - Name: Review your cluster notifications
   File: mos-tshoot-cluster-notifications
-- Name: Troubleshooting ROSA installations
-  File: rosa-troubleshooting-installations
-- Name: Troubleshooting ROSA with HCP installations
+# - Name: Troubleshooting ROSA installations
+# File: rosa-troubleshooting-installations
+- Name: Troubleshooting Red Hat OpenShift Service on AWS installations
   File: rosa-troubleshooting-installations-hcp
 - Name: Troubleshooting networking
   File: rosa-troubleshooting-networking

modules/rosa-hcp-no-console-access.adoc (1 addition, 1 deletion)

@@ -5,7 +5,7 @@
 [id="rosa-hcp-no-console-access_{context}"]
 = Troubleshooting access to {hybrid-console}

-In {hcp-title} clusters, the {product-title} OAuth server is hosted in the Red Hat service's AWS account while the web console service is published using the cluster's default ingress controller in the cluster's AWS account. If you can log in to your cluster using the OpenShift CLI (oc) but cannot access the {product-title} web console, verify the following criteria are met:
+In {product-title} clusters, the {product-title} OAuth server is hosted in the Red Hat service's AWS account while the web console service is published using the cluster's default ingress controller in the cluster's AWS account. If you can log in to your cluster using the OpenShift CLI (oc) but cannot access the {product-title} web console, verify the following criteria are met:

 * The console workloads are running.
 * The default ingress controller's load balancer is active.
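
For reference, the two criteria in that module can be checked from the `oc` CLI. A minimal sketch, assuming the stock `openshift-console` and `openshift-ingress` namespaces and the default `router-default` Ingress Controller:

$ oc -n openshift-console get pods                 # console workloads should be Running
$ oc -n openshift-ingress get svc router-default   # the load balancer should report an external hostname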

modules/rosa-hcp-private-ready-no-console-access.adoc (2 additions, 2 deletions)

@@ -3,10 +3,10 @@
 // * support/rosa-troubleshooting-installations-hcp .adoc
 :_mod-docs-content-type: PROCEDURE
 [id="rosa-hcp-private-ready-no-console-access_{context}"]
-= Verifying access to {hybrid-console} for private {hcp-title} clusters
+= Verifying access to {hybrid-console} for private {product-title} clusters

 The console of the private cluster is private by default. During cluster installation, the default Ingress Controller managed by OpenShift's Ingress Operator is configured with an internal AWS Network Load Balancer (NLB).

-If your private {hcp-title} cluster shows a `ready` status but you cannot access the {product-title} web console for the cluster, try accessing the cluster console from either within the cluster VPC or from a network that is connected to the VPC.
+If your private {product-title} cluster shows a `ready` status but you cannot access the {product-title} web console for the cluster, try accessing the cluster console from either within the cluster VPC or from a network that is connected to the VPC.
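For reference, a minimal probe of private console access from a host inside the cluster VPC, or on a network connected to it; the host below is a placeholder, assuming the usual console-openshift-console.apps.<cluster_domain> route pattern:

$ curl -skI https://console-openshift-console.apps.<cluster_domain>   # any HTTP response means the console is reachable; a timeout points to a network path issue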
modules/rosa-hcp-ready-no-console-access.adoc (4 additions, 4 deletions)

@@ -3,10 +3,10 @@
 // * support/rosa-troubleshooting-installations-hcp .adoc
 :_mod-docs-content-type: PROCEDURE
 [id="rosa-hcp-ready-no-console-access_{context}"]
-= Verifying access to {product-title} web console for {hcp-title} cluster in ready state
+= Verifying access to {product-title} web console for {product-title} cluster in ready state

-{hcp-title} clusters return a `ready` status when the control plane hosted in the {product-title} service account becomes ready. Cluster console workloads are deployed on the cluster's worker nodes. The {product-title} web console will not be available and accessible until the worker nodes have joined the cluster and console workloads are running.
+{product-title} clusters return a `ready` status when the control plane hosted in the {product-title} service account becomes ready. Cluster console workloads are deployed on the cluster's worker nodes. The {product-title} web console will not be available and accessible until the worker nodes have joined the cluster and console workloads are running.

-If your {hcp-title} cluster is ready but you are unable to access the {product-title} web console for the cluster, wait for the worker nodes to join the cluster and retry accessing the console.
+If your {product-title} cluster is ready but you are unable to access the {product-title} web console for the cluster, wait for the worker nodes to join the cluster and retry accessing the console.

-You can either log in to the {hcp-title} cluster or use the `rosa describe machinepool` command in the `rosa` CLI watch the nodes.
+You can either log in to the {product-title} cluster or use the `rosa describe machinepool` command in the `rosa` CLI to watch the nodes.
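
For reference, watching worker nodes join can look like the following; the cluster and machine pool names are placeholders, assuming current `rosa` CLI syntax:

$ rosa describe machinepool --cluster=<cluster_name> --machinepool=<machinepool_name>   # shows replica counts for the pool
$ oc get nodes --watch   # after logging in to the cluster, watch nodes reach the Ready state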

modules/rosa-troubleshoot-hcp-install.adoc (9 additions, 9 deletions)

@@ -4,11 +4,11 @@

 :_mod-docs-content-type: CONCEPT
 [id="rosa-troubleshoot-hcp-install_{context}"]
-= Troubleshooting {hcp-title} installation error codes
+= Troubleshooting {product-title} installation error codes

-The following table lists {hcp-title-first} installation error codes and what you can do to troubleshoot these errors.
+The following table lists {product-title} installation error codes and what you can do to troubleshoot these errors.

-.{hcp-title} installation error codes
+.{product-title} installation error codes
 [options="header",cols="3"]
 |===
 | Error code | Description | Resolution
@@ -18,7 +18,7 @@
 | Check the cluster installation logs for more details, or delete this cluster and retry cluster installation. If this issue persists, contact support by logging in to the link:https://access.redhat.com/support/cases/#/case/list[*Customer Support* page].

 | OCM5001
-| {hcp-title} cluster provision has failed.
+| {product-title} cluster provision has failed.
 | Check the cluster installation logs for more details, or delete this cluster and retry cluster installation. If this issue persists, contact support by logging in to the link:https://access.redhat.com/support/cases/#/case/list[*Customer Support* page].

 | OCM5002
@@ -27,15 +27,15 @@

 | OCM5003
 | Unable to establish an AWS client to provision the cluster.
-| You must create several role resources on your AWS account to create and manage a {hcp-title} cluster. Ensure that your provided AWS credentials are correct and retry cluster installation.
+| You must create several role resources on your AWS account to create and manage a {product-title} cluster. Ensure that your provided AWS credentials are correct and retry cluster installation.

-For more information about {hcp-title} IAM role resources, see _ROSA IAM role resources_ in the _Additional resources_ section.
+For more information about {product-title} IAM role resources, see _ROSA IAM role resources_ in the _Additional resources_ section.

 | OCM5004
 | Unable to establish a cross-account AWS client to provision the cluster.
-| You must create several role resources on your AWS account to create and manage a {hcp-title} cluster. Ensure that your provided AWS credentials are correct and retry cluster installation.
+| You must create several role resources on your AWS account to create and manage a {product-title} cluster. Ensure that your provided AWS credentials are correct and retry cluster installation.

-For more information about {hcp-title} IAM role resources, see _ROSA IAM role resources_ in the _Additional resources_ section.
+For more information about {product-title} IAM role resources, see _ROSA IAM role resources_ in the _Additional resources_ section.

 | OCM5005
 | Failed to retrieve AWS subnets defined for the cluster.
@@ -55,7 +55,7 @@

 | OCM5009
 | The cluster version could not be found.
-| Ensure that the configured version ID matches a valid {hcp-title} version.
+| Ensure that the configured version ID matches a valid {product-title} version.

 | OCM5010
 | Failed to tag subnets for the cluster.
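
For reference, alongside the OCM5009 row: one way to compare the configured version ID against valid versions is the `rosa` CLI, assuming the `--hosted-cp` flag available in current releases:

$ rosa list versions --hosted-cp   # lists versions valid for hosted-control-plane clusters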

modules/rosa-verify-hcp-install.adoc (1 addition, 1 deletion)

@@ -3,6 +3,6 @@
 // * support/rosa-troubleshooting-installations-hcp .adoc
 :_mod-docs-content-type: PROCEDURE
 [id="rosa-verify-hcp-install_{context}"]
-= Verifying installation of {hcp-title} clusters
+= Verifying installation of {product-title} clusters

 If the {hcp-title} cluster is in the installing state for over 30 minutes and has not become ready, ensure the AWS account environment is prepared for the required cluster configurations. If the AWS account environment is prepared for the required cluster configurations correctly, try to delete and recreate the cluster. If the problem persists, contact support.
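
For reference, a sketch of checking on an installation that stays in the installing state; the cluster name is a placeholder:

$ rosa describe cluster --cluster=<cluster_name>       # reports the current cluster state
$ rosa logs install --cluster=<cluster_name> --watch   # streams the installation logs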

modules/support-collecting-host-network-trace.adoc (14 additions, 14 deletions)

@@ -74,23 +74,23 @@ ifdef::openshift-origin[]
 [source,terminal]
 ----
 $ oc adm must-gather \
---dest-dir /tmp/captures \ <.>
---source-dir '/tmp/tcpdump/' \ <.>
---image quay.io/openshift/origin-network-tools:latest \ <.>
---node-selector 'node-role.kubernetes.io/worker' \ <.>
---host-network=true \ <.>
---timeout 30s \ <.>
+--dest-dir /tmp/captures \ <1>
+--source-dir '/tmp/tcpdump/' \ <2>
+--image quay.io/openshift/origin-network-tools:latest \ <3>
+--node-selector 'node-role.kubernetes.io/worker' \ <4>
+--host-network=true \ <5>
+--timeout 30s \ <6>
 -- \
-tcpdump -i any \ <.>
+tcpdump -i any \ <7>
 -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300
 ----
-<.> The `--dest-dir` argument specifies that `oc adm must-gather` stores the packet captures in directories that are relative to `/tmp/captures` on the client machine. You can specify any writable directory.
-<.> When `tcpdump` is run in the debug pod that `oc adm must-gather` starts, the `--source-dir` argument specifies that the packet captures are temporarily stored in the `/tmp/tcpdump` directory on the pod.
-<.> The `--image` argument specifies a container image that includes the `tcpdump` command.
-<.> The `--node-selector` argument and example value specifies to perform the packet captures on the worker nodes. As an alternative, you can specify the `--node-name` argument instead to run the packet capture on a single node. If you omit both the `--node-selector` and the `--node-name` argument, the packet captures are performed on all nodes.
-<.> The `--host-network=true` argument is required so that the packet captures are performed on the network interfaces of the node.
-<.> The `--timeout` argument and value specify to run the debug pod for 30 seconds. If you do not specify the `--timeout` argument and a duration, the debug pod runs for 10 minutes.
-<.> The `-i any` argument for the `tcpdump` command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name.
+<1> The `--dest-dir` argument specifies that `oc adm must-gather` stores the packet captures in directories that are relative to `/tmp/captures` on the client machine. You can specify any writable directory.
+<2> When `tcpdump` is run in the debug pod that `oc adm must-gather` starts, the `--source-dir` argument specifies that the packet captures are temporarily stored in the `/tmp/tcpdump` directory on the pod.
+<3> The `--image` argument specifies a container image that includes the `tcpdump` command.
+<4> The `--node-selector` argument and example value specifies to perform the packet captures on the worker nodes. As an alternative, you can specify the `--node-name` argument instead to run the packet capture on a single node. If you omit both the `--node-selector` and the `--node-name` argument, the packet captures are performed on all nodes.
+<5> The `--host-network=true` argument is required so that the packet captures are performed on the network interfaces of the node.
+<6> The `--timeout` argument and value specify to run the debug pod for 30 seconds. If you do not specify the `--timeout` argument and a duration, the debug pod runs for 10 minutes.
+<7> The `-i any` argument for the `tcpdump` command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name.
 endif::openshift-origin[]

 . Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets.
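
For reference, once `oc adm must-gather` copies the captures back to the client, a sketch of reading one with `tcpdump`; the path is illustrative, since must-gather nests its output in an image-specific directory:

$ tcpdump -nn -r /tmp/captures/<image_directory>/<capture_file>.pcap | head   # print the first captured packets without name resolution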

rosa_hcp/rosa-hcp-aws-private-creating-cluster.adoc (7 additions, 6 deletions)

@@ -16,21 +16,22 @@ include::modules/rosa-additional-principals-overview.adoc[leveloffset=+1]
 include::modules/rosa-additional-principals-create.adoc[leveloffset=+2]
 include::modules/rosa-additional-principals-edit.adoc[leveloffset=+2]

-//unclear on why this is here given this is a HCP assembly
-ifndef::openshift-rosa-hcp[]
+ifdef::openshift-rosa[]
 [id="next-steps_rosa-hcp-aws-private-creating-cluster"]
 == Next steps
 xref:../rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc#rosa-sts-config-identity-providers[Configuring identity providers]
+endif::openshift-rosa[]

 [role="_additional-resources"]
 [id="additional-resources_rosa-hcp-aws-privatelink-creating-cluster"]
 == Additional resources

 * xref:../rosa_planning/rosa-hcp-aws-prereqs.adoc#rosa-hcp-firewall-prerequisites_rosa-hcp-aws-prereqs[AWS PrivateLink firewall prerequisites]
-// This link must remain hidden until the HCP migration is completed
+// Commenting out until pruning of other books is complete as these are breaking the build for Pruning Support task
 // * xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-hcp-firewall-prerequisites_rosa-sts-aws-prereqs[AWS PrivateLink firewall prerequisites]
-* xref:../rosa_getting_started/rosa-sts-getting-started-workflow.adoc#rosa-sts-overview-of-the-deployment-workflow[Overview of the ROSA with STS deployment workflow]
-* xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc#rosa-sts-deleting-cluster[Deleting a ROSA cluster]
-* xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[ROSA architecture models]
+//* xref:../rosa_getting_started/rosa-sts-getting-started-workflow.adoc#rosa-sts-overview-of-the-deployment-workflow[Overview of the ROSA with STS deployment workflow]
+//* xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc#rosa-sts-deleting-cluster[Deleting a ROSA cluster]
+//* xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[ROSA architecture models]
+ifdef::openshift-rosa-hcp[]
 * xref:../support/troubleshooting/rosa-troubleshooting-installations-hcp.adoc#rosa-troubleshooting-installations-hcp[Troubleshooting ROSA with HCP cluster installations]
 endif::openshift-rosa-hcp[]

rosa_hcp/rosa-hcp-creating-cluster-with-aws-kms-key.adoc (3 additions, 1 deletion)

@@ -100,6 +100,8 @@ ifndef::openshift-rosa-hcp[]
 // * xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for ROSA with STS]]
 * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-understanding-deployment-modes_rosa-sts-creating-a-cluster-with-customizations[Understanding the auto and manual deployment modes]
 * link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html[Creating OpenID Connect (OIDC) identity providers]
-* xref:../support/troubleshooting/rosa-troubleshooting-installations-hcp.adoc#rosa-troubleshooting-installations-hcp[Troubleshooting ROSA with HCP cluster installations]
+endif::openshift-rosa-hcp[]
 * xref:../support/getting-support.adoc#getting-support[Getting support for Red{nbsp}Hat OpenShift Service on AWS]
+ifdef::openshift-rosa-hcp[]
+* xref:../support/troubleshooting/rosa-troubleshooting-installations-hcp.adoc#rosa-troubleshooting-installations-hcp[Troubleshooting ROSA with HCP cluster installations]
 endif::openshift-rosa-hcp[]
