modules/rosa-hcp-no-console-access.adoc (1 addition, 1 deletion)
@@ -5,7 +5,7 @@
[id="rosa-hcp-no-console-access_{context}"]
= Troubleshooting access to {hybrid-console}

-In {hcp-title} clusters, the {product-title} OAuth server is hosted in the Red Hat service's AWS account while the web console service is published using the cluster's default ingress controller in the cluster's AWS account. If you can log in to your cluster using the OpenShift CLI (`oc`) but cannot access the {product-title} web console, verify that the following criteria are met:
+In {product-title} clusters, the {product-title} OAuth server is hosted in the Red Hat service's AWS account while the web console service is published using the cluster's default ingress controller in the cluster's AWS account. If you can log in to your cluster using the OpenShift CLI (`oc`) but cannot access the {product-title} web console, verify that the following criteria are met:

* The console workloads are running.
* The default ingress controller's load balancer is active.
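
For example, a quick check of both criteria from the `oc` CLI might look like the following sketch. It assumes the standard `openshift-console` and `openshift-ingress` namespaces, which is where these components run on {product-title} clusters:

[source,terminal]
----
# Verify that the console workloads are running
$ oc get pods -n openshift-console

# Verify that the default ingress controller's load balancer service
# has an external hostname assigned
$ oc get service router-default -n openshift-ingress
----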
-= Verifying access to {hybrid-console} for private {hcp-title} clusters
+= Verifying access to {hybrid-console} for private {product-title} clusters

The console of a private cluster is private by default. During cluster installation, the default Ingress Controller managed by OpenShift's Ingress Operator is configured with an internal AWS Network Load Balancer (NLB).

-If your private {hcp-title} cluster shows a `ready` status but you cannot access the {product-title} web console for the cluster, try accessing the cluster console from either within the cluster VPC or from a network that is connected to the VPC.
+If your private {product-title} cluster shows a `ready` status but you cannot access the {product-title} web console for the cluster, try accessing the cluster console from either within the cluster VPC or from a network that is connected to the VPC.
-= Verifying access to {product-title} web console for {hcp-title} cluster in ready state
+= Verifying access to {product-title} web console for {product-title} cluster in ready state

-{hcp-title} clusters return a `ready` status when the control plane hosted in the {product-title} service account becomes ready. Cluster console workloads are deployed on the cluster's worker nodes. The {product-title} web console is not available or accessible until the worker nodes have joined the cluster and the console workloads are running.
+{product-title} clusters return a `ready` status when the control plane hosted in the {product-title} service account becomes ready. Cluster console workloads are deployed on the cluster's worker nodes. The {product-title} web console is not available or accessible until the worker nodes have joined the cluster and the console workloads are running.

-If your {hcp-title} cluster is ready but you are unable to access the {product-title} web console for the cluster, wait for the worker nodes to join the cluster and retry accessing the console.
+If your {product-title} cluster is ready but you are unable to access the {product-title} web console for the cluster, wait for the worker nodes to join the cluster and retry accessing the console.

-You can either log in to the {hcp-title} cluster or use the `rosa describe machinepool` command in the `rosa` CLI to watch the nodes.
+You can either log in to the {product-title} cluster or use the `rosa describe machinepool` command in the `rosa` CLI to watch the nodes.
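
A minimal sketch of both approaches follows; `<cluster_name>` and `<machinepool_name>` are placeholders:

[source,terminal]
----
# Watch the worker nodes join from inside the cluster
$ oc get nodes --watch

# Or inspect a machine pool with the rosa CLI
$ rosa describe machinepool --cluster <cluster_name> --machinepool <machinepool_name>
----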
-The following table lists {hcp-title-first} installation error codes and what you can do to troubleshoot these errors.
+The following table lists {product-title} installation error codes and what you can do to troubleshoot these errors.

-.{hcp-title} installation error codes
+.{product-title} installation error codes
[options="header",cols="3"]
|===
| Error code | Description | Resolution
@@ -18,7 +18,7 @@ The following table lists {hcp-title-first} installation error codes and what yo
| Check the cluster installation logs for more details, or delete this cluster and retry cluster installation. If this issue persists, contact support by logging in to the link:https://access.redhat.com/support/cases/#/case/list[*Customer Support* page].

| OCM5001
-| {hcp-title} cluster provision has failed.
+| {product-title} cluster provision has failed.
| Check the cluster installation logs for more details, or delete this cluster and retry cluster installation. If this issue persists, contact support by logging in to the link:https://access.redhat.com/support/cases/#/case/list[*Customer Support* page].

| OCM5002
@@ -27,15 +27,15 @@ The following table lists {hcp-title-first} installation error codes and what yo
| OCM5003
| Unable to establish an AWS client to provision the cluster.
-| You must create several role resources on your AWS account to create and manage a {hcp-title} cluster. Ensure that your provided AWS credentials are correct and retry cluster installation.
+| You must create several role resources on your AWS account to create and manage a {product-title} cluster. Ensure that your provided AWS credentials are correct and retry cluster installation.

-For more information about {hcp-title} IAM role resources, see _ROSA IAM role resources_ in the _Additional resources_ section.
+For more information about {product-title} IAM role resources, see _ROSA IAM role resources_ in the _Additional resources_ section.

| OCM5004
| Unable to establish a cross-account AWS client to provision the cluster.
-| You must create several role resources on your AWS account to create and manage a {hcp-title} cluster. Ensure that your provided AWS credentials are correct and retry cluster installation.
+| You must create several role resources on your AWS account to create and manage a {product-title} cluster. Ensure that your provided AWS credentials are correct and retry cluster installation.

-For more information about {hcp-title} IAM role resources, see _ROSA IAM role resources_ in the _Additional resources_ section.
+For more information about {product-title} IAM role resources, see _ROSA IAM role resources_ in the _Additional resources_ section.

| OCM5005
| Failed to retrieve AWS subnets defined for the cluster.
@@ -55,7 +55,7 @@ For more information about {hcp-title} IAM role resources, see _ROSA IAM role re
| OCM5009
| The cluster version could not be found.
-| Ensure that the configured version ID matches a valid {hcp-title} version.
+| Ensure that the configured version ID matches a valid {product-title} version.
= Verifying installation of {product-title} clusters
If the {hcp-title} cluster is in the installing state for over 30 minutes and has not become ready, ensure that the AWS account environment is prepared for the required cluster configurations. If the AWS account environment is configured correctly, try deleting and recreating the cluster. If the problem persists, contact support.
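
To investigate a stalled installation, you can stream the installation logs and verify your AWS account quota with the `rosa` CLI, as in this sketch; `<cluster_name>` is a placeholder:

[source,terminal]
----
# Stream the cluster installation logs
$ rosa logs install --cluster <cluster_name> --watch

# Verify that the AWS account has sufficient quota
$ rosa verify quota
----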
-<.> The `--dest-dir` argument specifies that `oc adm must-gather` stores the packet captures in directories that are relative to `/tmp/captures` on the client machine. You can specify any writable directory.
-<.> When `tcpdump` is run in the debug pod that `oc adm must-gather` starts, the `--source-dir` argument specifies that the packet captures are temporarily stored in the `/tmp/tcpdump` directory on the pod.
-<.> The `--image` argument specifies a container image that includes the `tcpdump` command.
-<.> The `--node-selector` argument and example value specify that the packet captures are performed on the worker nodes. As an alternative, you can specify the `--node-name` argument instead to run the packet capture on a single node. If you omit both the `--node-selector` and the `--node-name` arguments, the packet captures are performed on all nodes.
-<.> The `--host-network=true` argument is required so that the packet captures are performed on the network interfaces of the node.
-<.> The `--timeout` argument and value specify that the debug pod runs for 30 seconds. If you do not specify the `--timeout` argument and a duration, the debug pod runs for 10 minutes.
-<.> The `-i any` argument for the `tcpdump` command specifies that packets are captured on all network interfaces. As an alternative, you can specify a network interface name.
+<1> The `--dest-dir` argument specifies that `oc adm must-gather` stores the packet captures in directories that are relative to `/tmp/captures` on the client machine. You can specify any writable directory.
+<2> When `tcpdump` is run in the debug pod that `oc adm must-gather` starts, the `--source-dir` argument specifies that the packet captures are temporarily stored in the `/tmp/tcpdump` directory on the pod.
+<3> The `--image` argument specifies a container image that includes the `tcpdump` command.
+<4> The `--node-selector` argument and example value specify that the packet captures are performed on the worker nodes. As an alternative, you can specify the `--node-name` argument instead to run the packet capture on a single node. If you omit both the `--node-selector` and the `--node-name` arguments, the packet captures are performed on all nodes.
+<5> The `--host-network=true` argument is required so that the packet captures are performed on the network interfaces of the node.
+<6> The `--timeout` argument and value specify that the debug pod runs for 30 seconds. If you do not specify the `--timeout` argument and a duration, the debug pod runs for 10 minutes.
+<7> The `-i any` argument for the `tcpdump` command specifies that packets are captured on all network interfaces. As an alternative, you can specify a network interface name.
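
Assembled, the command that these callouts describe might look like the following sketch. The `network-tools` image name is an assumption; use any image that includes `tcpdump`:

[source,terminal]
----
$ oc adm must-gather \
    --dest-dir /tmp/captures \
    --source-dir '/tmp/tcpdump/' \
    --image registry.redhat.io/openshift4/network-tools-rhel8:latest \
    --node-selector 'node-role.kubernetes.io/worker' \
    --host-network=true \
    --timeout 30s \
    -- \
    tcpdump -i any \
    -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300
----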
endif::openshift-origin[]
. Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets.
-* xref:../rosa_getting_started/rosa-sts-getting-started-workflow.adoc#rosa-sts-overview-of-the-deployment-workflow[Overview of the ROSA with STS deployment workflow]
-* xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc#rosa-sts-deleting-cluster[Deleting a ROSA cluster]
+//* xref:../rosa_getting_started/rosa-sts-getting-started-workflow.adoc#rosa-sts-overview-of-the-deployment-workflow[Overview of the ROSA with STS deployment workflow]
+//* xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc#rosa-sts-deleting-cluster[Deleting a ROSA cluster]
* xref:../support/troubleshooting/rosa-troubleshooting-installations-hcp.adoc#rosa-troubleshooting-installations-hcp[Troubleshooting ROSA with HCP cluster installations]
rosa_hcp/rosa-hcp-creating-cluster-with-aws-kms-key.adoc (3 additions, 1 deletion)
@@ -100,6 +100,8 @@ ifndef::openshift-rosa-hcp[]
// * xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for ROSA with STS]]
* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-understanding-deployment-modes_rosa-sts-creating-a-cluster-with-customizations[Understanding the auto and manual deployment modes]
-* xref:../support/troubleshooting/rosa-troubleshooting-installations-hcp.adoc#rosa-troubleshooting-installations-hcp[Troubleshooting ROSA with HCP cluster installations]
+endif::openshift-rosa-hcp[]
* xref:../support/getting-support.adoc#getting-support[Getting support for Red{nbsp}Hat OpenShift Service on AWS]
+ifdef::openshift-rosa-hcp[]
+* xref:../support/troubleshooting/rosa-troubleshooting-installations-hcp.adoc#rosa-troubleshooting-installations-hcp[Troubleshooting ROSA with HCP cluster installations]