// Module included in the following assemblies:
// * hosted-control-planes/hcp-prepare/hcp-sizing-guidance.adoc

:_mod-docs-content-type: CONCEPT
[id="hcp-sizing-calculation_{context}"]
= Sizing calculation example

This example provides sizing guidance for the following scenario:

* Three bare-metal workers that are labeled as `hypershift.openshift.io/control-plane` nodes
* A `maxPods` value set to 500
* An expected API rate of medium, or about 1000 QPS, according to the load-based limits

.Limit inputs
|===
| Limit description | Server 1 | Server 2

| Number of vCPUs on worker node
| 64
| 128

| Memory on worker node (GiB)
| 128
| 256

| Maximum pods per worker
| 500
| 500

| Number of workers used to host control planes
| 3
| 3

| Maximum QPS target rate (API requests per second)
| 1000
| 1000
|===

.Sizing calculation example
|===
| Calculated values based on worker node size and API rate | Server 1 | Server 2 | Calculation notes

| Maximum {hcp} per worker based on vCPU requests
| 12.8
| 25.6
| Number of worker vCPUs ÷ 5 total vCPU requests per hosted control plane

| Maximum {hcp} per worker based on vCPU usage
| 5.4
| 10.7
| Number of worker vCPUs ÷ (2.9 measured idle vCPU usage + (QPS target rate ÷ 1000) × 9.0 measured vCPU usage per 1000 QPS increase)

| Maximum {hcp} per worker based on memory requests
| 7.1
| 14.2
| Worker memory GiB ÷ 18 GiB total memory request per hosted control plane

| Maximum {hcp} per worker based on memory usage
| 9.4
| 18.8
| Worker memory GiB ÷ (11.1 measured idle memory usage + (QPS target rate ÷ 1000) × 2.5 measured memory usage per 1000 QPS increase)

| Maximum {hcp} per worker based on per-node pod limit
| 6.7
| 6.7
| 500 `maxPods` ÷ 75 pods per hosted control plane

| Minimum of previously mentioned maximums
| 5.4
| 6.7
|

| Limiting factor
| vCPU usage
| `maxPods`
|

| Maximum number of {hcp} within a management cluster
| 16
| 20
| Minimum of previously mentioned maximums × 3 control-plane workers
|===
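
The following Python sketch is not part of the product; it is a minimal, illustrative reimplementation of the table arithmetic, using only the measured constants and inputs listed above, so that you can repeat the calculation for your own worker sizes.

[source,python]
----
# Minimal sketch of the sizing arithmetic in this example. The constants are
# the measured values from the tables above, not a supported API.
VCPU_REQUEST_PER_HCP = 5         # total vCPU requests per hosted control plane
IDLE_VCPU_USAGE = 2.9            # measured idle vCPU usage
VCPU_USAGE_PER_1000_QPS = 9.0    # measured vCPU increase per 1000 QPS
MEMORY_REQUEST_PER_HCP = 18      # GiB of memory requests per hosted control plane
IDLE_MEMORY_USAGE = 11.1         # GiB of measured idle memory usage
MEMORY_USAGE_PER_1000_QPS = 2.5  # GiB of measured memory increase per 1000 QPS
PODS_PER_HCP = 75                # pods per hosted control plane

def max_hcp_per_worker(vcpus, memory_gib, max_pods, qps):
    """Return the per-worker maximums, keyed by the limit that produces them."""
    qps_factor = qps / 1000
    return {
        "vCPU requests": vcpus / VCPU_REQUEST_PER_HCP,
        "vCPU usage": vcpus / (IDLE_VCPU_USAGE + qps_factor * VCPU_USAGE_PER_1000_QPS),
        "memory requests": memory_gib / MEMORY_REQUEST_PER_HCP,
        "memory usage": memory_gib / (IDLE_MEMORY_USAGE + qps_factor * MEMORY_USAGE_PER_1000_QPS),
        "maxPods": max_pods / PODS_PER_HCP,
    }

# Server 1: 64 vCPUs, 128 GiB of memory, maxPods 500, medium API rate (~1000 QPS)
limits = max_hcp_per_worker(vcpus=64, memory_gib=128, max_pods=500, qps=1000)
binding = min(limits, key=limits.get)
print(f"Limiting factor: {binding}")                        # vCPU usage
print(f"Cluster capacity: {int(min(limits.values()) * 3)}") # 16 with 3 workers
----

Note that the per-worker maximums stay fractional: the capacity is rounded down only after multiplying by the number of control-plane workers, which is how 5.4 × 3 yields 16.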

.{hcp-capital} capacity metrics
|===
| Name | Description

| `mce_hs_addon_request_based_hcp_capacity_gauge`
| Estimated maximum number of {hcp} that the cluster can host, based on the resource requests of a highly available {hcp}.

| `mce_hs_addon_low_qps_based_hcp_capacity_gauge`
| Estimated maximum number of {hcp} that the cluster can host if all {hcp} make around 50 QPS to the cluster's Kube API server.

| `mce_hs_addon_medium_qps_based_hcp_capacity_gauge`
| Estimated maximum number of {hcp} that the cluster can host if all {hcp} make around 1000 QPS to the cluster's Kube API server.

| `mce_hs_addon_high_qps_based_hcp_capacity_gauge`
| Estimated maximum number of {hcp} that the cluster can host if all {hcp} make around 2000 QPS to the cluster's Kube API server.

| `mce_hs_addon_average_qps_based_hcp_capacity_gauge`
| Estimated maximum number of {hcp} that the cluster can host, based on the existing average QPS of {hcp}. If you do not have an active {hcp}, you can expect low QPS.
|===
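
If these gauges are scraped by your management cluster's monitoring stack, you can also read them programmatically through the standard Prometheus HTTP API, as in the following minimal sketch. The endpoint URL and bearer token are placeholders that depend on your environment; only the metric names come from the table above.

[source,python]
----
# Minimal sketch: read a capacity gauge over the Prometheus HTTP API.
# PROM_URL and TOKEN are placeholders; substitute your own query endpoint
# (for example, a Thanos Querier route) and credentials.
import requests

PROM_URL = "https://<prometheus-or-thanos-route>/api/v1/query"  # placeholder
TOKEN = "<bearer-token>"                                        # placeholder

def query_gauge(metric: str):
    """Return the current value of a gauge, or None if it is not reported."""
    resp = requests.get(
        PROM_URL,
        params={"query": metric},
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    # Each sample is {"metric": ..., "value": [timestamp, "value-as-string"]}.
    return float(result[0]["value"][1]) if result else None

# The medium-QPS gauge matches the ~1000 QPS assumption in this example.
print(query_gauge("mce_hs_addon_medium_qps_based_hcp_capacity_gauge"))
----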