// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-about.adoc

:_mod-docs-content-type: PROCEDURE
[id="quick-start-opentelemetry_{context}"]
= Quick start with OpenTelemetry

:FeatureName: The OpenTelemetry Protocol (OTLP) output log forwarder
include::snippets/technology-preview.adoc[]

To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps:

.Prerequisites
* Cluster administrator permissions

.Procedure

. Install the {clo}, {loki-op}, and {coo-first} from OperatorHub.

. Create a `LokiStack` custom resource (CR) in the `openshift-logging` namespace:
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small
  storage:
    schemas:
    - effectiveDate: '2024-10-01'
      version: v13
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
----
+
[NOTE]
====
Ensure that the `logging-loki-s3` secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration".
====
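+
For example, for Amazon S3-compatible object storage, the secret might be created with a command similar to the following. The bucket name, endpoint, region, and credential values shown here are placeholders, and the exact keys depend on your storage provider, so verify them against your object storage documentation:
+
[source,terminal]
----
$ oc create secret generic logging-loki-s3 \
  --from-literal=bucketnames="<bucket_name>" \
  --from-literal=endpoint="<bucket_endpoint>" \
  --from-literal=region="<bucket_region>" \
  --from-literal=access_key_id="<access_key_id>" \
  --from-literal=access_key_secret="<access_key_secret>" \
  -n openshift-logging
----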

. Create a service account for the collector:
+
[source,terminal]
----
$ oc create sa collector -n openshift-logging
----
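+
Optionally, confirm that the service account was created:
+
[source,terminal]
----
$ oc get sa collector -n openshift-logging
----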

. Allow the collector's service account to write data to the `LokiStack` CR:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector
----
+
[NOTE]
====
The `ClusterRole` resource is created automatically during the {clo} installation and does not need to be created manually.
====
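+
To confirm that the `ClusterRole` is present before binding it, you can list it directly:
+
[source,terminal]
----
$ oc get clusterrole logging-collector-logs-writer
----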

. Allow the collector's service account to collect logs:
+
[source,terminal]
----
$ oc project openshift-logging
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector
----
+
[NOTE]
====
The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your `ClusterLogForwarder` configuration to include them. Assign roles based on the specific log types required for your environment.
====
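+
Each `oc adm policy add-cluster-role-to-user` command creates or updates a cluster role binding for the `collector` service account. You can review the resulting bindings, for example:
+
[source,terminal]
----
$ oc get clusterrolebinding -o name | grep collect
----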

. Create a `UIPlugin` CR to enable the *Log* section in the *Observe* tab:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki
----
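+
As with the other custom resources in this procedure, you can save the manifest to a file and apply it. The file name used here is arbitrary:
+
[source,terminal]
----
$ oc apply -f uiplugin.yaml
----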

. Create a `ClusterLogForwarder` CR to configure log forwarding:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled" # <1>
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: loki-otlp
    type: lokiStack # <2>
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      dataModel: Otel # <3>
      authentication:
        token:
          from: serviceAccount
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - loki-otlp
----
<1> Use the annotation to enable the `Otel` data model, which is a Technology Preview feature.
<2> Define the output type as `lokiStack`.
<3> Specify the OpenTelemetry data model for the output.
+
[NOTE]
====
You cannot use `lokiStack.labelKeys` when `dataModel` is `Otel`. For similar functionality, see "Configuring LokiStack for OTLP data ingestion".
====
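+
After you create the `ClusterLogForwarder` CR, the {clo} deploys collector pods in the `openshift-logging` namespace. As a quick sanity check, you can confirm that the pods are running and inspect the status conditions of the CR, for example:
+
[source,terminal]
----
$ oc get pods -n openshift-logging
----
+
[source,terminal]
----
$ oc get clusterlogforwarder collector -n openshift-logging -o yaml
----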

.Verification
* Verify that OTLP is functioning correctly by going to *Observe* -> *OpenShift Logging* -> *LokiStack* -> *Writes* in the OpenShift web console, and checking *Distributor - Structured Metadata*.