Commit 0fdd439

Merge pull request #103273 from gwynnemonahan/OSDOCS-17450
OSDOCS-17450 [NETOBSERV] Module short descriptions to configuring-operator.adoc
2 parents c7c50a8 + 4449cab commit 0fdd439

11 files changed: +117 -84 lines changed

modules/network-observability-configuring-FLP-sampling.adoc

Lines changed: 3 additions & 3 deletions
@@ -4,10 +4,10 @@
 
 :_mod-docs-content-type: PROCEDURE
 [id="network-observability-config-FLP-sampling_{context}"]
+= Updating the FlowCollector resource
 
-= Updating the Flow Collector resource
-
-As an alternative to editing YAML in the {product-title} web console, you can configure specifications, such as eBPF sampling, by patching the `flowcollector` custom resource (CR):
+[role="_abstract"]
+As an alternative to using the web console, use the `oc patch` command with the `flowcollector` custom resource to quickly update specific specifications, such as eBPF sampling.
 
 .Procedure
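
For reference, a patch along these lines updates the sampling value from the CLI. This is a minimal sketch: the instance name `cluster` and the `/spec/agent/ebpf/sampling` path come from the sample resource elsewhere in this commit, while the target value of `1` is an assumption chosen for illustration.

[source,terminal]
----
$ oc patch flowcollector cluster --type=json \
  -p '[{"op": "replace", "path": "/spec/agent/ebpf/sampling", "value": 1}]'
----

A value of `1` (or `0`) samples every flow, as noted in the sample resource callouts.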

modules/network-observability-configuring-quickfilters-flowcollector.adoc

Lines changed: 5 additions & 2 deletions
@@ -2,11 +2,14 @@
 
 // * networking/network_observability/configuring-operators.adoc
 
-:_mod-docs-content-type: PROCEDURE
+:_mod-docs-content-type: REFERENCE
 [id="network-observability-config-quick-filters_{context}"]
 = Configuring quick filters
 
-You can modify the filters in the `FlowCollector` resource. Exact matches are possible using double-quotes around values. Otherwise, partial matches are used for textual values. The bang (!) character, placed at the end of a key, means negation. See the sample `FlowCollector` resource for more context about modifying the YAML.
+[role="_abstract"]
+Use the list of available source, destination, and universal filter keys to modify quick filters within the `FlowCollector` resource.
+
+Exact matches are possible using double-quotes around values. Otherwise, partial matches are used for textual values. The bang (!) character, placed at the end of a key, means negation. See the sample `FlowCollector` resource for more context about modifying the YAML.
 
 [NOTE]
 ====

modules/network-observability-enriched-flows.adoc

Lines changed: 6 additions & 1 deletion
@@ -6,7 +6,12 @@
 [id="network-observability-enriched-flows_{context}"]
 = Export enriched network flow data
 
-You can send network flows to Kafka, IPFIX, the Red{nbsp}Hat build of OpenTelemetry, or all three at the same time. For Kafka or IPFIX, any processor or storage that supports those inputs, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data. For OpenTelemetry, network flow data and metrics can be exported to a compatible OpenTelemetry endpoint, such as Red{nbsp}Hat build of OpenTelemetry or Prometheus.
+[role="_abstract"]
+Configure the `FlowCollector` resource to export enriched network flow data simultaneously to Kafka, IPFIX, or an OpenTelemetry endpoint for external consumption by tools like Splunk or Prometheus.
+
+For Kafka or IPFIX, any processor or storage that supports those inputs, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data.
+
+For OpenTelemetry, network flow data and metrics can be exported to a compatible OpenTelemetry endpoint, such as {OTELName} or Prometheus.
 
 .Prerequisites
 * Your Kafka, IPFIX, or OpenTelemetry collector endpoints are available from Network Observability `flowlogs-pipeline` pods.
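
To make this concrete, the following sketch shows what a `spec.exporters` section of the `FlowCollector` resource could look like. The endpoint addresses, topic name, and exact field names are illustrative assumptions and are not part of this commit:

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  exporters:
  - type: Kafka                 # send enriched flows to a Kafka topic
    kafka:
      address: "kafka-cluster-kafka-bootstrap.netobserv"
      topic: netobserv-flows-export
  - type: IPFIX                 # and, in parallel, to an IPFIX collector
    ipfix:
      targetHost: "ipfix-collector.ipfix.svc.cluster.local"
      targetPort: 4739
      transport: UDP
----

Listing several exporters at once corresponds to the simultaneous export described in the abstract above.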

modules/network-observability-con_filter-network-flows-at-ingestion.adoc renamed to modules/network-observability-filter-network-flows-at-ingestion.adoc

Lines changed: 2 additions & 1 deletion
@@ -6,7 +6,8 @@
 [id="network-observability-filter-network-flows-at-ingestion_{context}"]
 = Filter network flows at ingestion
 
-You can create filters to reduce the number of generated network flows. Filtering network flows can reduce the resource usage of the network observability components.
+[role="_abstract"]
+Create filters to reduce the number of generated network flows. Filtering network flows can reduce the resource usage of the network observability components.
 
 You can configure two kinds of filters:

Lines changed: 80 additions & 0 deletions
@@ -0,0 +1,80 @@
+// Module included in the following assemblies:
+
+// * networking/network_observability/configuring-operators.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="network-observability-flowcollector-example_{context}"]
+= Example of a FlowCollector resource
+
+[role="_abstract"]
+Review a comprehensive, annotated example of the `FlowCollector` custom resource that demonstrates configurations for `eBPF` sampling, conversation tracking, Loki integration, and console quick filters.
+
+[id="network-observability-flowcollector-configuring-about-sample_{context}"]
+== Sample `FlowCollector` resource
+[source, yaml]
+----
+apiVersion: flows.netobserv.io/v1beta2
+kind: FlowCollector
+metadata:
+  name: cluster
+spec:
+  namespace: netobserv
+  deploymentModel: Direct
+  agent:
+    type: eBPF <1>
+    ebpf:
+      sampling: 50 <2>
+      logLevel: info
+      privileged: false
+      resources:
+        requests:
+          memory: 50Mi
+          cpu: 100m
+        limits:
+          memory: 800Mi
+  processor: <3>
+    logLevel: info
+    resources:
+      requests:
+        memory: 100Mi
+        cpu: 100m
+      limits:
+        memory: 800Mi
+    logTypes: Flows
+    advanced:
+      conversationEndTimeout: 10s
+      conversationHeartbeatInterval: 30s
+  loki: <4>
+    mode: LokiStack <5>
+  consolePlugin:
+    register: true
+    logLevel: info
+    portNaming:
+      enable: true
+      portNames:
+        "3100": loki
+    quickFilters: <6>
+    - name: Applications
+      filter:
+        src_namespace!: 'openshift-,netobserv'
+        dst_namespace!: 'openshift-,netobserv'
+      default: true
+    - name: Infrastructure
+      filter:
+        src_namespace: 'openshift-,netobserv'
+        dst_namespace: 'openshift-,netobserv'
+    - name: Pods network
+      filter:
+        src_kind: 'Pod'
+        dst_kind: 'Pod'
+      default: true
+    - name: Services network
+      filter:
+        dst_kind: 'Service'
+----
+<1> The Agent specification, `spec.agent.type`, must be `EBPF`. eBPF is the only supported option for {product-title}.
+<2> You can set the Sampling specification, `spec.agent.ebpf.sampling`, to manage resources. By default, eBPF sampling is set to `50`, so a flow has a 1 in 50 chance of being sampled. A lower sampling interval value requires more computational, memory, and storage resources. A value of `0` or `1` means all flows are sampled. It is recommended to start with the default value and refine it empirically to determine the optimal setting for your cluster.
+<3> The Processor specification, `spec.processor`, can be set to enable conversation tracking. When enabled, conversation events are queryable in the web console. The `spec.processor.logTypes` value is `Flows` in this example; to track conversations, set it to `Conversations`, `EndedConversations`, or `All`. Storage requirements are highest for `All` and lowest for `EndedConversations`. The conversation timers are set under `spec.processor.advanced`.
+<4> The Loki specification, `spec.loki`, specifies the Loki client. The default values match the Loki install paths mentioned in the Installing the Loki Operator section. If you used another installation method for Loki, specify the appropriate client information for your install.
+<5> The `LokiStack` mode automatically sets a few configurations: `querierUrl`, `ingesterUrl`, `statusUrl`, `tenantID`, and the corresponding TLS configuration. Cluster roles and a cluster role binding are created for reading and writing logs to Loki, and `authToken` is set to `Forward`. You can set these manually by using the `Manual` mode.
+<6> The `spec.consolePlugin.quickFilters` specification defines filters that show up in the web console. The `Applications` filter keys, `src_namespace` and `dst_namespace`, are negated (`!`), so the `Applications` filter shows all traffic that _does not_ originate from, or have a destination to, any `openshift-` or `netobserv` namespaces. For more information, see Configuring quick filters below.

modules/network-observability-flowcollector-kafka-config.adoc

Lines changed: 6 additions & 3 deletions
@@ -4,9 +4,12 @@
 
 :_mod-docs-content-type: PROCEDURE
 [id="network-observability-flowcollector-kafka-config_{context}"]
-= Configuring the Flow Collector resource with Kafka
+= Configuring the FlowCollector resource with Kafka
 
-You can configure the `FlowCollector` resource to use Kafka for high-throughput and low-latency data feeds. A Kafka instance needs to be running, and a Kafka topic dedicated to {product-title} Network Observability must be created in that instance. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.7/html/using_amq_streams_on_openshift/using-the-topic-operator-str[Kafka documentation with AMQ Streams].
+[role="_abstract"]
+Configure the `FlowCollector` resource to use Kafka for high-throughput and low-latency data feeds.
+
+A Kafka instance needs to be running, and a Kafka topic dedicated to {product-title} Network Observability must be created in that instance. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.7/html/using_amq_streams_on_openshift/using-the-topic-operator-str[Kafka documentation with AMQ Streams].
 
 .Prerequisites
 * Kafka is installed. Red Hat supports Kafka with AMQ Streams Operator.

@@ -19,7 +22,7 @@ You can configure the `FlowCollector` resource to use Kafka for high-throughput
 . Select the cluster and then click the *YAML* tab.
 
 . Modify the `FlowCollector` resource for {product-title} Network Observability Operator to use Kafka, as shown in the following sample YAML:
-
++
 .Sample Kafka configuration in `FlowCollector` resource
 [source, yaml]
 ----
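# The sample's body is not shown in this truncated diff view. The lines below are an
# illustrative sketch only; the bootstrap address, topic name, and TLS setting are
# assumptions, not content from this commit.
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  deploymentModel: Kafka   # switch from the default Direct model to Kafka
  kafka:
    address: "kafka-cluster-kafka-bootstrap.netobserv"   # assumed Kafka bootstrap address
    topic: network-flows                                 # assumed dedicated topic
    tls:
      enable: false
----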

modules/network-observability-flowcollector-view.adoc

Lines changed: 3 additions & 73 deletions
@@ -2,84 +2,14 @@
 
 // * networking/network_observability/configuring-operators.adoc
 
-:_mod-docs-content-type: CONCEPT
+:_mod-docs-content-type: PROCEDURE
 [id="network-observability-flowcollector-view_{context}"]
 = View the FlowCollector resource
 
-The `FlowCollector` resource can be viewed and modified in the {product-title} web console through the integrated setup, advanced form, or by editing the YAML directly.
+[role="_abstract"]
+View and modify the `FlowCollector` resource in the {product-title} web console through the integrated setup, advanced form, or by editing the YAML directly to configure the Network Observability Operator.
 
 .Procedure
 . In the web console, navigate to *Ecosystem* -> *Installed Operators*.
 . Under the *Provided APIs* heading for the *NetObserv Operator*, select *Flow Collector*.
 . Select *cluster* then select the *YAML* tab. There, you can modify the `FlowCollector` resource to configure the Network Observability Operator.
-
-The following example shows a sample `FlowCollector` resource for {product-title} Network Observability Operator:
-[id="network-observability-flowcollector-configuring-about-sample_{context}"]
-.Sample `FlowCollector` resource
-[source, yaml]
(The remaining 66 deleted lines are the fenced sample `FlowCollector` YAML and its callouts <1> to <6>, matching the content added in the new example module earlier in this commit.)
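
For comparison, the `FlowCollector` resource can also be inspected from the command line; a minimal sketch, assuming the default instance named `cluster` used throughout this commit:

[source,terminal]
----
$ oc get flowcollector cluster -o yaml
----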

modules/network-observability-resource-recommendations.adoc

Lines changed: 3 additions & 0 deletions
@@ -5,6 +5,9 @@
 [id="network-observability-resource-recommendations_{context}"]
 = Resource management and performance considerations
 
+[role="_abstract"]
+Review the key configuration settings, including eBPF sampling, feature enablement, and resource limits, necessary to manage performance criteria and optimize resource consumption for network observability.
+
 The amount of resources required by network observability depends on the size of your cluster and your requirements for the cluster to ingest and store observability data. To manage resources and set performance criteria for your cluster, consider configuring the following settings. Configuring these settings might meet your optimal setup and observability needs.
 
 The following settings can help you manage resources and performance from the outset:

modules/network-observability-resources-table.adoc

Lines changed: 3 additions & 0 deletions
@@ -5,6 +5,9 @@
 [id="network-observability-resources-table_{context}"]
 = Resource considerations
 
+[role="_abstract"]
+Review the resource considerations table, which provides baseline examples for configuration settings, such as eBPF memory limits and LokiStack size, tailored to various cluster workload sizes.
+
 The following table outlines examples of resource considerations for clusters with certain workload sizes.
 
 [IMPORTANT]

modules/network-observability-total-resource-usage.adoc

Lines changed: 3 additions & 0 deletions
@@ -5,6 +5,9 @@
 [id="network-observability-total-resource-usage-table_{context}"]
 = Total average memory and CPU usage
 
+[role="_abstract"]
+Review the table detailing the total average CPU and memory usage for network observability components under two distinct traffic scenarios (`Test 1` and `Test 2`) at different eBPF sampling values.
+
 The following table outlines averages of total resource usage for clusters with a sampling value of `1` and `50` for two different tests: `Test 1` and `Test 2`. The tests differ in the following ways:
 
 - `Test 1` takes into account high ingress traffic volume in addition to the total number of namespaces, pods, and services in an {product-title} cluster, places load on the eBPF agent, and represents use cases with a high number of workloads for a given cluster size. For example, `Test 1` consists of 76 Namespaces, 5153 Pods, and 2305 Services with a network traffic scale of ~350 MB/s.
