diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index dc5ed4c4dede..61a28bae5ff6 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -1762,6 +1762,8 @@ Topics: File: verifying-connectivity-endpoint - Name: Changing the cluster network MTU File: changing-cluster-network-mtu + - Name: Network bonding considerations + File: network-bonding-considerations - Name: Using Stream Control Transmission Protocol File: using-sctp - Name: Associating secondary interfaces metrics to network attachments diff --git a/installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc b/installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc index bed610170e75..40ae2e3cf43b 100644 --- a/installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc +++ b/installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc @@ -38,9 +38,6 @@ include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset // Scale each machine set to compute nodes include::modules/creating-scaling-machine-sets-compute-nodes-networking.adoc[leveloffset=+2] -// Enabling OVS balance-slb mode for your cluster -include::modules/enabling-OVS-balance-slb-mode.adoc[leveloffset=+1] - // Establishing communication between subnets include::modules/ipi-install-establishing-communication-between-subnets.adoc[leveloffset=+1] diff --git a/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.adoc b/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.adoc index b99bb78a4f09..906b081223b7 100644 --- a/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.adoc +++ b/installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.adoc @@ -84,9 +84,6 @@ include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset // Scale each machine set to compute nodes 
include::modules/creating-scaling-machine-sets-compute-nodes-networking.adoc[leveloffset=+2] -// Enabling OVS balance-slb mode for your cluster -include::modules/enabling-OVS-balance-slb-mode.adoc[leveloffset=+1] - include::modules/installation-infrastructure-user-infra.adoc[leveloffset=+1] [role="_additional-resources"] diff --git a/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.adoc b/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.adoc index 7cfd47898394..a3b35bd10b61 100644 --- a/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.adoc +++ b/installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.adoc @@ -99,9 +99,6 @@ include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset // Scale each machine set to compute nodes include::modules/creating-scaling-machine-sets-compute-nodes-networking.adoc[leveloffset=+2] -// Enabling OVS balance-slb mode for your cluster -include::modules/enabling-OVS-balance-slb-mode.adoc[leveloffset=+1] - include::modules/installation-infrastructure-user-infra.adoc[leveloffset=+1] [role="_additional-resources"] diff --git a/modules/configuring-localnet-switched-topology.adoc b/modules/configuring-localnet-switched-topology.adoc index 0b2654710689..face0b20f8d7 100644 --- a/modules/configuring-localnet-switched-topology.adoc +++ b/modules/configuring-localnet-switched-topology.adoc @@ -9,7 +9,7 @@ [role="_abstract"] The switched `localnet` topology interconnects the workloads created as Network Attachment Definitions (NADs) through a cluster-wide logical switch to a physical network. -You must map a secondary network to the OVS bridge to use it as an OVN-Kubernetes secondary network. Bridge mappings allow network traffic to reach the physical network. A bridge mapping associates a physical network name, also known as an interface label, to a bridge created with Open vSwitch (OVS). 
+You must map a secondary network to the OVS bridge to use it as an OVN-Kubernetes secondary network. Bridge mappings allow network traffic to reach the physical network. A bridge mapping associates a physical network name, also known as an interface label, to a bridge created with Open vSwitch (OVS). You can create a `NodeNetworkConfigurationPolicy` (NNCP) object, part of the `nmstate.io/v1` API group, to declaratively create the mapping. This API is provided by the NMState Operator. By using this API, you can apply the bridge mapping to nodes that match your specified `nodeSelector` expression, such as `node-role.kubernetes.io/worker: ''`. With this declarative approach, the NMState Operator applies secondary network configuration to all nodes specified by the node selector automatically and transparently. @@ -18,6 +18,11 @@ When attaching a secondary network, you can either use the existing `br-ex` brid - If your nodes include only a single network interface, you must use the existing bridge. This network interface is owned and managed by OVN-Kubernetes and you must not remove it from the `br-ex` bridge or alter the interface configuration. If you remove or alter the network interface, your cluster network stops working correctly. - If your nodes include several network interfaces, you can attach a different network interface to a new bridge, and use that for your secondary network. This approach provides for traffic isolation from your primary cluster network. +[NOTE] +==== +After installation, you cannot make configuration changes to the `br-ex` bridge or its underlying interfaces in the `NodeNetworkConfigurationPolicy` (NNCP) resource. As a workaround, use a secondary network interface connected to your host or switch. +==== + The `localnet1` network is mapped to the `br-ex` bridge in the following sharing-a-bridge example: [source,yaml] @@ -35,17 +40,16 @@ spec: - localnet: localnet1 bridge: br-ex state: present -# ...
---- - ++ where: - -`name`:: The name for the configuration object. ++ +`metadata.name`:: The name for the configuration object. `node-role.kubernetes.io/worker`:: A node selector that specifies the nodes to apply the node network configuration policy to. `localnet`:: The name for the secondary network from which traffic is forwarded to the OVS bridge. This secondary network must match the name of the `spec.config.name` field of the `NetworkAttachmentDefinition` CRD that defines the OVN-Kubernetes secondary network. `bridge`:: The name of the OVS bridge on the node. This value is required only if you specify `state: present`. `state`:: The state for the mapping. Must be either `present` to add the bridge or `absent` to remove the bridge. The default value is `present`. - ++ The following JSON example configures a localnet secondary network that is named `localnet1`. Note that the value for the `mtu` parameter must match the MTU value that was set for the secondary network interface that is mapped to the `br-ex` bridge interface. [source,json] @@ -96,18 +100,18 @@ spec: bridge: ovs-br1 state: present ---- - ++ where: - -`name`:: Specifies the name of the configuration object. ++ +`metadata.name`:: Specifies the name of the configuration object. `node-role.kubernetes.io/worker`:: Specifies a node selector that identifies the nodes to which the node network configuration policy applies. `interfaces.name`:: Specifies a new OVS bridge that operates separately from the default bridge used by OVN-Kubernetes for cluster traffic. `mcast-snooping-enable`:: Specifies whether to enable multicast snooping. When enabled, multicast snooping prevents network devices from flooding multicast traffic to all network members. By default, an OVS bridge does not enable multicast snooping. The default value is `false`. -``port.name`:: Specifies the network device on the host system to associate with the new OVS bridge. 
-`bridge-mappings.localnet`:: Specifies the name of the secondary network that forwards traffic to the OVS bridge. This name must match the value of the `spec.config.name` field in the `NetworkAttachmentDefinition` CRD that defines the OVN-Kubernetes secondary network. -`bridge-mappings.bridge`:: Specifies the name of the OVS bridge on the node. The value is required only when `state: present` is set. -`bridge-mappings.state`:: Specifies the state of the mapping. Valid values are `present` to add the bridge or `absent` to remove the bridge. The default value is `present`. - +`port.name`:: Specifies the network device on the host system to associate with the new OVS bridge. +`localnet`:: Specifies the name of the secondary network that forwards traffic to the OVS bridge. This name must match the value of the `spec.config.name` field in the `NetworkAttachmentDefinition` CRD that defines the OVN-Kubernetes secondary network. +`bridge`:: Specifies the name of the OVS bridge on the node. The value is required only when `state: present` is set. +`state`:: Specifies the state of the mapping. Valid values are `present` to add the bridge or `absent` to remove the bridge. The default value is `present`. ++ The following JSON example configures a localnet secondary network that is named `localnet2`. Note that the value for the `mtu` parameter must match the MTU value that was set for the `eth1` secondary network interface. 
[source,json] @@ -125,4 +129,3 @@ The following JSON example configures a localnet secondary network that is named "excludeSubnets": "10.100.200.0/29" } ---- - diff --git a/modules/configuring-ovnk-use-second-ovs-bridge.adoc b/modules/configuring-ovnk-use-second-ovs-bridge.adoc index 09249a777e53..fff348e4ef4a 100644 --- a/modules/configuring-ovnk-use-second-ovs-bridge.adoc +++ b/modules/configuring-ovnk-use-second-ovs-bridge.adoc @@ -6,6 +6,7 @@ [id="configuring-ovnk-use-second-ovs-bridge_{context}"] = Configuring OVN-Kubernetes to use a secondary OVS bridge +[role="_abstract"] You can create an additional or _secondary_ Open vSwitch (OVS) bridge, `br-ex1`, that OVN-Kubernetes manages and the Multiple External Gateways (MEG) implementation uses for defining external gateways for an {product-title} node. You can define a MEG in an `AdminPolicyBasedExternalRoute` custom resource (CR). The MEG implementation provides a pod with access to multiple gateways, equal-cost multipath (ECMP) routes, and the Bidirectional Forwarding Detection (BFD) implementation. Consider a use case for pods impacted by the Multiple External Gateways (MEG) feature and you want to egress traffic to a different interface, for example `br-ex1`, on a node. Egress traffic for pods not impacted by MEG get routed to the default OVS `br-ex` bridge. @@ -17,6 +18,11 @@ Currently, MEG is unsupported for use with other egress features, such as egress You must define the additional bridge in an interface definition of a machine configuration manifest file. The Machine Config Operator uses the manifest to create a new file at `/etc/ovnk/extra_bridge` on the host. The new file includes the name of the network interface that the additional OVS bridge configures for a node. +[IMPORTANT] +==== +Do not use the `nmstate` API to make configuration changes to the secondary interface that is defined in the `/etc/ovnk/extra_bridge` file.
The `configure-ovs.sh` configuration script creates and manages OVS bridge interfaces, so any disruptive changes to these interfaces by the `nmstate` API can lead to network configuration instability. +==== + After you create and edit the manifest file, the Machine Config Operator completes tasks in the following order: . Drains nodes in singular order based on the selected machine configuration pool. @@ -39,9 +45,9 @@ For more information about useful situations for the additional `br-ex1` bridge + [IMPORTANT] ==== -Do not use the Kubernetes NMState Operator to define or a `NodeNetworkConfigurationPolicy` (NNCP) manifest file to define the additional interface. +Do not use the Kubernetes NMState Operator or a `NodeNetworkConfigurationPolicy` (NNCP) manifest file to define the additional interface. Ensure that the additional interface or sub-interfaces when defining a `bond` interface are not used by an existing `br-ex` OVN Kubernetes network deployment. -Also ensure that the additional interface or sub-interfaces when defining a `bond` interface are not used by an existing `br-ex` OVN Kubernetes network deployment. +After installation, you cannot make configuration changes to the `br-ex` bridge or its underlying interfaces. As a workaround, use a secondary network interface connected to your host or switch. ==== + .. Create the following interface definition files. These files get added to a machine configuration manifest file so that host nodes can access the definition files.
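The machine configuration manifest that the preceding module describes can be sketched as follows. This is a minimal sketch, not a definitive implementation: the `worker` role, the `99-worker-extra-bridge` name, and the `eth1` interface name are assumptions that you must adapt to your environment.

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker # assumed machine configuration pool
  name: 99-worker-extra-bridge
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/ovnk/extra_bridge
        mode: 0420
        overwrite: true
        contents:
          # The file contents are only the name of the network interface that
          # the additional OVS bridge uses; eth1 is an assumed interface name.
          source: data:text/plain;charset=utf-8,eth1
----

After the Machine Config Operator applies a manifest like this one, the `configure-ovs.sh` script reads `/etc/ovnk/extra_bridge` and creates the `br-ex1` bridge on the named interface.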
diff --git a/modules/enable-active-backup-mode.adoc b/modules/enable-active-backup-mode.adoc new file mode 100644 index 000000000000..ad162934176d --- /dev/null +++ b/modules/enable-active-backup-mode.adoc @@ -0,0 +1,16 @@ +// Module included in the following assemblies: +// +// * networking/advanced_networking/network-bonding-considerations.adoc + +:_mod-docs-content-type: PROCEDURE +[id="enable-active-backup-mode_{context}"] += Enabling active-backup mode for your cluster + +[role="_abstract"] +The `active-backup` mode provides fault tolerance for network connections by switching to a backup link when the primary link fails. The mode defines the following port roles for your cluster: + +* An active port, where a single physical interface sends and receives traffic at any given time. +* Standby ports, where all other ports act as backup links and continuously monitor their link status. + +During a failover process, if an active port or its link fails, the bonding logic switches all network traffic to a standby port. This standby port becomes the new active port. For failover to work, all ports in a bond must share the same Media Access Control (MAC) address. + diff --git a/modules/installation-network-user-infra.adoc b/modules/installation-network-user-infra.adoc index 152c0d5c81cd..e24e47858525 --- a/modules/installation-network-user-infra.adoc +++ b/modules/installation-network-user-infra.adoc @@ -74,6 +74,7 @@ endif::[] [id="installation-network-user-infra_{context}"] = Networking requirements for user-provisioned infrastructure +[role="_abstract"] All the {op-system-first} machines require networking to be configured in `initramfs` during boot to fetch their Ignition config files. @@ -94,17 +95,13 @@ During the initial boot, the machines require an IP address configuration that i [NOTE] ==== -* It is recommended to use a DHCP server for long-term management of the cluster machines.
Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. +* Consider using a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. * If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at {op-system} install time. These can be passed as boot arguments if you are installing from an ISO image. See the _Installing {op-system} and starting the {product-title} bootstrap process_ section for more information about static IP provisioning and advanced networking options. ==== endif::ibm-z[] -The Kubernetes API server must be able to resolve the node names of the cluster -machines. If the API servers and worker nodes are in different zones, you can -configure a default DNS search zone to allow the API server to resolve the -node names. Another supported approach is to always refer to hosts by their -fully-qualified domain names in both the node objects and all DNS requests. +The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. endif::azure,gcp[] ifndef::ibm-z,azure[] @@ -119,9 +116,7 @@ endif::ibm-z,azure[] [id="installation-network-connectivity-user-infra_{context}"] == Network connectivity requirements -You must configure the network connectivity between machines to allow {product-title} cluster -components to communicate. 
Each machine must be able to resolve the hostnames -of all other machines in the cluster. +You must configure the network connectivity between machines to allow {product-title} cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. diff --git a/modules/installation-user-infra-machines-static-network.adoc b/modules/installation-user-infra-machines-static-network.adoc index e267b7160805..fabed2a1494d 100644 --- a/modules/installation-user-infra-machines-static-network.adoc +++ b/modules/installation-user-infra-machines-static-network.adoc @@ -45,6 +45,7 @@ endif::[] [id="installation-user-infra-machines-static-network_{context}"] = Advanced {op-system} installation reference +[role="_abstract"] This section illustrates the networking configuration and other advanced options that allow you to modify the {op-system-first} manual installation process. The following tables describe the kernel arguments and command-line options you can use with the {op-system} live installer and the `coreos-installer` command. [id="installation-user-infra-machines-routing-bonding_{context}"] @@ -172,7 +173,6 @@ ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none ---- - === Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: @@ -183,7 +183,6 @@ ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none ---- - === Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the `vlan=` parameter. @@ -220,7 +219,9 @@ ifndef::ibm-z-kvm[] === Bonding multiple network interfaces to a single interface -Optional: You can bond multiple network interfaces to a single interface by using the `bond=` option. 
Refer to the following examples: +As an optional configuration, you can bond multiple network interfaces to a single interface by using the `bond=` option. To apply this configuration to your cluster, complete the following steps on each node in your cluster. + +.Procedure * The syntax for configuring a bonded interface is: `bond=[:][:options]` + @@ -229,33 +230,31 @@ and _options_ is a comma-separated list of bonding options. Enter `modinfo bondi * When you create a bonded interface using `bond=`, you must specify how the IP address is assigned and other information for the bonded interface. - ++ ** To configure the bonded interface to use DHCP, set the bond's IP address to `dhcp`. For example: + [source,terminal] ---- -bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp ---- - ++ ** To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: ifndef::ibm-z[] + [source,terminal] ---- -bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none ---- endif::ibm-z[] ifdef::ibm-z[] - ++ [source,terminal] ---- -bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 -ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none +bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none::AA:BB:CC:DD:EE:FF ip=em1:none::AA:BB:CC:DD:EE:FF +ip=em2:none::AA:BB:CC:DD:EE:FF ---- -Always set the `fail_over_mac=1` option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. +{ibm-z-title} supports the value `1` for the `fail_over_mac` parameter, so always set the `fail_over_mac=1` option in active-backup mode to avoid problems when shared OSA/RoCE cards are used.
endif::ibm-z[] ifdef::ibm-z[] @@ -287,9 +286,9 @@ ifndef::ibm-z[] === Bonding multiple SR-IOV network interfaces to a dual port NIC interface -Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the `bond=` option. +As an optional configuration, you can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the `bond=` option. -On each node, you must perform the following tasks: +.Procedure ifndef::installing-ibm-power[] . Create the SR-IOV virtual functions (VFs) following the guidance in link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/managing-virtual-devices_configuring-and-managing-virtualization#managing-sr-iov-devices_managing-virtual-devices[Managing SR-IOV devices]. Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. @@ -308,12 +307,14 @@ The following examples illustrate the syntax you must use: * When you create a bonded interface using `bond=`, you must specify how the IP address is assigned and other information for the bonded interface. -** To configure the bonded interface to use DHCP, set the bond's IP address to `dhcp`. For example: +** To configure the bonded interface to use DHCP, set the `ip` parameter to `dhcp` as demonstrated in the following example: + [source,terminal] ---- bond=bond0:eno1f0,eno2f0:mode=active-backup -ip=bond0:dhcp +ip=bond0:dhcp::AA:BB:CC:DD:EE:FF +ip=eno1f0:none::AA:BB:CC:DD:EE:FF +ip=eno2f0:none::AA:BB:CC:DD:EE:FF ---- ** To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. 
For example: diff --git a/modules/nw-kernel-bonding.adoc b/modules/nw-kernel-bonding.adoc new file mode 100644 index 000000000000..695533b0db34 --- /dev/null +++ b/modules/nw-kernel-bonding.adoc @@ -0,0 +1,25 @@ +// Module included in the following assemblies: +// +// * networking/advanced_networking/network-bonding-considerations.adoc + +:_mod-docs-content-type: CONCEPT +[id="nw-kernel-bonding_{context}"] += Kernel bonding + +[role="_abstract"] +Kernel bonding is a built-in Linux kernel feature that aggregates multiple Ethernet interfaces into a single logical interface. Kernel bonding is the default mode if no bond interface depends on an OVS bond. This bonding type does not provide the same level of customization as supported OVS bonding. + +In `kernel-bonding` mode, the bond interfaces exist outside of the bridge interface, which means that they are not in the data path. Network traffic in this mode is not sent or received on the bond interface port but instead requires additional bridging capabilities for MAC address assignment at the kernel level. + +If you enabled `kernel-bonding` mode on network interface controllers (NICs) for your nodes, you must specify a Media Access Control (MAC) address failover. This configuration prevents node communication issues with the bond interfaces, such as `eno1f0` and `eno2f0`. + +Red Hat supports only the following value for the `fail_over_mac` parameter: + +* `0`: Specifies the `none` value. This is the default value, and it disables MAC address failover so that all interfaces receive the same MAC address as the bond interface. + +Red Hat does not support the following values for the `fail_over_mac` parameter: + +* `1`: Specifies the `active` value, which sets the MAC address of the bond interface to always match the MAC address of the currently active interface. If the active interface changes during a failover, the MAC address of the bond interface changes to match the new active interface. + +* `2`: Specifies the `follow` value so that during a failover, the newly active interface takes on the MAC address of the bond interface and the formerly active interface receives the MAC address of the newly active interface. + diff --git a/modules/nw-ovs-bonding.adoc b/modules/nw-ovs-bonding.adoc new file mode 100644 index 000000000000..f42df45f1043 --- /dev/null +++ b/modules/nw-ovs-bonding.adoc @@ -0,0 +1,21 @@ +// Module included in the following assemblies: +// +// * networking/advanced_networking/network-bonding-considerations.adoc + +:_mod-docs-content-type: CONCEPT +[id="nw-ovs-bonding_{context}"] += Open vSwitch (OVS) bonding + +[role="_abstract"] +With an OVS bonding configuration, you create a single, logical interface by connecting each physical network interface controller (NIC) as a port to a specific bond. This single bond then handles all network traffic, effectively replacing the function of individual interfaces. + +Consider the following architectural layout for OVS bridges that interact with OVS interfaces: + +* A network interface uses a bridge Media Access Control (MAC) address for managing protocol-level traffic and other administrative tasks, such as IP address assignment. +* The physical MAC addresses of physical interfaces do not handle traffic. +* OVS handles all MAC address management at the OVS bridge level. + +This layout simplifies bond interface management because bonds act as data paths and centralized MAC address management happens at the OVS bridge level. + +For OVS bonding, you can select either `active-backup` mode or `balance-slb` mode. A bonding mode specifies the policy for how bond ports are used during network transmission.
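The OVS bonding modes that the preceding module describes can be expressed declaratively with the NMState Operator. The following is a minimal sketch, assuming a secondary OVS bridge named `ovs-br1`, a bond named `bond1`, and NICs `eno1` and `eno2`; never apply such a policy to the `br-ex` bridge or its underlying interfaces.

[source,yaml]
----
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-bond-example # assumed policy name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
    - name: ovs-br1 # assumed secondary bridge; must not be br-ex
      type: ovs-bridge
      state: up
      bridge:
        port:
        - name: bond1
          link-aggregation:
            mode: balance-slb # or active-backup
            port:
            - name: eno1
            - name: eno2
----

Switching the `mode` value between `balance-slb` and `active-backup` selects the bonding policy without changing the rest of the bridge definition.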
+ diff --git a/modules/virt-example-nmstate-multiple-interfaces.adoc b/modules/virt-example-nmstate-multiple-interfaces.adoc index a208eca665a3..282495f84b25 --- a/modules/virt-example-nmstate-multiple-interfaces.adoc +++ b/modules/virt-example-nmstate-multiple-interfaces.adoc @@ -9,6 +9,11 @@ [role="_abstract"] You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest. +[IMPORTANT] +==== +If multiple interfaces use the same default configuration, a single NetworkManager connection profile activates on multiple interfaces simultaneously, and this causes connections to have the same universally unique identifier (UUID). To avoid this issue, ensure that each interface has a specific configuration that is different from the default configuration. +==== + The following example YAML file creates a bond that is named `bond10` across two NICs and VLAN that is named `bond10.103` that connects to the bond. [source,yaml] diff --git a/networking/advanced_networking/network-bonding-considerations.adoc b/networking/advanced_networking/network-bonding-considerations.adoc new file mode 100644 index 000000000000..937653cf0ee0 --- /dev/null +++ b/networking/advanced_networking/network-bonding-considerations.adoc @@ -0,0 +1,25 @@ +:_mod-docs-content-type: ASSEMBLY +[id="network-bonding-considerations"] += Network bonding considerations +include::_attributes/common-attributes.adoc[] +:context: network-bonding-considerations + +toc::[] + +[role="_abstract"] +Network bonding, also known as _link aggregation_, combines multiple network interfaces into a single logical interface. Network bonding uses different modes to determine how network traffic is distributed across the bonded interfaces. Each mode provides fault tolerance, and some modes provide load balancing capabilities for your network.
Red Hat supports Open vSwitch (OVS) bonding and kernel bonding. + +// Open vSwitch (OVS) bonding +include::modules/nw-ovs-bonding.adoc[leveloffset=+1] + +// Enabling active-backup mode for your cluster +include::modules/enable-active-backup-mode.adoc[leveloffset=+2] + +// Enabling OVS balance-slb mode for your cluster +include::modules/enabling-OVS-balance-slb-mode.adoc[leveloffset=+2] + +// Kernel bonding +include::modules/nw-kernel-bonding.adoc[leveloffset=+1] + + + diff --git a/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc b/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc index 23a0d41a6fe9..a5d9397e691f --- a/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc +++ b/networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc @@ -7,6 +7,7 @@ include::_attributes/common-attributes.adoc[] toc::[] +[role="_abstract"] The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the {product-title} cluster's nodes with NMState. The Kubernetes NMState Operator provides users with functionality to configure various network interface types, DNS, and routing on cluster nodes. Additionally, the daemons on the cluster nodes periodically report on the state of each node's network interfaces to the API server. [IMPORTANT] ==== @@ -38,6 +39,11 @@ Node networking is monitored and updated by the following objects: `NodeNetworkConfigurationPolicy`:: Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a `NodeNetworkConfigurationPolicy` CR to the cluster. `NodeNetworkConfigurationEnactment`:: Reports the network policies enacted upon each node. +[NOTE] +==== +After installation, do not make configuration changes to the `br-ex` bridge or its underlying interfaces.
+==== + [id="installing-the-kubernetes-nmstate-operator-cli"] == Installing the Kubernetes NMState Operator
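The `NodeNetworkConfigurationPolicy` object described in this section can be sketched as follows for a kernel bond in active-backup mode. The policy name, node selector, and NIC names (`eno1`, `eno2`) are assumptions for illustration, and the bond must not include interfaces that the `br-ex` bridge uses.

[source,yaml]
----
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond0-worker-policy # assumed policy name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
    - name: bond0
      type: bond
      state: up
      ipv4:
        dhcp: true
        enabled: true
      link-aggregation:
        mode: active-backup
        options:
          miimon: '140' # link monitoring interval in milliseconds
        port:
        - eno1
        - eno2
----

Applying a policy like this one produces a `NodeNetworkConfigurationEnactment` report for each matching node, which you can inspect to confirm that the bond was configured.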