87 changes: 87 additions & 0 deletions docs/how-to/create-active-active-nfs.rst
Original file line number Diff line number Diff line change
@@ -0,0 +1,87 @@
.. _howto_active_active_nfs:

How to create an active-active NFS Ganesha service
===================================================

This guide explains how to create a highly available, active-active NFS Ganesha service using the MicroCeph ingress feature. The ingress service uses keepalived and haproxy to provide a virtual IP (VIP) that load balances traffic across multiple NFS Ganesha instances. You can also use this feature to provide ingress for other services.

Prerequisites
-------------

- A running MicroCeph cluster with at least two nodes.
- The ``nfs`` feature enabled on at least two nodes, each belonging to the same NFS cluster.

1. Enable NFS Ganesha services
-------------------------------

First, enable the NFS Ganesha service on two or more nodes. Make sure to use the same ``--cluster-id`` for each instance to group them into a single NFS cluster.

On the first node:

.. code-block:: bash

   microceph enable nfs --cluster-id nfs-ha --target-node node1

On the second node:

.. code-block:: bash

   microceph enable nfs --cluster-id nfs-ha --target-node node2

This will create two NFS Ganesha instances that are part of the ``nfs-ha`` cluster.

2. Enable the ingress service
-----------------------------

Next, enable the ingress service. This will create a VIP that floats between the nodes where the ingress service is enabled and load balances traffic to the target service instances.

.. code-block:: bash

   microceph enable ingress --service-id ingress-nfs-ha \
       --vip-address 192.168.1.100 \
       --vip-interface eth0 \
       --target nfs.nfs-ha \
       --target-node node1

Repeat the same command for any other node where you want to run the ingress service (typically the same nodes as your target service).

.. code-block:: bash

   microceph enable ingress --service-id ingress-nfs-ha \
       --vip-address 192.168.1.100 \
       --vip-interface eth0 \
       --target nfs.nfs-ha \
       --target-node node2

- ``--service-id``: A unique name for this ingress service instance.
- ``--vip-address``: The virtual IP address that clients will connect to.
- ``--vip-interface``: The network interface on which the VIP will be active.
- ``--target``: The service to provide ingress for, in the format ``<service-name>.<service-id>``. In this case, we are targeting the ``nfs`` service with the ID ``nfs-ha``.
- ``--target-node``: The node on which to enable this ingress service instance.

MicroCeph will automatically generate and manage the necessary VRRP password and router ID for the keepalived configuration.

Creating multiple ingress services
----------------------------------

You can run the ``enable ingress`` command multiple times with different ``--service-id`` values to create multiple, independent ingress services. This is useful for providing VIPs for different services or different clusters of the same service.
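
The flags are the same as in step 2; only the identifiers change. For example, a second, independent ingress service for another NFS cluster might look like this (the ``nfs-ha2`` cluster ID and the address ``192.168.1.101`` are hypothetical values for illustration):

.. code-block:: bash

   microceph enable ingress --service-id ingress-nfs-ha2 \
       --vip-address 192.168.1.101 \
       --vip-interface eth0 \
       --target nfs.nfs-ha2 \
       --target-node node1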

3. Verify the setup
-------------------

You should now be able to mount the NFS share using the virtual IP address:

.. code-block:: bash

   mount -t nfs -o port=2049 192.168.1.100:/ /mnt/nfs
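
To see which node currently holds the VIP, inspect the configured interface on each node; keepalived adds the address on the active node only (``eth0`` and the address below match the earlier example):

.. code-block:: bash

   ip addr show eth0 | grep 192.168.1.100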

4. Disable the ingress service
------------------------------

To disable an ingress service, use the ``disable ingress`` command with the service ID:

.. code-block:: bash

   microceph disable ingress --service-id ingress-nfs-ha --target-node node1
   microceph disable ingress --service-id ingress-nfs-ha --target-node node2

This removes the configuration for this specific ingress service and reloads the remaining ingress configuration. If no other ingress services are configured, the ``keepalived`` and ``haproxy`` daemons are stopped.
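
As noted in step 2, MicroCeph manages the VRRP router ID itself. A VRRP virtual router ID must be a value from 1 to 255 that all keepalived peers of one ingress service share. A minimal sketch of how such an ID could be derived deterministically from the service ID (illustrative only; `routerID` is a hypothetical helper, not MicroCeph's actual algorithm):

```go
package main

import (
    "fmt"
    "hash/fnv"
)

// routerID derives a stable VRRP virtual router ID in the valid 1-255 range
// from an ingress service ID. Every node hashing the same service ID obtains
// the same value, which is what peers in one VRRP group require.
func routerID(serviceID string) uint8 {
    h := fnv.New32a()
    h.Write([]byte(serviceID))
    return uint8(h.Sum32()%255) + 1 // 0 is not a valid VRRP router ID
}

func main() {
    fmt.Println(routerID("ingress-nfs-ha"))
}
```

Because the derivation is a pure function of the service ID, no coordination between nodes is needed to agree on the ID.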
1 change: 1 addition & 0 deletions docs/how-to/index.rst
@@ -30,6 +30,7 @@ configuration of metrics, alerts and other service instances.
enable-metrics
enable-alerts
enable-service-instances
create-active-active-nfs

Interacting with your cluster
-----------------------------
1 change: 1 addition & 0 deletions microceph/api/servers.go
@@ -24,6 +24,7 @@ var Servers = map[string]rest.Server{
    mgrServiceCmd,
    monServiceCmd,
    nfsServiceCmd,
    ingressServiceCmd,
    poolsOpCmd,
    rgwServiceCmd,
    rbdMirroServiceCmd,
30 changes: 30 additions & 0 deletions microceph/api/services.go
@@ -75,6 +75,12 @@ var rgwServiceCmd = rest.Endpoint{
    Put:    rest.EndpointAction{Handler: cmdEnableServicePut, ProxyTarget: true},
    Delete: rest.EndpointAction{Handler: cmdRGWServiceDelete, ProxyTarget: true},
}

var ingressServiceCmd = rest.Endpoint{
    Path:   "services/ingress",
    Put:    rest.EndpointAction{Handler: cmdEnableServicePut, ProxyTarget: true},
    Delete: rest.EndpointAction{Handler: cmdIngressDeleteService, ProxyTarget: true},
}
var rbdMirroServiceCmd = rest.Endpoint{
    Path: "services/rbd-mirror",
    Put:  rest.EndpointAction{Handler: cmdEnableServicePut, ProxyTarget: true},
@@ -161,6 +167,30 @@ func cmdRestartServicePost(s state.State, r *http.Request) response.Response {
    return response.EmptySyncResponse
}

// cmdIngressDeleteService handles the ingress service deletion.
func cmdIngressDeleteService(s state.State, r *http.Request) response.Response {
    var svc types.IngressService

    err := json.NewDecoder(r.Body).Decode(&svc)
    if err != nil {
        logger.Errorf("failed decoding disable service request: %v", err)
        return response.InternalError(err)
    }

    if !types.IngressServiceIDRegex.MatchString(svc.ClusterID) {
        err := fmt.Errorf("expected service_id to be valid (regex: '%s')", types.IngressServiceIDRegex.String())
        return response.SmartError(err)
    }

    err = ceph.DisableIngress(r.Context(), interfaces.CephState{State: s}, svc.ClusterID)
    if err != nil {
        logger.Errorf("Failed disabling ingress: %v", err)
        return response.SmartError(err)
    }

    return response.EmptySyncResponse
}

// cmdDeleteService handles service deletion.
func cmdDeleteService(s state.State, r *http.Request) response.Response {
    which := path.Base(r.URL.Path)
40 changes: 40 additions & 0 deletions microceph/api/types/service_ingress.go
@@ -0,0 +1,40 @@
package types

import (
    "fmt"
    "net"
    "regexp"
    "strings"
)

// IngressServiceIDRegex is the regular expression that a valid Ingress ServiceID must match.
var IngressServiceIDRegex = regexp.MustCompile(`^[a-zA-Z0-9.\-_]+$`)

// IngressServicePlacement represents the configuration for an ingress service.
type IngressServicePlacement struct {
    ServiceID    string `json:"service_id"`
    VIPAddress   string `json:"vip_address"`
    VIPInterface string `json:"vip_interface"`
    Target       string `json:"target"`
}

// Validate checks if the IngressServicePlacement has valid fields.
func (isp *IngressServicePlacement) Validate() error {
    if !IngressServiceIDRegex.MatchString(isp.ServiceID) {
        return fmt.Errorf("expected service_id to be valid (regex: '%s')", IngressServiceIDRegex.String())
    }
    if net.ParseIP(isp.VIPAddress) == nil {
        return fmt.Errorf("vip_address '%s' could not be parsed", isp.VIPAddress)
    }
    if isp.VIPInterface == "" {
        return fmt.Errorf("vip_interface must be provided")
    }
    if isp.Target == "" {
        return fmt.Errorf("target must be provided")
    }
    parts := strings.Split(isp.Target, ".")
    if len(parts) != 2 {
        return fmt.Errorf("target must be in the format <service>.<id>")
    }
    return nil
}
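
The ``Validate`` method above enforces three independent checks: the service-ID regex, a parseable VIP address, and a two-part ``<service>.<id>`` target. A standalone sketch that re-declares the same checks so they can be exercised outside MicroCeph (the helper names are local to this example, not exported MicroCeph symbols):

```go
package main

import (
    "fmt"
    "net"
    "regexp"
    "strings"
)

// Same pattern as types.IngressServiceIDRegex above, re-declared locally.
var ingressServiceIDRegex = regexp.MustCompile(`^[a-zA-Z0-9.\-_]+$`)

// validVIP mirrors the net.ParseIP check: any parseable IPv4/IPv6 literal passes.
func validVIP(addr string) bool { return net.ParseIP(addr) != nil }

// parseTarget mirrors the strings.Split check: a target must be "<service>.<id>".
func parseTarget(target string) (service, id string, err error) {
    parts := strings.Split(target, ".")
    if len(parts) != 2 {
        return "", "", fmt.Errorf("target must be in the format <service>.<id>")
    }
    return parts[0], parts[1], nil
}

func main() {
    fmt.Println(ingressServiceIDRegex.MatchString("ingress-nfs-ha")) // accepted
    fmt.Println(validVIP("192.168.1.100"))                           // accepted
    svc, id, _ := parseTarget("nfs.nfs-ha")
    fmt.Println(svc, id)
}
```

Note that because the target check splits on every dot, a service ID containing a dot would be rejected here even though the service-ID regex itself allows dots.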
5 changes: 5 additions & 0 deletions microceph/api/types/services.go
@@ -31,6 +31,11 @@ type NFSService struct
    ClusterID string `json:"cluster_id" yaml:"cluster_id"`
}

// IngressService represents an ingress service that is running on a given node.
type IngressService struct {
    ClusterID string `json:"cluster_id"`
}

// NFSClusterIDRegex is a regex for acceptable ClusterIDs.
var NFSClusterIDRegex = regexp.MustCompile(`^[\w][\w.-]{1,61}[\w]$`)
