Merged · Changes from 11 commits
5 changes: 4 additions & 1 deletion .gitignore
@@ -20,4 +20,7 @@ mocks/controller-runtime/client/gomock_reflect_*
pkg/**/prog.*

# Image build tarballed bundles
*.tgz
*.tgz

# Kiro
.kiro
4 changes: 2 additions & 2 deletions Makefile
@@ -120,12 +120,12 @@ e2e-test: ## Run e2e tests against cluster pointed to by ~/.kube/config
	cd test && go test \
		-p 1 \
		-count 1 \
		-timeout 90m \
		-timeout 120m \
		-v \
		./suites/integration/... \
		--ginkgo.focus="${FOCUS}" \
		--ginkgo.skip="${SKIP}" \
		--ginkgo.timeout=90m \
		--ginkgo.timeout=120m \
		--ginkgo.v

.SILENT:
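The `e2e-test` target forwards `FOCUS` and `SKIP` from the environment into ginkgo's `--ginkgo.focus`/`--ginkgo.skip` flags, so a focused run under the new 120m budget looks like the sketch below (the focus string itself is hypothetical):

```bash
# Run only suites whose descriptions match the focus regex
make e2e-test FOCUS="TargetGroupPolicy"
```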
46 changes: 46 additions & 0 deletions docs/api-types/service-export.md
@@ -12,6 +12,10 @@ for example, using target groups in the VPC Lattice setup outside Kubernetes.
Note that ServiceExport is not an implementation of the Kubernetes [Multicluster Service APIs](https://multicluster.sigs.k8s.io/concepts/multicluster-services-api/);
instead, AWS Gateway API Controller uses its own version of the resource for the purpose of Gateway API integration.

### TargetGroupPolicy Integration

ServiceExport resources can be targeted by [`TargetGroupPolicy`](target-group-policy.md) to configure protocol, protocol version, and health check settings. When a TargetGroupPolicy is applied to a ServiceExport, the configuration is automatically propagated to all target groups across all clusters that participate in the multi-cluster service mesh, ensuring consistent behavior regardless of which cluster contains the route resource.

### Annotations (Legacy Method)

* `application-networking.k8s.aws/port`
Expand Down Expand Up @@ -69,3 +73,45 @@ spec:
This configuration will:
1. Export port 80 to be used with HTTP routes
2. Export port 8081 to be used with gRPC routes
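For reference, a complete ServiceExport matching that description would look roughly like the sketch below. The original example is elided by the diff hunk above, so the name here is hypothetical, and the `GRPC` route type value is an assumption based on the gRPC wording (only `HTTP` is confirmed elsewhere on this page):

```yaml
apiVersion: application-networking.k8s.aws/v1alpha1
kind: ServiceExport
metadata:
  name: example-service   # hypothetical name; the real example is elided above
spec:
  exportedPorts:
    - port: 80
      routeType: HTTP
    - port: 8081
      routeType: GRPC     # assumed value for gRPC routes
```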

### ServiceExport with TargetGroupPolicy

The following example shows how to combine ServiceExport with TargetGroupPolicy for consistent multi-cluster health check configuration:

```yaml
# ServiceExport
apiVersion: application-networking.k8s.aws/v1alpha1
kind: ServiceExport
metadata:
  name: inventory-service
spec:
  exportedPorts:
    - port: 8080
      routeType: HTTP
---
# TargetGroupPolicy for the ServiceExport
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  name: inventory-health-policy
spec:
  targetRef:
    group: "application-networking.k8s.aws"
    kind: ServiceExport
    name: inventory-service
  protocol: HTTP
  protocolVersion: HTTP2
  healthCheck:
    enabled: true
    intervalSeconds: 10
    timeoutSeconds: 5
    healthyThresholdCount: 2
    unhealthyThresholdCount: 3
    path: "/health"
    port: 8080
    protocol: HTTP
    protocolVersion: HTTP1
    statusMatch: "200-299"
```

This configuration ensures that all target groups created for the `inventory-service` across all clusters will use the same health check configuration, providing consistent health monitoring in multi-cluster deployments.
46 changes: 44 additions & 2 deletions docs/api-types/target-group-policy.md
@@ -12,6 +12,10 @@ When attaching a policy to a resource, the following restrictions apply:
- A policy can be attached to `ServiceExport`.
- The attached resource should exist in the same namespace as the policy resource.

### Multi-Cluster Health Check Configuration

In multi-cluster deployments, TargetGroupPolicy health check configurations are automatically propagated across all clusters that participate in the service mesh. When a TargetGroupPolicy is applied to a ServiceExport, all target groups created for that service across different clusters will use the same health check configuration, ensuring consistent health monitoring regardless of which cluster contains the route resource.

The policy will not take effect if:
- The resource does not exist
- The resource is not referenced by any route
@@ -32,12 +36,15 @@ However, the policy will not take effect unless the target is valid.
of VPC Lattice TargetGroup resource, except for health check updates.
- Attaching TargetGroupPolicy to an existing ServiceExport will result in a replacement of VPC Lattice TargetGroup resource, except for health check updates.
- Removing the TargetGroupPolicy from a resource will roll back the protocol configuration to the default setting (HTTP1/HTTP plaintext).
- In multi-cluster deployments, TargetGroupPolicy changes will automatically propagate to all clusters participating in the service mesh, ensuring consistent configuration across the deployment.

## Example Configurations

## Example Configuration
### Single Cluster Configuration

This will enable HTTPS traffic between the gateway and Kubernetes service, with customized health check configuration.

```
```yaml
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
@@ -61,3 +68,38 @@ spec:
    protocolVersion: HTTP1
    statusMatch: "200"
```
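To confirm the policy took effect, the resulting VPC Lattice target group can be inspected directly. This is a sketch assuming AWS CLI v2 with the `vpc-lattice` service available; `<tg-id>` is a placeholder to substitute:

```bash
# List target groups, then inspect the protocol and health check of one of them
aws vpc-lattice list-target-groups
aws vpc-lattice get-target-group --target-group-identifier <tg-id>
```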

### Multi-Cluster Configuration

This example shows how to configure health checks for a ServiceExport in a multi-cluster deployment. The health check configuration will be automatically applied to all target groups across all clusters that participate in the service mesh.

```yaml
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  name: multi-cluster-policy
spec:
  targetRef:
    group: "application-networking.k8s.aws"
    kind: ServiceExport
    name: inventory-service
  protocol: HTTP
  protocolVersion: HTTP2
  healthCheck:
    enabled: true
    intervalSeconds: 10
    timeoutSeconds: 5
    healthyThresholdCount: 2
    unhealthyThresholdCount: 3
    path: "/health"
    port: 8080
    protocol: HTTP
    protocolVersion: HTTP1
    statusMatch: "200-299"
```

> **Reviewer (Contributor):** is it default behaviour having service with HTTP2 and health check with http1?
>
> **Author:** Good catch, I'll set this to HTTP2 instead.

In this multi-cluster example:
- The policy targets a `ServiceExport` named `inventory-service`
- All clusters with target groups for this service will use HTTP/2 for traffic and the specified health check configuration
- Health checks will use HTTP/1 on port 8080 with the `/health` endpoint
- The configuration ensures consistent health monitoring across all participating clusters
1 change: 1 addition & 0 deletions docs/concepts/concepts.md
@@ -10,6 +10,7 @@ AWS Gateway API Controller integrates with Amazon VPC Lattice and allows you to:
* Discover VPC Lattice services spanning multiple Kubernetes clusters.
* Implement a defense-in-depth strategy to secure communication between those services.
* Observe the request/response traffic across the services.
* Ensure consistent health check configuration across multi-cluster deployments through automatic policy propagation.

This documentation describes how to set up the AWS Gateway API Controller, provides example use cases, and covers development concepts and API references. AWS Gateway API Controller gives developers the ability to publish services running on Kubernetes clusters and other compute platforms on AWS, such as AWS Lambda or Amazon EC2. Once the AWS Gateway API Controller is deployed and running, you will be able to manage services for multiple Kubernetes clusters and other compute targets on AWS through the following:

2 changes: 2 additions & 0 deletions docs/concepts/overview.md
@@ -52,6 +52,8 @@ In the context of Kubernetes, Amazon VPC Lattice helps to simplify the following:
- **Kubernetes multi-cluster connectivity**: Architecting multiple clusters across multiple VPCs.
After configuring your services with the AWS Gateway API Controller, you can facilitate advanced traffic management and application layer routing between services on those clusters without dealing with the underlying infrastructure.
VPC Lattice handles a lot of the details for you without needing things like sidecars.

- **Multi-cluster health check consistency**: When using TargetGroupPolicy with ServiceExport resources, health check configurations are automatically propagated across all clusters participating in the service mesh. This ensures consistent health monitoring behavior regardless of which cluster contains the route resource, eliminating configuration drift and improving reliability in multi-cluster deployments.
- **Cross-platform access**: VPC Lattice allows access to serverless and Amazon EC2 features, as well as Kubernetes cluster features.
This gives you a way to have a consistent interface to multiple types of platforms.
- **Implement a defense-in-depth strategy**: Secure communication between services and networks.
6 changes: 5 additions & 1 deletion docs/faq.md
@@ -16,4 +16,8 @@ Your AWS VPC CNI must be v1.8.0 or later to work with VPC Lattice.

**Which versions of Gateway API are supported?**

AWS Gateway API Controller supports Gateway API CRD bundle versions `v1.1` or greater. Not all features of Gateway API are supported - for detailed features and limitations, please refer to the individual API references. Please note that users are required to install the Gateway API CRDs themselves, as these are no longer bundled as of release `v1.1.0`. The latest Gateway API CRDs are available [here](https://gateway-api.sigs.k8s.io/). Please [follow this installation](https://gateway-api.sigs.k8s.io/guides/#installing-gateway-api) process.

**How do health checks work in multi-cluster deployments?**

In multi-cluster deployments, when you apply a TargetGroupPolicy to a ServiceExport, the health check configuration is automatically propagated to all target groups across all clusters that participate in the service mesh. This ensures consistent health monitoring behavior regardless of which cluster contains the route resource.
31 changes: 31 additions & 0 deletions docs/guides/advanced-configurations.md
@@ -48,6 +48,37 @@ The `{index}` in the annotation corresponds to the zero-based index of the rule

Higher priority values indicate higher precedence, so requests to `/api/v2` will be matched by the first rule (priority 200) before the second rule (priority 100) is considered.

### Multi-Cluster Health Check Configuration

In multi-cluster deployments, you can ensure consistent health check configuration across all clusters by applying TargetGroupPolicy to ServiceExport resources. This eliminates the previous limitation where only the cluster containing the route resource would receive the correct health check configuration.
> **Reviewer (Contributor):** Do we need to call the previous limitation? unless it was documented somewhere else, I don't think it is necessary to call it out.

#### Configuring Health Checks for ServiceExport

When you apply a TargetGroupPolicy to a ServiceExport, the health check configuration is automatically propagated to all target groups across all clusters that participate in the service mesh:

```yaml
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  name: multi-cluster-health-policy
spec:
  targetRef:
    group: "application-networking.k8s.aws"
    kind: ServiceExport
    name: my-service
  healthCheck:
    enabled: true
    intervalSeconds: 10
    timeoutSeconds: 5
    healthyThresholdCount: 2
    unhealthyThresholdCount: 3
    path: "/health"
    port: 8080
    protocol: HTTP
    protocolVersion: HTTP1
    statusMatch: "200-299"
```

### IPv6 support

IPv6 address type is automatically used for your services and pods if
58 changes: 58 additions & 0 deletions docs/guides/getstarted.md
@@ -280,6 +280,64 @@ This section builds on the previous one. We will be migrating the Kubernetes `in
kubectl apply -f files/examples/inventory-ver2-export.yaml
```

### Configuring Health Checks for Multi-Cluster Services (Optional)

When deploying services across multiple clusters, you can ensure consistent health check configuration by applying a TargetGroupPolicy to your ServiceExport. This ensures that all target groups created for the service across different clusters use the same health check settings.

For example, to configure custom health checks for the inventory-ver2 service:

```yaml
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  name: inventory-health-policy
spec:
  targetRef:
    group: "application-networking.k8s.aws"
    kind: ServiceExport
    name: inventory-ver2
  healthCheck:
    enabled: true
    intervalSeconds: 10
    timeoutSeconds: 5
    healthyThresholdCount: 2
    unhealthyThresholdCount: 3
    path: "/health"
    port: 80
    protocol: HTTP
    protocolVersion: HTTP1
    statusMatch: "200-299"
```

Apply this policy in the same cluster where the ServiceExport is created:

```bash
kubectl apply -f - <<EOF
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  name: inventory-health-policy
spec:
  targetRef:
    group: "application-networking.k8s.aws"
    kind: ServiceExport
    name: inventory-ver2
  healthCheck:
    enabled: true
    intervalSeconds: 10
    timeoutSeconds: 5
    healthyThresholdCount: 2
    unhealthyThresholdCount: 3
    path: "/health"
    port: 80
    protocol: HTTP
    protocolVersion: HTTP1
    statusMatch: "200-299"
EOF
```

This configuration will be automatically applied to all target groups for the inventory-ver2 service across all clusters in your multi-cluster deployment.
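Since TargetGroupPolicy is a namespaced custom resource, you can verify the policy object with standard kubectl commands (the plural resource name `targetgrouppolicies` is assumed from the CRD):

```bash
kubectl get targetgrouppolicies
kubectl describe targetgrouppolicy inventory-health-policy
```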

**Switch back to the first cluster**

1. Switch context back to the first cluster
55 changes: 55 additions & 0 deletions pkg/controllers/eventhandlers/service.go
@@ -39,6 +39,14 @@ func (h *serviceEventHandler) MapToServiceExport() handler.EventHandler {
func (h *serviceEventHandler) mapToServiceExport(ctx context.Context, obj client.Object) []reconcile.Request {
	var requests []reconcile.Request

	// Handle TargetGroupPolicy changes more directly for ServiceExport
	if tgp, ok := obj.(*v1alpha1.TargetGroupPolicy); ok {
		requests = h.mapTargetGroupPolicyToServiceExport(ctx, tgp)
		if len(requests) > 0 {
			return requests
		}
	}

	svc := h.mapToService(ctx, obj)
	svcExport := h.mapper.ServiceToServiceExport(ctx, svc)
	if svcExport != nil {
@@ -65,6 +73,53 @@ func (h *serviceEventHandler) mapToService(ctx context.Context, obj client.Objec
	return nil
}

func (h *serviceEventHandler) mapTargetGroupPolicyToServiceExport(ctx context.Context, tgp *v1alpha1.TargetGroupPolicy) []reconcile.Request {
	var requests []reconcile.Request

	targetRef := tgp.GetTargetRef()
	if targetRef == nil {
		return requests
	}

	// Check if the policy directly targets a ServiceExport
	if targetRef.Kind == "ServiceExport" && (targetRef.Group == "" || targetRef.Group == v1alpha1.GroupName) {
		svcExport := &v1alpha1.ServiceExport{}
		key := client.ObjectKey{
			Name:      string(targetRef.Name),
			Namespace: tgp.Namespace,
		}
		if err := h.client.Get(ctx, key, svcExport); err == nil {
			requests = append(requests, reconcile.Request{
				NamespacedName: k8s.NamespacedName(svcExport),
			})
			h.log.Infow(ctx, "TargetGroupPolicy change triggered ServiceExport update",
				"policyName", tgp.Namespace+"/"+tgp.Name,
				"serviceExportName", svcExport.Namespace+"/"+svcExport.Name)
		}
		return requests
	}

	// Check if the policy targets a Service that has a corresponding ServiceExport
	if targetRef.Kind == "Service" && (targetRef.Group == "" || targetRef.Group == corev1.GroupName) {
		svcExport := &v1alpha1.ServiceExport{}
		key := client.ObjectKey{
			Name:      string(targetRef.Name),
			Namespace: tgp.Namespace,
		}
		if err := h.client.Get(ctx, key, svcExport); err == nil {
			requests = append(requests, reconcile.Request{
				NamespacedName: k8s.NamespacedName(svcExport),
			})
			h.log.Infow(ctx, "TargetGroupPolicy change for Service triggered ServiceExport update",
				"policyName", tgp.Namespace+"/"+tgp.Name,
				"serviceName", string(targetRef.Name),
				"serviceExportName", svcExport.Namespace+"/"+svcExport.Name)
		}
	}

	return requests
}

func (h *serviceEventHandler) mapToRoute(ctx context.Context, obj client.Object, routeType core.RouteType) []reconcile.Request {
	svc := h.mapToService(ctx, obj)
	routes := h.mapper.ServiceToRoutes(ctx, svc, routeType)
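For this mapping to run, the ServiceExport controller must also watch TargetGroupPolicy objects with this handler. The sketch below is an assumption for illustration, not part of this diff: it presumes a controller-runtime version whose `Watches` accepts a `client.Object` plus an `handler.EventHandler`, the package's existing `v1alpha1` import, and placeholder names (`setupServiceExportController`, `reconciler`, `h`) constructed elsewhere. It would additionally need `ctrl "sigs.k8s.io/controller-runtime"` and `"sigs.k8s.io/controller-runtime/pkg/reconcile"` imported.

```go
// Hypothetical wiring sketch: re-reconcile ServiceExports whenever a
// TargetGroupPolicy targeting them (or their backing Service) changes,
// so health check configuration propagates across clusters.
func setupServiceExportController(mgr ctrl.Manager, h *serviceEventHandler, reconciler reconcile.Reconciler) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&v1alpha1.ServiceExport{}).
		Watches(&v1alpha1.TargetGroupPolicy{}, h.MapToServiceExport()).
		Complete(reconciler)
}
```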