Bug Description
I have 2 peered VPCs. The first one is for general-purpose use, where, among others, the ALB nodes reside; the second one is only for the IPs of EKS nodes and pods.
I use the `ip` target type for my target group, so traffic from the ALB is routed directly to the IPv4 addresses of pods.
Everything was working fine until I added a new value to `alb.ingress.kubernetes.io/target-group-attributes`: `load_balancing.cross_zone.enabled=false`. After disabling cross-zone load balancing, none of the targets were registered. In the logs of the LB controller deployment, I can see this example error message:
```json
{
  "level": "error",
  "ts": "2025-11-29T22:12:06Z",
  "msg": "Reconciler error",
  "controller": "targetGroupBinding",
  "controllerGroup": "elbv2.k8s.aws",
  "controllerKind": "TargetGroupBinding",
  "TargetGroupBinding": {
    "name": "k8s-gobacken-gobacken-fc702f12bc",
    "namespace": "go-backend"
  },
  "namespace": "go-backend",
  "name": "k8s-gobacken-gobacken-fc702f12bc",
  "reconcileID": "99181ed3-1c2a-4a0d-a534-98c9ce3af0d5",
  "error": "operation error Elastic Load Balancing v2: RegisterTargets, https response error StatusCode: 400, RequestID: dee398b3-b0b0-419c-a209-240689556b50, api error ValidationError: When cross-zone load balancing is disabled on this target group, you must specify an Availability Zone for IP target '172.16.0.108' since it is outside of the VPC"
}
```
The relevant part:
```
ValidationError: When cross-zone load balancing is disabled on this target group, you must specify an Availability Zone for IP target '172.16.0.108' since it is outside of the VPC
```
This is partially expected, because the AWS API requires explicitly specifying the AZ when registering a private IPv4 address from a peered VPC as a target. However, the LB controller should automatically discover the AZ of the underlying EC2 instance and provide this data when registering a new target, but it seems that it doesn't do that. I also couldn't find any information about this problem with the LB controller anywhere.
When I manually go into the AWS console and register a target (the private IPv4 of a pod) with a proper AZ specified, the target is successfully registered and everything works perfectly fine. Also, when I go back to my previous setup where cross-zone is enabled, everything works fine too.
So, it seems that the LB controller currently doesn't work with private IPv4 targets that reside in a peered VPC when cross-zone load balancing is disabled.
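For reference, here is a minimal sketch of the equivalent API call that succeeds, using the AWS SDK for Go v2 (the same SDK the error message comes from); the target group ARN, port, and AZ below are placeholders, not values from my account:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	elbv2 "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2"
	elbv2types "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := elbv2.NewFromConfig(cfg)

	// Registering a peered-VPC IP target succeeds only when the
	// AvailabilityZone field is set explicitly.
	_, err = client.RegisterTargets(ctx, &elbv2.RegisterTargetsInput{
		TargetGroupArn: aws.String("arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example/0123456789abcdef"), // placeholder
		Targets: []elbv2types.TargetDescription{
			{
				Id:               aws.String("172.16.0.108"),
				Port:             aws.Int32(443),              // placeholder
				AvailabilityZone: aws.String("us-east-1a"),    // omitting this reproduces the ValidationError
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```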
Steps to Reproduce
- Create 2 AWS VPCs that are peered with each other and contain proper routing tables.
- Install the LB controller and enable it for a few subnets in VPC A, making sure that `alb.ingress.kubernetes.io/target-group-attributes` in the ingress manifest is set to `load_balancing.cross_zone.enabled=false` and `alb.ingress.kubernetes.io/target-type` is set to `ip`.
- Deploy some EKS worker nodes in VPC B.
- Deploy K8s resources that will be used by the ingress (Service, Deployment, etc.).
- See the logs of the `aws-load-balancer-controller` deployment.
Expected Behavior
Private IPv4 addresses are registered in the target group, even if all 3 constraints are fulfilled at the same time:
- The target group uses the `ip` target type
- The private IPv4 addresses of the targets reside in a peered VPC
- Cross-zone load balancing is disabled
Actual Behavior
No targets are registered, so the target group is always empty.
Regression
Was the functionality working correctly in a previous version? [No]
Current Workarounds
Don't use cross-zone load balancing, use the `instance` target type instead of `ip`, or keep the ALB nodes in the same VPC. Each of these fixes the issue, but comes with the disadvantage of not being able to use some features.
Environment
- AWS Load Balancer controller version: `v2.13.4`
- Helm chart `aws-load-balancer-controller` version: `1.13.4`
- Kubernetes version: `1.34`
- Using EKS (yes/no), if so version?: yes, `1.34`
- Using Service or Ingress: `Ingress`
- AWS region: `us-east-1`
Possible Solution (Optional)
Probably update the `prepareRegistrationCall` function, somewhere around here:
```go
target.AvailabilityZone = awssdk.String("all")
```
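If it helps, here is a minimal sketch of the direction such a fix could take; this is not the controller's actual code. The idea: when an IP target falls outside the target group's VPC CIDRs and cross-zone is disabled, resolve the zone (e.g. from the `topology.kubernetes.io/zone` label of the pod's node) and set it on the target instead of leaving it empty. `setAZForOutOfVPCTargets` and `zoneForIP` are hypothetical names:

```go
package targets

import (
	"net/netip"

	awssdk "github.com/aws/aws-sdk-go-v2/aws"
	elbv2types "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2/types"
)

// setAZForOutOfVPCTargets is a sketch of what prepareRegistrationCall
// could additionally do: for IP targets outside the target group's VPC,
// attach an explicit AZ so that RegisterTargets succeeds even with
// cross-zone load balancing disabled.
func setAZForOutOfVPCTargets(vpcCIDRs []netip.Prefix, targets []elbv2types.TargetDescription, zoneForIP func(string) (string, bool)) {
	for i := range targets {
		addr, err := netip.ParseAddr(awssdk.ToString(targets[i].Id))
		if err != nil {
			continue // not an IP target (e.g. an instance ID)
		}
		inVPC := false
		for _, cidr := range vpcCIDRs {
			if cidr.Contains(addr) {
				inVPC = true
				break
			}
		}
		if inVPC {
			continue // in-VPC targets don't need an explicit AZ
		}
		// zoneForIP is a hypothetical lookup, e.g. backed by the
		// topology.kubernetes.io/zone label of the node running the pod.
		if zone, ok := zoneForIP(addr.String()); ok {
			targets[i].AvailabilityZone = awssdk.String(zone)
		}
	}
}
```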
Additional Context
My ingress manifest that doesn't work because of `load_balancing.cross_zone.enabled=false`:
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}
  namespace: {{ .Chart.Name }}
  labels:
    {{- include "helpers.labels" . | nindent 4 }}
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: main
    alb.ingress.kubernetes.io/target-group-attributes: load_balancing.cross_zone.enabled=false,deregistration_delay.timeout_seconds=120,slow_start.duration_seconds=300
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/healthcheck-path: /api/crud/users
spec:
  ingressClassName: alb
  rules:
    - host: "api.{{ .Values.envDomainPrefix }}{{ .Values.domainName }}"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: go-backend
                port:
                  number: 443
  tls:
    - hosts:
        - "api.{{ .Values.envDomainPrefix }}{{ .Values.domainName }}"
```