 

AWS Load Balancer Failed to Deploy

I'm trying to create an AWS ALB Ingress on EKS following the steps in the document https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html

I was successful up to step 7, creating the controller:

[ec2-user@ip-X-X-X-X eks-cluster]$ kubectl apply -f v2_0_0_full.yaml 
customresourcedefinition.apiextensions.k8s.io/targetgroupbindings.elbv2.k8s.aws created 
mutatingwebhookconfiguration.admissionregistration.k8s.io/aws-load-balancer-webhook created 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply 
serviceaccount/aws-load-balancer-controller configured 
role.rbac.authorization.k8s.io/aws-load-balancer-controller-leader-election-role created 
clusterrole.rbac.authorization.k8s.io/aws-load-balancer-controller-role created 
rolebinding.rbac.authorization.k8s.io/aws-load-balancer-controller-leader-election-rolebinding created 
clusterrolebinding.rbac.authorization.k8s.io/aws-load-balancer-controller-rolebinding created 
service/aws-load-balancer-webhook-service created 
deployment.apps/aws-load-balancer-controller created 
certificate.cert-manager.io/aws-load-balancer-serving-cert created 
issuer.cert-manager.io/aws-load-balancer-selfsigned-issuer created 
validatingwebhookconfiguration.admissionregistration.k8s.io/aws-load-balancer-webhook created

However, the controller does NOT get to "Ready" status:

[ec2-user@ip-X-X-X-X eks-cluster]$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   0/1     1            0           29m

I'm also able to list the pod associated with the controller, which likewise shows NOT READY:

[ec2-user@ip-X-X-X-X eks-cluster]$ kubectl get pods -n kube-system
NAME                                            READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-XXXXXXXXXX-p4l7f   0/1     Pending   0          30m

I also can't seem to get its logs in order to try and debug the issue:

[ec2-user@ip-X-X-X-X eks-cluster]$ kubectl -n kube-system logs aws-load-balancer-controller-XXXXXXXXXX-p4l7f
[ec2-user@ip-X-X-X-X eks-cluster]$

Furthermore, the /var/log directory also does not have any related logs.
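Since the pod is stuck in Pending, the container was never started, so kubectl logs has nothing to show; the scheduler records its reasons as events instead, which can be read with describe (using the placeholder pod name from above):

[ec2-user@ip-X-X-X-X eks-cluster]$ kubectl -n kube-system describe pod aws-load-balancer-controller-XXXXXXXXXX-p4l7f

The Events section at the bottom of that output typically shows why scheduling failed.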

Please help me understand why it is not reaching the READY state. Also let me know how to enable logging to debug these kinds of issues.

Asked Oct 21 '25 by Vishwas M.R


1 Answer

I found the answer here. A Fargate deployment requires the region and VPC ID to be set explicitly, because the controller normally discovers them from the EC2 instance metadata service, which is not available to Fargate pods.

helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
    --set clusterName=<cluster-name> \
    --set serviceAccount.create=false \
    --set region=<region-code> \
    --set vpcId=<vpc-xxxxxxxx> \
    --set serviceAccount.name=aws-load-balancer-controller \
    -n kube-system
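
After the upgrade, the same check as above should eventually report the deployment as READY 1/1:

[ec2-user@ip-X-X-X-X eks-cluster]$ kubectl get deployment -n kube-system aws-load-balancer-controller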
Answered Oct 24 '25 by user1842409


