To start off, I have tested the tutorial at https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer, which works fine. I also tested the same tutorial with a TLS secret added to test HTTPS, which also worked fine.
My problems arise when I create my own image. Here are the steps I take. First, the Dockerfile:
# We label our stage as "builder"
FROM node:9.4.0-alpine as builder
COPY package.json package-lock.json ./
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm i && mkdir /srv/cs-ui && cp -R ./node_modules ./srv/cs-ui
WORKDIR /srv/cs-ui
COPY . .
## Build the angular app in production mode and store the artifacts in dist folder
RUN $(npm bin)/ng build --environment "prod"
FROM nginx
## Copy our default nginx config
COPY nginx/default.conf /etc/nginx/conf.d/
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
## From the "builder" stage, copy the artifacts in the dist folder to the default nginx public folder
COPY --from=builder /srv/cs-ui/dist /usr/share/nginx/html/
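The nginx/default.conf referenced above is not reproduced here; for a single-page Angular app it is usually something along these lines (an assumed sketch, not necessarily the actual file):
server {
  listen 80;
  root /usr/share/nginx/html;
  index index.html;

  location / {
    # fall back to index.html so Angular's client-side routing works
    try_files $uri $uri/ /index.html;
  }
}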

The docker-compose file:
version: '2'
services:
  cs-ui:
    image: "gcr.io/cs-micro/cs-ui:v1"
    container_name: "cs-ui"
    tty: true
    build: .
    ports:
      - "80:80"
gcloud docker -- push gcr.io/cs-micro/cs-ui:v1
kubectl run cs-ui --image=gcr.io/cs-micro/cs-ui:v1 --port=80
kubectl expose deployment cs-ui --target-port=80 --type=NodePort
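For reference, those two kubectl commands generate roughly the following Deployment and NodePort Service (a sketch of the equivalent manifests using the API versions of that era; the generated objects may differ in minor defaults):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cs-ui
  labels:
    run: cs-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      run: cs-ui
  template:
    metadata:
      labels:
        run: cs-ui
    spec:
      containers:
      - name: cs-ui
        image: gcr.io/cs-micro/cs-ui:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: cs-ui
  labels:
    run: cs-ui
spec:
  type: NodePort
  selector:
    run: cs-ui
  ports:
  - port: 80
    targetPort: 80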
Next, the Ingress (test.yaml):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  tls:
  - secretName: tls-certificate
  backend:
    serviceName: cs-ui
    servicePort: 80
which I apply with:
kubectl apply -f test.yaml
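The describe output below (the cs-ui service, the default kubernetes service, the cs-ui deployment and the ingress) comes from commands along these lines:
kubectl describe svc cs-ui
kubectl describe svc kubernetes
kubectl describe deployment cs-ui
kubectl describe ingress basic-ingress
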
Name: cs-ui
Namespace: default
Labels: run=cs-ui
Annotations:
Selector: run=cs-ui
Type: NodePort
IP: 10.35.244.124
Port: 80/TCP
TargetPort: 80/TCP
NodePort: 30272/TCP
Endpoints: 10.32.0.32:80
Session Affinity: None
External Traffic Policy: Cluster
Events:

Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations:
Selector:
Type: ClusterIP
IP: 10.35.240.1
Port: https 443/TCP
TargetPort: 443/TCP
Endpoints: 35.195.192.28:443
Session Affinity: ClientIP
Events:

Name: cs-ui
Namespace: default
CreationTimestamp: Thu, 25 Jan 2018 12:27:59 +0100
Labels: run=cs-ui
Annotations: deployment.kubernetes.io/revision=1
Selector: run=cs-ui
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: run=cs-ui
Containers:
cs-ui:
Image: gcr.io/cs-micro/cs-ui:v1
Port: 80/TCP
Environment:
Mounts:
Volumes:
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets:
NewReplicaSet: cs-ui-2929390783 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 9m deployment-controller Scaled up replica set cs-ui-2929390783 to 1

Name: basic-ingress
Namespace: default
Address: 35.227.220.186
Default backend: cs-ui:80 (10.32.0.32:80)
TLS:
tls-certificate terminates
Rules:
Host Path Backends
---- ---- --------
* * cs-ui:80 (10.32.0.32:80)
Annotations:
https-forwarding-rule: k8s-fws-default-basic-ingress--f5fde3efbfa51336
https-target-proxy: k8s-tps-default-basic-ingress--f5fde3efbfa51336
ssl-cert: k8s-ssl-default-basic-ingress--f5fde3efbfa51336
target-proxy: k8s-tp-default-basic-ingress--f5fde3efbfa51336
url-map: k8s-um-default-basic-ingress--f5fde3efbfa51336
backends: {"k8s-be-30272--f5fde3efbfa51336":"UNHEALTHY"}
forwarding-rule: k8s-fw-default-basic-ingress--f5fde3efbfa51336
static-ip: k8s-fw-default-basic-ingress--f5fde3efbfa51336
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 12m loadbalancer-controller default/basic-ingress
Normal CREATE 11m loadbalancer-controller ip: 35.227.220.186
Normal Service 6m (x4 over 11m) loadbalancer-controller default backend set to cs-ui:30272

I have read countless threads on what to do when the backend status is Unhealthy, but none of them have helped. One mentioned adding a firewall rule as described in this tutorial: https://cloud.google.com/compute/docs/load-balancing/health-checks, which I have done, but it did not help.
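A rule along those lines, assuming the default network and using the NodePort 30272 from the service above, would cover the documented health check source ranges:
gcloud compute firewall-rules create allow-lb-health-checks \
  --network default \
  --source-ranges 130.211.0.0/22,35.191.0.0/16 \
  --allow tcp:30272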
If you have any suggestions I will gladly test them.

In a GKE cluster, you create and configure an HTTP(S) load balancer by creating a Kubernetes Ingress object. An Ingress object must be associated with one or more Service objects, each of which is associated with a set of Pods.
The GKE Ingress controller creates and configures an HTTP(S) Load Balancer according to the information in the Ingress, routing all external HTTP traffic (on port 80) to the web NodePort Service you exposed. Note: to use Ingress, you must have the HTTP(S) Load Balancing add-on enabled.
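One way to confirm that add-on is enabled (CLUSTER_NAME and ZONE are placeholders) is something like:
gcloud container clusters describe CLUSTER_NAME --zone ZONE --format="yaml(addonsConfig.httpLoadBalancing)"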
It turned out our Angular application had a redirect on '/', which returned a 302 response. The load balancer health check expects an HTTP 200, so the 302 makes the check fail and leaves the backend in an UNHEALTHY state.
As soon as we set up a custom health check, it worked.
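On GKE, one way to get a custom health check is through the container's readinessProbe, which the ingress controller uses when it creates the backend health check. A minimal sketch of the container spec (the /healthz path and timings are assumptions; the probed path must return 200):
spec:
  containers:
  - name: cs-ui
    image: gcr.io/cs-micro/cs-ui:v1
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /healthz   # must return HTTP 200; adjust to a path your app actually serves
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
Alternatively, the generated health check can be edited directly under Compute Engine > Health checks in the Cloud Console.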