I have an API on Google Cloud Endpoints. The backend is deployed on GKE and I'd like to expose it via an Ingress so that I can put IAP in front of it.
I am using ESPv2.
I first deployed my service as a LoadBalancer and it was working.
The thing is, my Ingress says:
"All backend services are in UNHEALTHY state"
I understand that the health check does not pass, but I do not get why.
The Service and the corresponding Pods show no errors; however, in the Pod events I can see: "Readiness probe failed: Get http://10.32.1.27:8000/swagger: dial tcp 10.32.1.27:8000: connect: connection refused"
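For what it's worth, a way I can check the readiness path by hand would be roughly this (the pod name is illustrative, and it assumes curl is available on my machine):

kubectl -n my-ns get pods -l app=devfleet-django-endpoint
kubectl -n my-ns port-forward pod/devfleet-django-endpoint-xxxxx 8000:8000
curl -v http://localhost:8000/swagger

but I'm not sure that tells me anything about what the load balancer's health check itself is hitting.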
My configurations for the Pod and the Service look like this:
apiVersion: v1
kind: Service
metadata:
  name: devfleet-django-endpoint-service
  namespace: my-ns
spec:
  # NodePort is required so the GCE Ingress can set up the load balancer
  type: NodePort
  ports:
  - port: 443
    protocol: TCP
    targetPort: 9000
    name: https
  selector:
    app: devfleet-django-endpoint
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devfleet-django-endpoint
  namespace: my-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: devfleet-django-endpoint
  template:
    metadata:
      labels:
        app: devfleet-django-endpoint
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:2
        args: [
          "--listener_port=9000",
          "--backend", "127.0.0.1:8080",
          "--service=my-custom-domain.io",
          "--rollout_strategy=managed",
          "-z", "healthz",
          "--ssl_server_cert_path", "/etc/esp/ssl"
        ]
        volumeMounts:
        - mountPath: /etc/esp/ssl
          name: esp-ssl
          readOnly: true
        - mountPath: /etc/nginx/custom
          name: nginx-config
          readOnly: true
        ports:
        - containerPort: 9000
      - name: devfleet-django-endpoint
        image: my-img
        ports:
        - containerPort: 8000
        imagePullPolicy: Always
        env:
          some_env_data
        readinessProbe:
          httpGet:
            path: /swagger
            port: 8000
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 10
          failureThreshold: 3
        livenessProbe:
          httpGet:
            path: /swagger
            port: 8000
          initialDelaySeconds: 15
          periodSeconds: 60
          timeoutSeconds: 10
          failureThreshold: 3
        volumeMounts:
        - name: devfleet-storage
          mountPath: /secrets/cloudstorage
          readOnly: true
        resources:
          requests:
            cpu: 30m
            memory: 90Mi
          limits:
            cpu: 200m
            memory: 400Mi
      - name: cloudsql-proxy
        image: b.gcr.io/cloudsql-docker/gce-proxy:1.13
        command: ["/cloud_sql_proxy", "--dir=/app",
                  "-instances=production-213911:europe-west1:dev-postgresql=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: prodsql-oauth-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        resources:
          requests:
            cpu: 10m
            memory: 10Mi
          limits:
            cpu: 20m
            memory: 50Mi
      volumes:
      - name: prodsql-oauth-credentials
        secret:
          secretName: secret-name
      - name: devfleet-storage
        secret:
          secretName: secret-name
      - name: app
        emptyDir: {}
      - name: esp-ssl
        secret:
          secretName: secret-name
      - name: nginx-config
        configMap:
          name: nginx-config
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
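Since the Service targets port 9000, as far as I understand whatever the load balancer health-checks lands on the ESP container rather than on Django directly. As a sanity check I was thinking of probing ESP's health path by hand, roughly like this (just a sketch; it assumes curl is available locally and uses -k because ESP serves TLS with my own cert):

kubectl -n my-ns port-forward deploy/devfleet-django-endpoint 9000:9000
curl -kv https://localhost:9000/healthz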
And here is my Ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: devapi
  namespace: my-ns
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ingress-devapi
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-cert"
    kubernetes.io/ingress.class: gce
    ingress.kubernetes.io/enable-cors: "true"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: devfleet-django-endpoint-service
    servicePort: 443
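I also wondered whether I need to point the load balancer's health check explicitly at ESP's health path over HTTPS via a BackendConfig attached to the Service, something along these lines (just a sketch; the name is mine and I have not verified that my cluster version supports custom health checks in BackendConfig):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: esp-backend-config
  namespace: my-ns
spec:
  healthCheck:
    type: HTTPS
    requestPath: /healthz

with the Service annotated with cloud.google.com/backend-config: '{"default": "esp-backend-config"}', but I am not sure this is the right direction.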
Any idea what I am doing wrong?
Thank you