Kubernetes Ingress and Istio Gateway Resource
Written by Peter Jausovec
By default, services running inside the service mesh are not exposed outside of the cluster, which means we can't reach them from the public Internet. Similarly, services within the mesh don't have access to anything running outside of the cluster.
To allow incoming traffic to the frontend service that runs inside the cluster, we need to create an external load balancer first. As part of the installation, Istio creates an istio-ingressgateway Kubernetes Service that is of type LoadBalancer and, with the corresponding Istio Gateway resource, can be used to allow traffic to the cluster.
If you run kubectl get svc istio-ingressgateway -n istio-system, you will get an output similar to this one:
istio-ingressgateway LoadBalancer <pending> ...
The above output shows that the Istio ingress gateway service is of type LoadBalancer. If you're using a Minikube cluster, you will notice that the external IP column shows <pending>. That is because we don't have a real external load balancer, as everything runs locally. With a cluster running at a cloud provider, we would see an actual IP address there: that address is where incoming traffic enters the cluster.
We will be accessing the service in the cluster frequently, so we need to know which address to use. The address we are going to use depends on where the Kubernetes cluster is running.

If using Minikube

Use the script below to set the GATEWAY environment variable we will be using to access the services.
export INGRESS_HOST=$(minikube ip)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export GATEWAY=$INGRESS_HOST:$INGRESS_PORT
If you run echo $GATEWAY you should get back the Minikube IP address together with the ingress port.

If using Minikube (v0.32.0 or higher)

Minikube version v0.32.0 and higher has a command called minikube tunnel. This command creates networking routes from your machine into the Kubernetes cluster and allocates IPs to services of type LoadBalancer. This means you can access your exposed services using an external IP address, just like when running Kubernetes in the cloud.
To use the tunnel command, open a new terminal window, run minikube tunnel, and you should see output similar to this:
$ minikube tunnel
machine: minikube
pid: 43606
route: ->
minikube: Running
services: [istio-ingressgateway]
  minikube: no errors
  router: no errors
  loadbalancer emulator: no errors
If you run the kubectl get svc istio-ingressgateway -n istio-system command to get the ingress gateway service, you will notice an actual IP address in the EXTERNAL-IP column. It should look something like this:
$ kubectl get svc istio-ingressgateway -n istio-system
istio-ingressgateway LoadBalancer ...
Now you can use that external IP address as the public entry point to your cluster. Run the command below to store the external IP value in the GATEWAY variable:
export GATEWAY=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

If using Docker for Mac/Windows

When using Docker for Mac/Windows, the Istio ingress gateway is exposed on localhost:80:
export GATEWAY=localhost

If using hosted Kubernetes

If you're using hosted Kubernetes, run the kubectl get svc istio-ingressgateway -n istio-system command and use the external IP value.
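You can store that value in the GATEWAY variable the same way as with the tunnel approach. A sketch, assuming a standard Istio install: some providers (e.g. AWS) assign a DNS hostname instead of an IP, so this falls back to the hostname field when the ip field is empty.

```shell
# Read the external address of the ingress gateway service.
# Cloud load balancers that hand out DNS names populate .hostname
# instead of .ip, so fall back to it when the IP field is empty.
GATEWAY=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
if [ -z "$GATEWAY" ]; then
  GATEWAY=$(kubectl get svc istio-ingressgateway -n istio-system \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
fi
export GATEWAY
```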
For the rest of the module, we will use the GATEWAY environment variable in all examples when accessing the services.


Now that we have the GATEWAY variable set, we can try to access it. Unfortunately, we get back something like this:
$ curl $GATEWAY
curl: (7) Failed to connect to port 31380: Connection refused
Yes, we have the correct IP address; however, the IP address alone is not enough. We also need a Gateway resource to configure what happens to requests when they hit the cluster. This resource operates at the edge of the service mesh and is used to enable ingress (incoming) traffic to the cluster.
Here's what a minimal Gateway resource looks like:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - '*'
With the above snippet, we are creating a gateway that will proxy all requests to pods labeled with the istio: ingressgateway label. You can run kubectl get pod --selector="istio=ingressgateway" --all-namespaces to get all the pods with that label. The command returns the Istio ingress gateway pod that's running in the istio-system namespace. This ingress gateway pod will, in turn, proxy traffic further to different Kubernetes services.
Under servers we define which hosts this gateway will proxy. We are using *, which means we want to proxy all requests, regardless of the host name.


In the real world, the host would be set to the actual domain name (e.g. www.example.com) at which the cluster services will be accessible. The * should only be used for testing and in local scenarios, not in production.
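For example, a Gateway restricted to a single domain could be written like this (a sketch; example-gateway and www.example.com are placeholders, substitute your own name and domain):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-gateway   # hypothetical name
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - 'www.example.com'   # only requests with this Host header are accepted
```

Requests sent to the ingress gateway with any other Host header would not match this Gateway and would be rejected at the edge.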
With the host and port combination above, we are allowing incoming HTTP traffic to port 80 for any host (*). Let's deploy this resource:
cat <<EOF | kubectl create -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - '*'
EOF
If you run the curl command now, you will get a bit of a different response:
$ curl -v $GATEWAY
* Rebuilt URL to:
* Trying...
* Connected to port 31380 (#0)
> GET / HTTP/1.1
> Host:
> User-Agent: curl/7.54.0
> Accept: */*
< HTTP/1.1 404 Not Found
< location:
< date: Tue, 18 Dec 2018 00:05:17 GMT
< server: envoy
< content-length: 0
* Connection #0 to host left intact
Instead of getting a connection refused response, we get a 404. If you think about it, that response makes sense: we only defined the port and hosts with the Gateway resource, but haven't actually defined anywhere which service we want to route the requests to. This is where the second Istio resource, the VirtualService, comes into play.
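As a preview, a minimal VirtualService that binds to the gateway above might look like this. This is a sketch: my-service is a hypothetical Kubernetes service name, and the port is assumed to be 80.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service           # hypothetical name
spec:
  hosts:
    - '*'
  gateways:
    - gateway                # the Gateway resource created above
  http:
    - route:
        - destination:
            host: my-service # hypothetical Kubernetes service name
            port:
              number: 80
```

With a resource like this deployed, requests hitting the ingress gateway would be routed to the named service instead of returning a 404.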
Peter Jausovec is a platform advocate at Solo.io. He has more than 15 years of experience in the field of software development and tech, in various roles such as QA (test), software engineering and leading tech teams. He's been working in the cloud-native space, focusing on Kubernetes and service meshes, and delivering talks and workshops around the world. He authored and co-authored a couple of books, latest being Cloud Native: Using Containers, Functions, and Data to Build Next-Generation Applications.
