Traffic Mirroring with Istio Service Mesh
Written by Peter Jausovec
In addition to more “traditional” traffic routing between different service versions, which can be based on a variety of incoming request properties such as portions of the URL, header values, or the request method, Istio also supports traffic mirroring.
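For context, a routing rule based on request properties looks roughly like the sketch below; the header name is just a hypothetical example, and the greeter-service host and subsets are borrowed from the mirroring example later in this article:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeter-service
spec:
  hosts:
    - greeter-service
  http:
    - match:
        - headers:
            x-beta-tester:
              exact: "true"
      route:
        - destination:
            host: greeter-service
            subset: v2
    - route:
        - destination:
            host: greeter-service
            subset: v1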
Traffic mirroring is useful when you don't want to release the new version and expose users to it, but you'd still like to deploy it and observe how it works, gather telemetry data, and compare the performance and functionality of the new service with the existing one.
You might ask: what is the difference between deploying and releasing something? When we talk about deploying a service to production, we are merely moving the executable code (binaries, containers, whatever form the code needs to run in) into the production environment, without sending any production traffic to it. The service is there, but it's not (hopefully!) affecting any existing services that are running next to it.
Releasing a service involves taking that deployed service and starting to route production traffic to it. At this point, the code we moved to production is being executed, and it will probably impact other services and end users.
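As a concrete example, deploying the v2 version of the greeter service could be nothing more than applying a Kubernetes Deployment like the sketch below (the image name and labels are assumptions); as long as the routing rules keep sending all production traffic to v1, the new pods don't serve any users:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeter-service-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greeter-service
      version: v2
  template:
    metadata:
      labels:
        app: greeter-service
        version: v2
    spec:
      containers:
        - name: svc
          image: greeter-service:2.0.0
          ports:
            - containerPort: 3000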
Routing traffic between two versions and doing blue-green releases is helpful and useful, but there are risks involved: what if the new service breaks or malfunctions? Even if the service is receiving only 1% of the production traffic, it can still negatively impact a lot of users.
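For reference, such a weighted split is expressed in a virtual service roughly like this (a sketch reusing the same greeter-service subsets as the mirroring example below); note that even the 1% slice is real production traffic hitting the new version:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeter-service
spec:
  hosts:
    - greeter-service
  http:
    - route:
        - destination:
            host: greeter-service
            subset: v1
          weight: 99
        - destination:
            host: greeter-service
            subset: v2
          weight: 1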

What is traffic mirroring?

The idea behind traffic mirroring is to minimize the risk of exposing users to a potentially buggy service. Instead of deploying, releasing, and routing traffic to the new service, we only deploy it and then mirror the production traffic that is being sent to the released version of the service.
The service receiving mirrored traffic can then be observed for errors without any impact on production traffic. In addition to running a variety of tests on the deployed version of the service, you can now also use actual production traffic and increase the testing coverage, which can give you more confidence and minimize the risk of releasing a buggy service.
Here's a quick snippet on how to turn on traffic mirroring with Istio:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeter-service
spec:
  hosts:
    - greeter-service
  http:
    - route:
        - destination:
            host: greeter-service
            port:
              number: 3000
            subset: v1
          weight: 100
      mirror:
        host: greeter-service
        port:
          number: 3000
        subset: v2
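The v1 and v2 subsets referenced in the virtual service also need to be defined in a destination rule; here's a minimal sketch, assuming the pods are labeled with version: v1 and version: v2:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: greeter-service
spec:
  host: greeter-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2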
The virtual service defined above routes 100% of the traffic to the v1 subset, while also mirroring the same traffic to the v2 subset. The quickest way to see this in action is to watch the logs from the v2 service while sending some requests to the v1 version of the service.
The response you see on the web page comes from the v1 version of the service; however, you'll also see the mirrored requests showing up in the logs of the v2 version:
$ kubectl logs greeter-service-v2-78fc64b995-krzf7 -c svc -f
> greeter-service@2.0.0 start /app
> node server.js
Listening on port 3000
GET /hello 200 9.303 ms - 59
GET /hello 200 0.811 ms - 59
GET /hello 200 0.254 ms - 59
GET /hello 200 3.563 ms - 59
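To generate those mirrored requests, it's enough to send a few requests to the released v1 version, for example with curl from a pod inside the mesh (the host, port, and path are taken from the virtual service and log output above; if the service is exposed through a gateway, you'd hit the external address instead):
$ curl http://greeter-service:3000/hello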
Peter Jausovec

Peter Jausovec is a platform advocate at Solo.io. He has more than 15 years of experience in the field of software development and tech, in various roles such as QA (test), software engineering, and leading tech teams. He's been working in the cloud-native space, focusing on Kubernetes and service meshes, and delivering talks and workshops around the world. He has authored and co-authored a couple of books, the latest being Cloud Native: Using Containers, Functions, and Data to Build Next-Generation Applications.
