This post was updated to use the latest version of the traffic management model.

One of the benefits of the Istio project is that it provides the control needed to deploy canary services. The idea behind canary deployment (or rollout) is to introduce a new version of a service by first testing it using a small percentage of user traffic, and then, if all goes well, increase, possibly gradually in increments, the percentage while simultaneously phasing out the old version. If anything goes wrong along the way, we abort and roll back to the previous version. In its simplest form, the traffic sent to the canary version is a randomly selected percentage of requests, but in more sophisticated schemes it can be based on the region, user, or other properties of the request.

Depending on your level of expertise in this area, you may wonder why Istio’s support for canary deployment is even needed, given that platforms like Kubernetes already provide a way to do version rollout and canary deployment. Problem solved, right? Well, not exactly. Although doing a rollout this way works in simple cases, it’s very limited, especially in large-scale cloud environments receiving lots of (and especially varying amounts of) traffic, where autoscaling is needed.

Canary deployment in Kubernetes

As an example, let’s say we have a deployed service, helloworld version v1, for which we would like to test (or simply roll out) a new version, v2. Using Kubernetes, you can roll out a new version of the helloworld service by simply updating the image in the service’s corresponding Deployment and letting the rollout happen automatically. If we take particular care to ensure that there are enough v1 replicas running when we start, and pause the rollout after only one or two v2 replicas have been started, we can keep the canary’s effect on the system very small. We can then observe the effect before deciding to proceed or, if necessary, roll back. Best of all, we can even attach a horizontal pod autoscaler to the Deployment and it will keep the replica ratios consistent if, during the rollout process, it also needs to scale replicas up or down to handle traffic load.

Although fine for what it does, this approach is only useful when we have a properly tested version that we want to deploy, i.e., more of a blue/green, a.k.a. red/black, kind of upgrade than a “dip your feet in the water” kind of canary deployment. In fact, for the latter (for example, testing a canary version that may not even be ready or intended for wider exposure), the canary deployment in Kubernetes would be done using two Deployments with common pod labels. In this case, we can’t use autoscaling anymore because it’s now being done by two independent autoscalers, one for each Deployment, so the replica ratios (percentages) may vary from the desired ratio, depending purely on load.

Whether we use one Deployment or two, canary management using the deployment features of container orchestration platforms like Docker, Mesos/Marathon, or Kubernetes has a fundamental problem: the use of instance scaling to manage the traffic; traffic version distribution and replica deployment are not independent in these systems. All replica pods, regardless of version, are treated the same in the kube-proxy round-robin pool, so the only way to manage the amount of traffic that a particular version receives is by controlling the replica ratio. Maintaining canary traffic at small percentages requires many replicas (e.g., 1% would require a minimum of 100 replicas). Even if we ignore this problem, the deployment approach is still very limited in that it only supports the simple (random percentage) canary approach. If, instead, we wanted to limit the visibility of the canary to requests based on some specific criteria, we would still need another solution.

With Istio, traffic routing and replica deployment are two completely independent functions. The number of pods implementing a service is free to scale up and down based on traffic load, completely orthogonal to the control of version traffic routing. This makes managing a canary version in the presence of autoscaling a much simpler problem. Autoscalers may, in fact, respond to load variations resulting from traffic routing changes, but they are nevertheless functioning independently and no differently than when loads change for other reasons.

Istio’s routing rules also provide other important advantages: you can easily control fine-grained traffic percentages (e.g., route 1% of traffic without requiring 100 pods) and you can control traffic using other criteria (e.g., route traffic for specific users to the canary version). To illustrate, let’s look at deploying the helloworld service and see how simple the problem becomes.
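The two-Deployments-with-common-pod-labels approach described above can be sketched as follows. This is a minimal illustration, not taken from the post itself: the Deployment names, labels, and image tags are assumptions. Because both pod templates carry the same `app: helloworld` label that the Service selects on, kube-proxy round-robins across all ten pods, so the 9:1 replica ratio is what produces roughly 10% canary traffic:

```yaml
# Hypothetical two-Deployment canary: traffic split is controlled
# only by the replica ratio (9 x v1 + 1 x v2 ~= 10% to the canary).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
spec:
  replicas: 9
  selector:
    matchLabels:
      app: helloworld
      version: v1
  template:
    metadata:
      labels:
        app: helloworld   # common label; the Service selects on app only
        version: v1
    spec:
      containers:
      - name: helloworld
        image: helloworld:v1   # assumed image tag
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
      version: v2
  template:
    metadata:
      labels:
        app: helloworld   # same common label as v1
        version: v2
    spec:
      containers:
      - name: helloworld
        image: helloworld:v2   # assumed image tag
```

Note how the limitation discussed above shows up directly: to hold the canary at 1% of traffic, this scheme would need a 99:1 replica ratio, i.e., at least 100 pods.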
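To make the contrast concrete, here is a minimal sketch of a weighted routing rule using Istio’s traffic-management API (a `DestinationRule` defining version subsets plus a `VirtualService` splitting traffic). The host name and subset labels are illustrative assumptions; the point is that the 90/10 split holds regardless of how many replicas each version happens to be running:

```yaml
# Sketch: route 10% of helloworld traffic to v2, independent of replica counts.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 90
    - destination:
        host: helloworld
        subset: v2
      weight: 10
```

With a rule like this in place, an autoscaler can size each version’s Deployment purely for its share of the load, while the routing weights, not the replica ratio, determine how much traffic the canary sees.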