Transcript for:
Understanding Istio and Service Mesh Concepts

hello everyone, my name is Abhishek and welcome back to my channel. In today's video we will deep dive into the concept of service mesh using Istio. This video tutorial is going to be both theoretical as well as practical, and in the practical section I'm not only going to show you how to install, configure, and set up Istio, but I'm also going to show you how Istio works internally, that is, how the kube-apiserver talks to Istio and how Istio does the sidecar injection in real time. So it's going to be a very interesting video; you probably will not get this information anywhere else, so please try to watch this video till the end. At a high level, the video tutorial covers the following things: what are admission controllers, why are they needed, and how they actually work; what are sidecar containers; what is a service mesh; why do you need a service mesh for your Kubernetes cluster, with pros and cons. Then we will learn how to install and configure Istio. We will learn traffic management using Istio, which is a popular use case of Istio, and to implement traffic management we need Istio custom resources, namely virtual services and destination rules. We will learn all of these things using a demo application. The demo application is going to be a multi-microservice demo application; it is provided by Istio and we are going to use that. Then we will learn the features of Istio such as circuit breaking, mutual TLS, observability, and others. Towards the end we will also learn concepts like gateways, which help users expose the services in the service mesh to the external world, and we will compare gateways with Ingress and Ingress controllers. All the notes related to this video are shared in this GitHub repository: all the commands that I'm going to use, the YAML manifests, everything is available in this GitHub repo, so you can star the repo, fork it, or watch it to get continuous updates. So let's start with the important concept, that is: what is a service mesh, why do you need a service mesh, and how a service
mesh actually works. Let's take the "what" aspect first. As per the definition, a service mesh helps you with the traffic management of your Kubernetes cluster, and especially the east-west traffic management of your Kubernetes cluster. Now what does east-west mean? Let's take a simple e-commerce application to understand this. Let's say there is an e-commerce application deployed on a Kubernetes cluster, and this e-commerce application is a very simple one, created using only four microservices for simplicity: you have a login microservice, then a catalog microservice, a payments microservice, and a notifications microservice. The user workflow is: any user who logs in to this e-commerce application sees the catalog, where the user can browse a catalog of products. If the user selects a product and is interested in purchasing it, catalog talks to payments, and whether the payment succeeds or fails, payments talks to notifications so that the user gets a notification about the payment status. Now this is the workflow, but if this e-commerce application has to be exposed to customers or users outside the Kubernetes cluster, an Ingress is created for the login microservice. It can be an Ingress, or a Service of type NodePort or LoadBalancer, anything; but somehow the user has to access the login microservice from outside the Kubernetes cluster, only then can the e-commerce application be used. So this is the workflow, and in this workflow the traffic can be divided into two parts. One is the ingress traffic, the traffic that comes from outside the Kubernetes cluster; this is called north-south traffic. The other is the internal service-to-service communication, where login talks to catalog, catalog talks to payments, or payments talks to notifications; this does not require any ingress or egress traffic, so it is called east-west traffic. So the traffic between the services within your Kubernetes cluster is usually referred to as east-west traffic, and the traffic that comes from outside your Kubernetes cluster (ingress) or flows outside your Kubernetes cluster (egress) is called north-south traffic. So, by definition, Istio can help you with traffic management, that is, the traffic between the services of your Kubernetes cluster. Now this leads to a question: why? If login can already talk to catalog, catalog can talk to payments, and payments can talk to notifications, because they are already within the Kubernetes cluster, they can talk to each other using the service names or any other mechanism, right? One Kubernetes service can talk to another Kubernetes service; unless there are network policies or similar restrictions in place, they can talk directly. In that case, why do we need a service mesh? The answer is: yes, of course service-to-service communication is possible, but Istio enhances it and adds capabilities to your service-to-service communication, such as mutual TLS. Now what does mutual TLS mean? If you install Istio and allow Istio into a particular namespace, then Istio says it can secure the service-to-service communication. By default these services are talking without any TLS, without any security measures. If you install Istio, it adds mutual TLS to this communication, where the login microservice will have a certificate and the catalog microservice will have a certificate, generated by Istio's certificate authority; you don't need to install anything extra. When login wants to communicate with catalog, both login and catalog present their certificates to each other, and only when both of them trust each other is a connection established. This is slightly different from traditional TLS, because in the traditional TLS approach, usually the server is the one that presents a certificate, and if the client acknowledges it, trust is established and communication proceeds. But here Istio is talking about mutual TLS, which is a more advanced level of security. Don't worry, we will also learn how this happens practically. Talking about the "why" aspect, point number one is that Istio adds mutual TLS for secure service-to-service communication. Not just that: Istio adds other capabilities such as advanced deployment strategies, where using Istio you can implement deployment strategies like canary, A/B, or blue-green in a very easy way. Let's try to understand, in case any of you are not sure what canary is. Let's say catalog is talking to payments; you have a catalog service which is already talking to payments. Let's call this version one of payments, and you want to introduce a new version of payments with advanced features; let's call it payments version two. You have tested it in your test, dev, and staging environments, but you're still not confident enough to put it directly into production for all your customers. So you can use deployment strategies like canary, where initially you request your catalog service to send only 10 percent of traffic to the new version, that is v2, and the remaining 90 percent to v1. Once you look at your Prometheus metrics, run some tests, or see that users are satisfied, you can increase the percentage to 20, then maybe 50, and eventually to 100, at which point you can remove or delete payments version one. So this is how canary works. Similarly there are multiple deployment strategies, like A/B and blue-green. Implementing these deployment strategies is not straightforward on a plain Kubernetes cluster; it takes modification of a lot of resources and adding custom controllers, but with Istio you can do it with much less difficulty. I think these features, mutual TLS and canary, already explain the "why" factor of service mesh, why some customers go with it. But to add to these features, Istio also adds another powerful capability to your Kubernetes cluster, that is observability.
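Before moving on to observability, here is a concrete taste of the mutual TLS feature discussed above. In Istio, strict mutual TLS is typically switched on with a small PeerAuthentication custom resource; this is a sketch (the namespace here is illustrative), and we will apply something very much like it in the demo later:

```yaml
# Sketch: enforce strict mutual TLS for every workload in the "default" namespace.
# Sidecars will then reject any plain-text (non-mTLS) connection.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT   # other modes: PERMISSIVE (the default), DISABLE
```

Note that this is a single, declarative resource: the certificate issuance and the TLS handshake between sidecars are handled entirely by Istio, with no application changes.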
So Istio comes with Kiali out of the box; you just need to enable it. And what does Kiali do for you? It keeps track of your service-to-service communication. Since that communication is anyway going through the service mesh, Kiali keeps track of this information and helps you understand how your services are behaving and what the metrics are; it can help you understand the health and behavior of your services. So you don't need to install any additional observability platform on your Kubernetes cluster; it is taken care of by the Istio service mesh itself. There are many other features, like circuit breaking and traffic splitting; if we keep going there are more features Istio can add to your Kubernetes cluster, but these are some of the important ones. Now I think this explains the "why" factor. We already learned what a service mesh is, and we learned why a service mesh is implemented by customers on a Kubernetes cluster. Now it's time to understand how, because we learned that Istio can add these features, and these features are not that easy to implement. If we take the example of mutual TLS, it's not easy for Istio to implement mutual TLS on your existing Kubernetes cluster. How does Istio do this? I'll make it as simple as possible. What Istio does is, in all the pods of your Kubernetes cluster (of course, only in the namespaces which Istio has access to), within each and every pod, it adds a new container that sits next to the actual container of your Kubernetes pod, and this new container is called a sidecar container. What is inside this sidecar container? The sidecar container has an Envoy proxy application; it's just a proxy server. What this proxy server does is handle the traffic management of your Kubernetes pod: any request coming to your actual container, or any request coming to your pod, will now go through the sidecar container, and any request going out of your pod also goes through the sidecar container. So it handles the complete traffic, and this sidecar container is installed in each and every pod of your Kubernetes cluster. Let's take the same e-commerce application as an example so that you understand it better. We have catalog and payments from our previous example. In the general sense, without Istio, catalog initiates an API call to payments: catalog finds the service URL of payments and uses it to initiate an API call. Catalog can get the service URL from a ConfigMap or from a command-line argument; either way, it initiates an API request and payments sends the response back. Now with a service mesh, the workflow is the same, but when catalog tries to initiate an API call to payments, the request is taken by the sidecar container. So the API call is intercepted by the sidecar container; from there it goes to payments, where the sidecar container in the payments pod again intercepts the request before it reaches payments. So both the inbound traffic and the outbound traffic are intercepted by the sidecar containers. How does that help? If you take the example of mutual TLS, you will understand how sidecar containers manage the traffic. In the same example, when catalog tries to initiate an API call to payments, the sidecar container intercepts the request and adds a certificate, that is, it initiates TLS traffic. When this reaches payments, that pod's sidecar container again intercepts the request and tries to verify the certificate, and along with verifying it, it also presents its own TLS certificate to the catalog service, because it's mutual TLS, right? Catalog and payments trust each other only if both have valid certificates. So if these sidecar containers did not exist, then developers would have to write this logic within their
application, or you would need some third-party tool to implement mutual TLS. So what Istio does beautifully is, without making any change to your application, and without making any significant change to your cluster, it just adds a sidecar container within every pod, and this sidecar container implements mutual TLS. It will help in implementing the canary model of deployment, it will help in implementing circuit breaking, and, most importantly, these sidecar containers help in the built-in observability of Istio. Every feature can be understood using the same concept of the sidecar container. Even observability: because sidecar containers are intercepting the requests, they know which services are being accessed, and what they do is send all this information to istiod. istiod is basically the primary component of Istio; it receives all this information and keeps track of all the service metrics, which helps in observability. Similarly, for the canary model and circuit breaking, we will learn more when we move to the demonstration. So this is the "how" part, but again you can ask me a question: "Abhishek, okay, it's clear that Istio is adding a sidecar container, but how?" Because if Istio has to add a sidecar container to each and every pod created on the Kubernetes cluster, Istio needs to get that information from somewhere. You know, if someone sends a request to the API server for a pod creation, Istio should be immediately notified: okay, someone is requesting a pod creation; do you want to add a sidecar container to it or not? If the API server does not notify Istio, a pod gets created and the sidecar container is not added. So how does the communication between the API server and Istio take place? For that, Istio uses a concept called admission control. Of course, Istio uses a slightly advanced form of admission controller, which is called dynamic admission control, or an admission webhook. We will get to that, but first we should understand the concept of admission control. So what exactly is an admission controller? Take a simple request to the API server: there is a user, and the user wants to create a pod on the Kubernetes cluster. Using kubectl apply -f or kubectl create, the user tries to create this pod. The request goes to the API server, where a component verifies whether the user is authenticated and authorized to perform this request. If the user is authenticated and authorized, the API server takes the object and persists it in etcd, that is, stores the object in etcd. So those are step one and step two: step one is to verify authentication and authorization, step two is to take the object and persist it in etcd. Admission controllers come in between: they intercept the request before the API server creates the object in etcd, and they can mutate or validate the object. Mutate is nothing but modify: admission controllers can modify the object, or they can validate, that is, verify a few things in the pod resource, or any resource being created. For example, let's say you want to create a PersistentVolumeClaim and you did not add a storage class in the PersistentVolumeClaim resource. The request goes to the API server, authentication and authorization happen, and the API server tries to add that resource to etcd. Before adding it, an admission controller called the DefaultStorageClass admission controller comes into the picture. It checks whether the PVC has the storage class field; if it does not, it mutates the object and adds the field to your PVC creation request: the mutating admission controller adds a new field, storage class equals XYZ, and then the object is persisted into etcd. Like that, there are some 30-plus admission controllers available by default in every Kubernetes cluster. Of course, sometimes they are disabled by some distributions, but there are 30-plus admission controllers, and you don't have to install them; they are pre-compiled into the API server. So the API server already has the code for all of these 30-plus admission controllers; you can just enable or disable them. "Abhishek, how do I know what those admission controllers are?" If you simply go to the admission controllers reference page, you can see all of them. This is the one I was talking about, DefaultStorageClass; similarly you have NamespaceExists, NodeRestriction, PodSecurity, ResourceQuota, ServiceAccount, and so on, 30-plus admission controllers. "Can we see a practical example?" For sure, let's try it on our Kubernetes cluster so that you understand it better. So I have this Kubernetes cluster running; it's a minikube cluster, but you can use any Kubernetes cluster, that's okay. First, you can verify whether the admission controllers are enabled on this cluster: just log in to the Kubernetes cluster and go to the file called kube-apiserver.yaml, which is usually present at /etc/kubernetes/manifests.
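For reference, the admission-plugins flag being discussed lives in the API server's static pod manifest, and the relevant part looks roughly like this (an illustrative excerpt; the exact plugin list and flags vary by distribution and version):

```yaml
# Excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml (illustrative)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    # ...many other flags omitted...
```

Because this is a static pod manifest, editing this file causes the kubelet to restart the API server with the new plugin list automatically.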
In this file, you will see that the kube-apiserver is passed a command argument, --enable-admission-plugins, and these are the admission controllers that are enabled on the cluster. In your case the number might be more or less, because it depends on the distribution of Kubernetes you are using; of course you can add new plugins to the list and restart your API server. Out of the full list of admission controllers, roughly ten are enabled on my cluster: NamespaceLifecycle, LimitRanger, ServiceAccount, DefaultStorageClass (which I was talking about during the theory), and a few others. Now let's perform a demo of that same DefaultStorageClass controller: I'm not going to provide the storage class in my PersistentVolumeClaim, and we'll see whether the object gets mutated. Because the DefaultStorageClass controller is already enabled on my cluster, it should get mutated. So we can go to the GitHub repository, to the folder for default admission controllers, where I have already provided the YAML manifests for copying. There are two examples, a mutation one and a validation one. Let's go to mutation: this is the PVC file, and you can see it does not have the storage class field. Okay, let me call this pvc.yaml and put the file here. Now I will do kubectl apply -f pvc.yaml. You will notice all the steps have taken place: the PVC request was sent to the API server, it authenticated and authorized my request, then the DefaultStorageClass mutating controller intercepted the request before the object was stored in etcd, and it added the field. Let's do kubectl edit pvc on my claim and scroll down: this is the field I was talking about; it says storage class "standard", which was not there before. Again, if I go back to my GitHub repository, that particular field, storage class "standard", is not present in the YAML manifest. This concept is called mutation. Similarly, you can also try validation. In the validation folder: I will apply a resource quota to my namespace, that is, I will restrict my default namespace to a certain amount of CPU and RAM, so the namespace cannot use more than one CPU and 2 GB of RAM. That is the restriction I'll put on my namespace: kubectl apply -f quota.yaml. And now, intentionally, I will create a pod requesting 10 GB of RAM. The namespace itself can only use one CPU and 2 GB of RAM, but the pod request I'm going to make is for 10 GB. Ideally the API server would create this object, but because I have a validating admission controller, it should intercept my request and throw an error saying, hey, you cannot do this, because the quota says only 2 GB of RAM can be used within the namespace. I run kubectl apply -f on the pod manifest, and we got this error. And who sent this error to us? The same admission controller enabled on my Kubernetes cluster. Just to quickly show you one more time: there is an admission controller called ResourceQuota, it should be somewhere in the list; this is the validating admission controller that intercepted my request and returned the error. Perfect, so this is how admission controllers work. But you should still have a question: "Abhishek, all these admission controllers are compiled into the API server, so the API server can call them directly. In our case we are talking about Istio, and Istio is an external component; it is not a control plane component of Kubernetes. So how can the API server forward the request to Istio? Will Istio write its own admission controllers, or do something else?" I will explain this once we do the Istio demonstration, because the concept of dynamic admission control is not that simple to understand. I'll first show you the demonstration of how Istio works, and then we can come back and understand how the API server connects to Istio's webhook, where Istio registers a dynamic admission webhook. We will cover that concept after the demonstration. The installation of Istio is pretty straightforward: just go to the documentation (the link is also in the GitHub repository), and if you scroll down you will find a curl command which downloads a folder to your local machine; that folder has all the installation scripts, cleanup scripts, and samples for our demo as well. So I will go to my terminal and run this command; if you're on Windows, you can use Git Bash. It is downloading the versioned Istio folder, and where is it downloading from? From Istio's GitHub repository, and this folder will have everything that is required. So let's see: cd istio-* to change into the downloaded folder. It has the samples, which
we can use for our demonstration; it has the manifests that will help us with the installation; and also bin, which has a command-line utility for Istio called istioctl. Just like Kubernetes has kubectl, Istio has istioctl. What are the next steps? Change into the downloaded directory, which we already did, and export the path where istioctl is available. If I just do ls on the bin folder, this is the istioctl binary which will let us talk to Istio from the command line. Now we will use the same istioctl to install Istio: istioctl install --set profile=demo -y, where -y means yes. There are multiple profiles Istio provides, like production, development, and demo. Demo means it comes up with default configuration values that are useful for a demo; if you go with production, you get stricter values. Demo is the best place to start. It shouldn't take much time, hardly one or two minutes. Okay, now Istio is installed, and it installed a few components: istiod, which, as I mentioned, is the primary component of Istio that performs the control plane duties and all the main concepts of Istio; then an ingress gateway and an egress gateway, and we will come back to these when we talk about gateways. So the installation is complete. Next we can enable, or allow, Istio to access the default namespace. For that you run a command that adds a label to the namespace you want to give Istio permissions to; I set istio-injection=enabled on the default namespace using this kubectl label command. Now we can proceed with deploying a simple application, and this application is the Bookinfo application. This is the architecture of the application; it's not very complicated. It is built using four microservices: we have a product page, then details of the product, multiple versions of the reviews service, and then a ratings service. The product page talks to reviews and details, and the reviews service talks to ratings. The app will look something like this once it is installed and running: you will see some products, the ratings of the product, and also some reviews if they exist. And in this Bookinfo application, each microservice is written in a different programming language: the product page is written in Python, reviews is written in Java, ratings is written in Node.js, and details is written in Ruby. Of course there is no deep significance to it; it's just a polyglot application, so you feel you have exposure to running applications in different programming languages on Istio. And if you watch carefully, the reviews service has three different pods: version one, version two, and version three. That means all the other applications have just one copy, but reviews has three replicas, where each pod is a different version, so you can also learn things like the canary model of deployment; this application helps with that too. Now let's deploy this application and learn about it further. So, kubectl apply -f, and it started creating the services and deployments. If you just do kubectl get pods, we can see all the pods are up and running, and what's important to note is that for each pod in the default namespace, because we enabled Istio injection, there are two containers inside: 2/2 containers are in the running state. We can also do kubectl edit pod on any one pod, and you will see two containers: one is the sidecar container and one is the actual container. This is the actual container, the Bookinfo details application, and if you scroll down you will see another container here, which is the sidecar container, and the sidecar container comes with a bunch of arguments,
environment variables, and everything that is required for the sidecar container to manage the traffic of the pod. Likewise, you can look into each of the pods and you will see the sidecar container configuration. Now let's move to the next step, where I just want to expose this application to the outside world, to show you how the application looks before we perform further activities like mutual TLS. Let's just run this application on our Kubernetes cluster. Deploying is done; now I will expose the application. We have a command here; don't worry, we will learn what these virtual services and gateways are, but first we want to see the application. I will also run minikube tunnel so that I can access the application from my browser; if you don't run minikube tunnel, you can only access it from your minikube server, that is, from inside your Kubernetes cluster. To access it from the browser you need minikube tunnel or some kind of port forwarding. So let me keep this window as it is and open a new tab so that I don't close the tunnel connection. Okay, I have created the tunnel, and we will run the export commands; basically all these export commands do is form the URL. You can also form it on your own; it's not strictly required, I'm just following the documentation. So you can access it directly at this URL, which is just localhost on port 80 followed by /productpage. Okay, so we have the application up and running, and if you watch carefully, we got the product information, the details microservice is populating the details, and we have reviews as well, but we don't have any ratings. If you see my screenshot here, or my previous instance of the application, there are ratings, but the one we just started does not have any. Any guess why? Because reviews is being served by reviews v1, and if you look at the architecture diagram, reviews v1 is not connected to the ratings service; only reviews v2 and reviews v3 are. Because the default load balancing of a service is round robin, if I refresh a couple of times we will see reviews v1 change to reviews v2, and we got the ratings, and after a couple more refreshes, maybe reviews v3. Perfect. So what is happening here: the product page service is talking to different versions, or different replicas, of the reviews pods, where one replica, reviews v1, has no connection to ratings (probably when it was developed there was nothing like ratings yet), but reviews v2 and reviews v3 have the information to connect to the ratings service and get it populated on the homepage. Perfect, so we have this application up and running. What are the steps we performed? We installed Istio and have it up and running, we enabled sidecar injection so each of my pods has a sidecar container injected into it, and we got a simple Bookinfo application running on my minikube Kubernetes cluster. Now let's make use of the sidecar container and implement the features of Istio. What I'm going to do now: if I do kubectl get svc, I get information about all the services. Let's take the product page: if I do minikube ssh and try to connect to the product page using the service IP followed by the port, I'll just curl port 9080, and we got the product page. Similarly, I can hit the product page API at /api/v1/products. So I can talk to the product page if I log in to the minikube cluster, or if I have access to the cluster IP of the service. But what did we learn? Istio enables mutual TLS, where someone can only talk to a service if they have the certificate to talk to it; yet in my case I'm just running a curl command, so how
did I access it so by default IO runs Mutual TLS in the permissive mode the permissive mode says you can either access the service using Mutual TLS or you can also access it without Mutual TLS so what I'm going to show you in the first demo is I will enable if we go back click on Mutual TLS I have this custom resource so I will make sure that Mutual TLS is strict not permissive so after I enable this in the strict mode if I again try to do the mini Cube SSH and run the curl command I should not get the display page or I should get some error let's see if that happens so first let's create it TLS mode. yaml so we are doing Mutual TLS demo and I'll just do Cube CTL apply minus F PLS mode. yam okay let's just give it couple of seconds like if it does not work in your cluster please wait for a couple of minutes sometimes it takes time to reflect the configuration let's do mini Cube SSH again and this time let's try out the same command call connection reset by Pier why because I'm sending out a call request the iso sidecar container is asking me for the certificate I don't have the certificate so it has rejected the connection so this is how Mutual TLS is implemented now if I try to access it from any other service I'm able to do it you know abishek how can you be confident because if I just refresh this page still I'm able to see the review I'm still able to see the ratings that means the internal communication is working fine the catalogs is able to talk to product product is able to talk to details but if I try it using a curl command that is without any service then I'm getting this error so my services are secure so what Haso done here for Meo made sure if anyone tries to access the service without a proper certificate it throws an error so this is how how Mutual TLS is implemented to secure your cluster so this is our demo one now let's try to demonstrate the Kary model of deployment before we go there we should understand some Concepts like virtual service and 
destination rules. Virtual services and destination rules drive the traffic management in Istio; in our use case they will help us implement the canary model of deployment. Let's see how. Take the same Bookinfo application, where currently, if we refresh, reviews v3 changes to reviews v2, or it can change to reviews v1. Now what I want to do is learn the canary model of deployment, where initially I want all my requests to go to reviews v1 all the time, and then, when I introduce a new version, I only want 50% of the traffic to go to the new version and 50% to keep going to reviews v1. Right now it is continuously changing; at this point let's make it talk only to reviews v1. For that, I'll go to my minikube cluster and deploy this yaml manifest from the GitHub repository. This is the traffic-shifting example; traffic shifting is nothing but the canary model, where to start all traffic will go only to the old version. Okay, so this is where virtual services play a critical role: the virtual service for details says every request should go to the destination details, subset v1. What is a destination? The destination rule is another custom resource, which we are going to create, and in it we will say that subset v1 points only to version one of details. Similarly for ratings; all of those just have version one. What is important for us is reviews: the reviews virtual service says any request that comes for the reviews microservice, through the sidecar container, should go to the destination reviews, subset v1. Now we have to create this destination rule, and in it I will say requests should only go to the pod with version one. So let's do that: first I will say kubectl apply -f old-version.yaml. Right now it is only 50% done, because we have created the virtual services, but without the destination rules it is incomplete. So for the destination rules we can scroll down; I'll also upload these to my GitHub repository, they are the only missing piece at this point. I'll show you what the destination rules are doing; they are getting applied. Let's open the destination rule for reviews: kubectl edit destinationrule reviews. Okay, so this is the destination rule, and it says if the subset is v1, go only to version one. There are multiple subsets here, subset v1, subset v2, and subset v3, and we have said in the virtual service that requests should always go to the subset for version one only, so every request will go to the pod of version one. It is applied; let's go back and see. Okay, it is getting refreshed; it went to version one, and even if you refresh a hundred times, every time the request will only go to reviews v1, because the virtual service and destination rule have told the sidecar container that requests should always go to version one. This is how they play a crucial role: they tell the sidecar container how it should behave. Now I want to implement traffic shifting, or canary, so I will make a slight modification; here you might understand it in a much better way. I will go to the traffic-shifting example, and here you can see that in the virtual service I have mentioned destination v1 with 50% weight and destination v3 with 50% weight. Previously I just said destination v1 all the time, but now the virtual service tells the sidecar container to send 50% of the traffic to version one and 50% to version three. Again I'll copy this, save it as canary.yaml, and apply it: kubectl apply -f canary.yaml. And now if you go to the browser you should see 50% of
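Putting the two pieces together, the destination rule with its subsets and the 50/50 virtual service look roughly like this; this is a trimmed sketch modeled on Istio's Bookinfo traffic-shifting sample, so the exact manifests in the repo may differ slightly:

```yaml
# DestinationRule: names the subsets by mapping them to pod labels.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
    - name: v3
      labels:
        version: v3
---
# VirtualService: splits traffic 50/50 between subsets v1 and v3.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 50
        - destination:
            host: reviews
            subset: v3
          weight: 50
```

Promoting the new version fully is then just a matter of changing the weights to 0 and 100, or removing the v1 route block entirely.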
traffic going to V3 let's see if that is correct or not okay it went to V1 let's refresh again it went to V3 let's refresh again okay it's getting refreshed it's not yet refreshed so overall if you make 10 requests or let's say you make 20 requests you can see on an average 10 request goes to V1 and 10 request goes to V3 so 50% of traffic is shared between both of them so this is how Kary model is implemented and now if you are comfortable let's say your Prometheus metrics or your uh ke metric or any automation test that you run on the new version if they are working fine what you can simply do is you can go to the old version. yaml or you can even come to kenary and just simply say wait 100 on version three and wait zero on version one or you can just remove this block as well now let's do apply minus f one more time it's time for the final test where my request should only go to version three refresh again refresh again again and again you will see the Kary model is success fly implemented and now your new traffic only goes to version three how did we achieve this using virtual services and destination rules in a nutshell virtual services and destination rules will help the sidecar container how to manage the traffic of your actual container Kary model is just an example along with that if you have time you can also explore multiple traffic management examples of ISO we have covered Mutual TLS and traffic shifting which is kenary you can also try circuit breaking request timeouts and in each of the example they have shared you the virtual services and destination rules yaml files it will take a lot of time if you are interested you can try but these are the main things where sto is quite popularly used now that we got an understanding about Mutual TLS and how ISO implements traffic management for Kary model of deployment we got information about side car injection now let's go back and try to understand the concept of how sto implements admission controller if 
you look at all the other admission controllers, as I've explained, they are pre-compiled into the API server. The resource quota admission controller we tried, or the storage class admission controller, are all pre-compiled into the API server, so the API server clearly knows how to invoke them and what action those admission controllers will take, whether mutation or validation. In this case, for Istio to add a sidecar container, Istio needs to know when a pod-creation request reaches the API server, and somehow the API server should notify Istio that it can now proceed with the sidecar injection. That concept is called dynamic admission control; I'll make it very simple, since this is not even covered in the Istio docs. It happens in multiple stages: the same pod-creation request comes to the Kubernetes cluster, the API server performs authentication and authorization, but then, alongside the standard admission controllers, there are two special admission controllers called the mutating admission webhook controller and the validating admission webhook controller. They are also in the documentation: next to the traditional, basic controllers, you have the validating admission webhook controller and the mutating admission webhook controller. The responsibility of these controllers is to take the request from the API server and notify Istio, or any other project that wants to implement sidecar injection or any other kind of mutation or validation. So which component of the API server takes this responsibility? It is again an admission controller, one that enables dynamic admission control, but this component does not perform the mutation or validation itself; it only forwards the request to whatever webhook the project has implemented. Istio implements such an admission webhook, which is again a Kubernetes controller and is part of istiod, and
this admission webhook is called by the mutating admission webhook controller. The webhook performs the mutation logic, injecting the sidecar container, and then the object is persisted into etcd. It might look complicated, so let me show you practically; maybe the practical view will help. If you look on the Kubernetes cluster: kubectl get mutatingwebhookconfiguration. This is one resource that Istio creates; it's a custom resource, not a CRD. Let's try to read through it: kubectl edit mutatingwebhookconfiguration istio-sidecar-injector. This architecture is quite technical, but it is good to understand. This mutating webhook configuration is submitted to the API server and read by the mutating admission webhook controller, and what the controller takes from it are the rules, which say: whenever a pod is created, no matter in which namespace, let me know. And how do you let me know? For that, there is a field called service. So this resource is telling the mutating admission webhook controller that whenever a pod is created, it should forward the request to the webhook served by istiod, which lives in the istio-system namespace. So whenever a pod is created, the mutating admission webhook controller, because the configuration tells it to, forwards the request to the istiod admission webhook; the istiod admission webhook takes the API request, performs the mutation, and returns the object back to the API server, from where it goes to etcd. This is how the dynamic admission webhook, or dynamic admission controller, works when Istio's sidecar injection takes place. Say tomorrow you want to implement something similar: you can develop your own webhook, let me call it abhishek-webhook. I can create this abhishek-webhook, but what
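A heavily trimmed sketch of what such a configuration looks like is below; the webhook name and path here are illustrative, and the real resource Istio creates carries more fields (namespace selectors, a failure policy, a CA bundle, and so on):

```yaml
# Sketch of a MutatingWebhookConfiguration like the one Istio registers.
# The API server reads the "rules" to know WHEN to call out, and the
# "clientConfig.service" block to know WHERE to send the AdmissionReview.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: istio-sidecar-injector
webhooks:
  - name: sidecar-injector.istio.io   # illustrative webhook name
    clientConfig:
      service:
        name: istiod                  # the webhook server the API server calls
        namespace: istio-system
        path: "/inject"               # endpoint that performs the mutation
    rules:
      - operations: ["CREATE"]        # fire only on pod creation
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

Reading it this way makes the flow concrete: the rules match pod CREATE requests, and the service block points the mutating admission webhook controller at istiod.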
also I should do is create a configuration of kind MutatingWebhookConfiguration or ValidatingWebhookConfiguration, and in that configuration I have to clearly tell the API server when it should forward requests to my webhook and where my webhook is located: in which namespace, and on which path it should send the API call to me. So anybody can implement this dynamic admission webhook; that said, you need to know how to write a webhook server for Kubernetes and how to create and work with these configurations. So this is how Istio works internally to implement the sidecar injection. Of course, like I mentioned, this is not covered in the Istio docs, but if you are very curious and want to spend some good amount of time, you can go through the Kubernetes document on dynamic admission control. It takes a lot of time to understand, so keep that as a point to note: if you are going to spend your time on it, you are going to spend a lot of it to understand this in detail. Finally, like I said, Istio also comes with built-in observability; you just need to install Kiali. Kiali is not installed out of the box, but all the information related to your services is collected by Istio, which sees the traffic flowing into and out of each service. What you can simply do is go to the Istio docs, search for observability, go to "Visualizing Your Mesh", and run the command there to install Kiali. In my case I think it is already installed; let's see: kubectl get pods -A. Yeah, Kiali is already running; in your case you should run that command. Then what I'll simply do is run istioctl dashboard kiali, and this starts the Kiali dashboard and automatically opens it. This is how I get the Kiali dashboard, and you can look at the graphs, the applications, the workloads, and the services. I'm not going into the details of Kiali and observability because it's a
completely different topic, which we will take up in the monitoring class that we will do in the future. But just to give you the information: Istio ships with Kiali, which can help you with service tracing and distributed tracing, and you can also get information about your services' health. So this is about Istio. I hope you found this video informative, and if you have any questions, do let me know in the comment section.
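As a quick reference for the observability demo, the commands were roughly the following; this is a sketch, and the release branch in the URL is an assumption, so pick the branch matching your installed Istio version:

```shell
# Install the Kiali addon from the Istio samples (release branch assumed;
# use the one that matches your Istio version).
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.20/samples/addons/kiali.yaml

# Wait until the Kiali deployment in istio-system is ready.
kubectl rollout status deployment/kiali -n istio-system

# Open the Kiali dashboard in a browser (blocks until interrupted).
istioctl dashboard kiali
```

Note these commands need a running cluster with Istio installed; on a fresh minikube setup the rollout step may take a minute or two.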