Transcript for:
Kubernetes Custom Resources and Controllers

hello everyone, my name is Abhishek and welcome back to my channel. First of all: Kubernetes is easy. Yes, you heard it right. Many people worry about Kubernetes, and whenever we talk about DevOps, people spend most of their time on CI/CD solutions. I've seen many people focus heavily on building pipelines and live CI/CD projects, and if you search on YouTube you'll find plenty of CI/CD content. But the key player in the market is Kubernetes. Take 20 resumes, or search for DevOps related jobs: if you don't find Kubernetes in the job description, you can come back and ask me about it in the comment section. Trust me, that will never happen, because you will see Kubernetes in practically every job description. Kubernetes is the future of DevOps.

If you just want to get into DevOps for a short sprint, you can survive by learning simple DevOps topics. You can do projects on CI/CD and you might find some roles that way, but that is really build and release engineering, not DevOps. If you want to run a marathon in your DevOps journey, Kubernetes is the future. Why am I saying this? Because everybody has been moving towards microservices, not just recently but very actively for the last six to seven years.

For the last six classes we have been talking about Docker and containers, and I have explained enough about why containers are important. From day 24 to day 29 we talked about the evolution of containers, an introduction to containers, the importance of containers, and we also did some live projects on containers using Docker. If you haven't watched those videos, I highly recommend you watch them before you jump on to Kubernetes, because the only prerequisite for learning Kubernetes is Docker. Why do I say Docker and not containers? Because most people are more acquainted with the term Docker; they relate to it more. But in general I am talking about containers.

If you want to be a key player in this space, you need to understand the concept of containers very well. And when I say understand containers, I am not talking about simple Docker commands at all. What I always tell you in my videos is: get strong with your basics. That means you need to understand what a container brings into the picture, how it is different from a virtual machine, what networking isolation is, what namespace isolation is, and why containers are lightweight in nature. You all know that containers are lightweight, but why are they lightweight? How do you secure your containers? We talked about distroless images, and we talked about multi-stage Docker builds. Learn about all of these things before you start your journey with Kubernetes.

So I am assuming that all of you have watched my previous videos from day 24 to day 29, or that you already know the concepts of containers and Docker very well. That is the assumption I am making for today's video. If you do not have that understanding of containers, don't go forward; instead, go back and watch the previous videos. Okay, now
the first question you should ask before learning Kubernetes is: what is the difference between Docker and Kubernetes? From day 24 to day 29 I explained how to build projects on Docker. We deployed some real-time applications, we secured the containers using distroless images and multi-stage builds, and we looked at the life cycle of Docker and containers. What was Docker? As I explained in the previous class, Docker is a container platform. That means Docker makes your interaction with containers, your whole container journey, very easy, because it provides a complete container life cycle, whether through the Docker engine or the Docker CLI.

So if you already have a container platform, and the container platform is offering you a lot of things, what is Kubernetes? The textbook definition: Kubernetes is a container orchestration platform. That is your one-line answer: Docker is a container platform, whereas Kubernetes is a container orchestration platform. But that definition alone does not make you understand anything, right? If I just told you that and closed the topic here, a beginner would not understand Kubernetes at all. So let's try to understand the practical implications: what do I mean by container orchestration platform, and how is it different from a container platform?

If you have worked with containers before, one thing you will notice is that containers are ephemeral in nature. I used this term previously as well; if it hasn't registered with you, ephemeral means short-lived. Containers can die and revive at any time. What do I mean by this? Let's assume you have a host on which you have installed Docker, and on top of that you created, say, 100 containers. Now one of these containers suddenly starts taking a lot of memory, and it impacts your 99th or your 50th container. Why does it impact them? Because those containers are not getting enough resources, so they die; or if a container is not yet scheduled, it simply will not get started. So the life of a container is very short if there is a lack of memory resources, or if there is some issue with the container, like the image not getting pulled. In any of these cases the container will die, because there is only one host here, with 100 containers on top of it.

There is no relation between container number 1 and container number 100, but still the container that is consuming a lot of memory ends up killing container 100. It is not killing it directly; it is because of how Linux works. If you want to understand this in depth: in Linux, there is a priority allocated to each process. When one process takes a lot of memory, the kernel has to decide, depending on its rules, which process to kill. If there are 100 processes it cannot randomly kill process number 50, so there is a particular algorithm in the kernel using which it picks one of the processes to terminate. So here, what I am telling you is that container 1 starts consuming a lot of resources, because of which
container 100 is not getting created at all, or it dies directly. So this is one use case. What is the problem here? You have only one host, on top of it you have installed Docker, and you have created 100 containers; one container is dying because of another one. So what is the first problem we have learned here? Single host. The nature of Docker, of this container platform, is scoped to one single host, and because it is only one single host, the containers on it impact each other. If one container is impacting another, there is no way for the affected container to come up. That is problem number one.

Now let's move to problem number two. Let's say somebody kills one of your containers. What happens? The application running inside that container immediately becomes inaccessible, and it stays that way unless a user or a DevOps engineer starts the container again; somebody has to act on the container that got killed. The missing behavior here is called auto healing. Auto healing means that without the user's manual intervention, the container should start by itself. Does that happen in Docker? If you are playing around with Docker containers on your personal laptop and one dies, does it come up automatically? No, it does not. And there are hundreds of reasons why a container can go down. On your personal laptop you might have just one container, but when you are working in production with your organization you will see thousands of containers, and a DevOps engineer cannot continuously run docker ps and check which of ten thousand containers are in a running state. So there has to be a mechanism called auto healing, and that is a very important feature that Docker, or any plain container platform, is missing by itself. Problem number one was the single host nature of Docker; problem number two is auto healing.

Now let's look at problem number three, which is auto scaling. Take the same example: you have an EC2 instance or a physical host on which you installed Docker, and let's assume, just for easy understanding, that you created one container on it. Say the host has 4 CPUs and 4 GB of RAM. This container can then consume at most 4 CPUs and 4 GB of RAM, because that is the maximum capacity of the host. In practice your container will not get all of the host's resources, because the host itself runs a lot of processes, but for easy understanding let's say it can go up to 4 CPUs and 4 GB. Now, say your application has some 10,000 users, but during a festival season, Christmas or Dussehra or any festival, your users suddenly go from 10,000 to 1 lakh. This happens all the time, right? Let's say a very popular movie is released on Netflix, something from Marvel, the Avengers, or a popular actor. So
usually Netflix might receive load from 10,000 users, but on this particular occasion it will receive load from one lakh, or one million, whatever the increase is. To satisfy this increasing load, the containers serving the application need a specific feature called auto scaling. What is auto scaling? As the load increases, there are two ways to respond. One is manual: there is only one container, so you manually increase the container count from one to, say, ten, because the load increased ten times (I'm just giving an example); you create ten containers similar to C1. The other way is automatic: as soon as the platform sees the load, it should understand that the load is increasing and scale itself up. Docker supports neither of these out of the box.

And there is more to it. Say you have one container called C1, and you want to go from 10,000 to 20,000 user requests, so you manually create another C1 container. On top of this, you also have to configure load balancing. Without load balancing, you would have to tell your users: the first 10,000 of you, access my application at 172.16.3.4, and the next 10,000, access it at 172.16.3.5. That is not possible; Netflix will never tell you that. All you do as an end user is access netflix.com and pick your favorite movie. What happens behind the scenes is that a load balancer distributes the load. Whether you increase the container count manually from one to ten, or your platform increases it automatically, behind the scenes a load balancer notices that instead of one container there are now two, or three, or ten, and splits the load equally across them. So this mechanism of a load balancer supporting your auto scaling is feature number three that is missing in Docker.

So what are the different problems I have told you about? The first is that the Docker platform basically relies on one single host. Whether you install it on your laptop or on an EC2 instance, you are installing Docker on one specific host, putting 10 or 100 containers on top of it, and serving the traffic from there. Problem number two is auto healing: your containers are not able to heal automatically. If a container is dying, a DevOps engineer has to keep track of ten thousand, one lakh, or a million containers and restart them by hand, or wait until customers report that your application has gone down, which is a very bad user experience. Problem number three, as I just explained, is auto scaling.

And the fourth and final problem I want to bring up here is that Docker is a very minimalistic, very simple platform. What do I mean by simple platform? By default, Docker does not provide any enterprise-level application support. Let's say you don't know Docker at all: what are the minimal things you require to run your application, even on a virtual machine? It's not
like running your college project, which you can just run on your laptop. When we deal with enterprise applications, with enterprise solutions, there are a lot of things to take care of. For example, your application should have a load balancer. Let's keep writing these down: a firewall. Without a load balancer your application is not enterprise ready; without a firewall your application is not enterprise ready. What else? Your application has to auto scale, or at least support scaling. Your application has to auto heal, or at least support healing. Your application has to support API gateways. If you keep writing, this list will keep growing. These things are enterprise-level standards, and Docker does not support them by default. Unless you go to Docker Swarm or other higher-level Docker concepts, plain Docker gives you none of this; it is a very simple, minimalistic platform.

So who is solving all of these problems? Let's write down the four problems and see how Kubernetes solves each of them. The first problem is single host, the second problem is auto scaling, the third problem is auto healing, and the fourth problem is enterprise-level support. There are hundreds of problems, but these four are very important ones, and since we are just starting our journey with Kubernetes I cannot cover all of them. Who is solving these problems? As I told you, the one simple answer is Kubernetes. So now, if somebody asks you in an interview who solves the problems of Docker, or what the difference between Docker and Kubernetes is, you have the answer. All these slides, all these 15 minutes, were about this one simple question: the problems with Docker, which is really the difference between Docker and Kubernetes.

Now let me tell you the solutions. So far I have only explained the problems with Docker and claimed that Kubernetes solves them, but I know you will not just trust me; you will say, okay, explain how it solves them. And you should ask that question whenever anybody tells you that Kubernetes solves a problem. So let's try to understand.

By default, Kubernetes is a cluster. What is a cluster? A cluster is basically a group of nodes. Previously, we installed Docker on one personal laptop or one simple EC2 instance. Kubernetes, in a production use case, is installed in a master-node architecture: just like with Jenkins, we create clusters, which means that whenever we install Kubernetes, we create one master node and multiple worker nodes. Somebody will immediately ask me: does that mean Kubernetes cannot be installed on one single node? You can definitely do it, but that is only for your developer environment, just to practice or play around with Kubernetes. In production, whether in high availability mode or standalone, Kubernetes is generally installed as a cluster.

Now, what is the advantage of installing it as a cluster? Go back to the earlier problem, where one container taking a lot of memory was affecting another container. If you install Kubernetes with, let's assume, two nodes, and container 1 is impacting container 99 on one node, Kubernetes can immediately place container 99 on a different node, so that it is no longer affected. In other words, if there is a faulty node, or one faulty application on a node that is impacting the other applications, Kubernetes, because it has a multi-node architecture, can immediately put the affected pods or applications on a different node. So this first problem is already solved by the cluster nature of Kubernetes: Kubernetes is a cluster by default.

Now the second problem, auto scaling. Kubernetes has something called a replication controller, or a replica set; replica set is the new name and replication controller is the old name, like version one and version two. With this, you do not even have to deploy a new container by hand. Let's say your application C1 is receiving increased load: previously it received 10,000 requests, and during one festival it receives one lakh. Kubernetes is basically driven by YAML files; everything in Kubernetes is about YAML files. So you specify what you want in the replicationcontroller.yaml file, in the replicaset.yaml file, or even in the deployment.yaml file.
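To make this concrete, here is a minimal sketch of the kind of YAML file being described: a hypothetical Deployment where bumping the replicas field is the manual scaling step. The names, image, and port are placeholders for illustration, not anything from the video.

```yaml
# deployment.yaml: a minimal, hypothetical example.
# "my-app" and the image are placeholders; the replicas
# field is what you edit to scale from 1 to 10.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 10          # raised from 1 to 10 before the festival traffic
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

Applying a file like this with `kubectl apply -f deployment.yaml` declares the desired replica count, and the replica set behind the Deployment keeps that many containers running.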
Now, if you don't know what these terms are, don't worry. All you need to understand is that, as a DevOps engineer, you can go to one specific YAML file. YAML is basically an indentation-based format, like JSON. You can simply go to this YAML template file and say: increase my replicas from 1 to 10, because I know that tomorrow is a festival and my traffic is increasing. That is the manual way. Kubernetes also supports something called HPA, the Horizontal Pod Autoscaler, with which you can say: whenever one of my containers reaches a threshold of 80 percent load, spin up one more container. In such cases it will keep spinning up containers, so even if the load goes from 1 million to 10 million, the Horizontal Pod Autoscaler feature of Kubernetes can handle it. This is how you achieve auto scaling. Two problems solved.

Now let's go to problem number three: auto healing. What is auto healing? The word heal itself means that whenever there is damage, Kubernetes has to control the damage, or fix it; most of the time it controls the damage. What does controlling the damage, or auto healing, mean? Let's say that for some reason one of your containers is going down. There are hundreds of reasons why a container can go down; I'll explain the classic reasons why a pod or a container goes down, and the standard debugging steps for when it happens, but for now let's just assume that your container is going down. In the case of Docker, as I told you, you have to look at the docker ps output, go through the list of containers, understand that one of your containers went down, and restart or recreate it yourself. Kubernetes instead has a feature called auto healing: whenever a container is going down, even before it goes down, Kubernetes starts a new container. How does this work? In Kubernetes there is something called the API server; tomorrow I'll explain the Kubernetes architecture, what the API server is, and what the different components are. For now: whenever the API server understands that one of the containers is going down, whenever it receives a signal that a container is going down, Kubernetes rolls out a new container even before the old one goes down. The end user will not even notice that a container went down and a new one came up, unless yours is a very heavy application; in some cases that might happen, but I am only talking about general usage. In Kubernetes we usually deal in terms of pods, not containers, but for now understand that even if your container is going down, Kubernetes starts a new container, and with that we have achieved the auto healing feature.

So that is three problems solved: auto healing, auto scaling, and the single host nature of Docker, because Kubernetes is a cluster and has the scope to move a container from one specific node to another.
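The Horizontal Pod Autoscaler mentioned above is itself just another YAML resource. Here is a hypothetical sketch using the 80 percent threshold from the example; the Deployment name and replica bounds are placeholders. Note that in a real cluster the HPA also needs a metrics source (such as metrics-server) and CPU requests set on the pods so it can compute utilization.

```yaml
# hpa.yaml: hypothetical Horizontal Pod Autoscaler sketch.
# "my-app" is a placeholder Deployment name.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # add pods once average CPU passes 80%
```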
problem number four: the enterprise nature of Docker. As I told you, Docker does not have many enterprise support capabilities. It does not support firewalls, it does not support load balancers by default; it does not support a lot of things unless you go to Docker Swarm. So what did people do? Kubernetes is a tool that originated at Google. The people at Google were using a tool called Borg, and you can consider Kubernetes as one part of Borg, or as an initial open source solution derived from it; Borg itself is an even more capable solution, but it is not an open source tool, so we don't have full details there. What the people at Google did was build an enterprise-level container orchestration platform. Why? Because Docker was just a container platform; it did not have these capabilities, and running your application on a platform that is not enterprise ready is not advisable. That is why nobody uses plain Docker in production. You might use Docker Swarm in production, but Docker by itself is never used in production, because it is not an enterprise-level solution. Docker is a container platform that lets you play with containers on your personal laptop or on your EC2 instances; it has a container runtime that lets you run containers and manage their life cycle, but it is not an enterprise solution, because it does not have capabilities like auto healing, auto scaling, load balancer support, firewall support, support for API gateways, whitelisting, and blacklisting. These are all features that you require to run your application in production.

Flipkart or Amazon cannot simply say: the Docker platform runs containers, so let me move to Docker. The first question that organizations, MNCs, and corporates will ask is: I appreciate a solution like the Docker platform, but is it suitable for our organization? Does it support all of these capabilities? Because I want to blacklist a few clients, or whitelist a few particular IPs, or blacklist somebody who is trying to perform a DDoS, a denial of service attack. All of these capabilities are required, Docker does not have them, and Kubernetes is the one aiming to solve this problem.

Now, does Kubernetes solve this problem 100 percent? To answer in a nutshell: definitely no. You can talk to any expert who has been in the world of Kubernetes; it is not as simple as it was in the world of virtual machines. When we were dealing with virtual machines, ten years back, or seven to eight years back, everybody was on virtual machines, integrating external tools with them was far easier, and virtual machines offer far more security compared with containers. But Kubernetes is evolving, and it is backed by wonderful people at the CNCF. There are many contributors to the CNCF (even I am one of them), and the goal of this community is to make Kubernetes a better place. Kubernetes has very good backing, and every day lots of enhancements are made to it. You will see many projects in the CNCF, like Podman, Buildpacks, and Prometheus; these are CNCF incubated or adopted projects. They might have been created by someone else, but the CNCF has adopted them, which means there is a community constantly focusing not just on the Kubernetes application but on the tools around Kubernetes.

Kubernetes by default also does not provide a lot of capabilities, but it provides concepts like custom resources and custom resource definitions, using which you can extend Kubernetes to any level. For example, by default Kubernetes does not support advanced load balancing capabilities; everybody knows this, and it is a practical truth. By default Kubernetes has Services and kube-proxy, which give you only basic load balancing, like round robin. This is one of the major problems, so how did Kubernetes solve it? Kubernetes introduced custom resources and custom resource definitions, and it told applications like F5 NGINX: you create a Kubernetes controller, so that people can use your load balancer inside Kubernetes too. This concept was called Ingress controllers. Similarly, Kubernetes is advancing every day, improving and getting near that 100 percent, and we will reach it. This is also one of the reasons why some companies still hesitate to implement Kubernetes in production, and why people are migrating to Kubernetes in production slowly.

So Kubernetes is one such tool that you definitely have to watch out for, and like I told you on the very first slide, Kubernetes is easy; don't worry about it. If you understood these four topics, your part for today is done. Let's assume that you have already learned five percent of Kubernetes.
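As an illustration of the Ingress concept just mentioned, here is a hypothetical Ingress resource; the host, Service name, and port are placeholders. An Ingress controller such as the NGINX one watches for resources like this and configures its load balancer accordingly. The resource itself is only desired state: without an Ingress controller installed in the cluster, nothing acts on it.

```yaml
# ingress.yaml: hypothetical sketch of an Ingress resource.
# Host, Service name, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: nginx     # handled by an NGINX Ingress controller
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app      # placeholder Service in front of the pods
            port:
              number: 8080
```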
is learning the next 95 percent and this next 95 percent will completely depend on your foundations that is your first five percent that is to understand why you need to learn kubernetes if you understand the why statement then with your effort in learning kubernetes you will understand the rest 95 percent okay so step by step in our next classes we'll learn about concepts like pods we'll learn about concepts like deployments and services and even before all of these things I'll explain to you the architecture of kubernetes okay because that is very important and on the very first day like tomorrow when I explain the architecture of kubernetes some people might not understand the complete architecture and you might feel that oh there are so many components in kubernetes now I have to learn about all of these components but on the very first day you will not understand all the components I'm very sure about it you might understand the definitions you might feel like you understood it but practically gaining understanding of all the kubernetes components will take some time so don't lose hope in our next videos we'll start learning with pods we'll start learning with deployments services and Ingress controllers we'll start talking about admission controllers so it is a long journey stay with me and you will definitely learn kubernetes because kubernetes is very easy I hope you like this video if you like the video click on the like button if you have any feedback for me definitely post that in the comment section and don't forget to share it with your friends and your colleagues thank you so much for watching the video take care everyone bye see you in the next video in this video I'll be talking about the kubernetes architecture so before we jump onto the topic for today let me start with a very light note question why kubernetes is called k8s so
everybody knows that kubernetes in short is called k8s but just to start with a fun question let's see how many people can answer why kubernetes is actually called k8s this is not at all an interview question I'm just trying to start with a very simple question because we are going to deal with a very complicated concept okay so let's try to understand the architecture of kubernetes but before that if you know the answer definitely put that in the comment section so coming to the architecture of kubernetes firstly you should understand the difference between Docker and kubernetes that is the same thing that we tried to understand on day 30 so if you haven't watched our previous video that is day 30 I'll highly recommend you to watch that previous video and then come back to the video for today that is the architecture of kubernetes the reason why I'm telling you this is if you don't understand what a Docker platform or a container platform offers and why we needed to evolve to a container orchestration platform you will never understand the reason for the kubernetes architecture so on a very high level what I told you is kubernetes offers four fundamental advantages over Docker that is kubernetes is by default a cluster in nature or cluster in behavior then kubernetes offers something called auto healing kubernetes offers something called auto scaling and finally it offers multiple enterprise level capabilities like advanced load balancing security related things and advanced networking so it offers multiple enterprise level features which is a major difference between Docker and kubernetes so we understood these four things in detail and today I am going to explain to you the architecture of kubernetes also using these four examples so you might ask me that Abhishek there are hundreds of videos on
the internet which explain the kubernetes architecture right so everybody says that kubernetes has something called a control plane and kubernetes has something called a data plane right this is something that everybody explains and probably if you have watched the previous videos or any other video or if you have even read the kubernetes documentation you know that there are multiple components in the control plane like the API server a component called etcd a component called the scheduler then you have a controller manager and then you have a cloud controller manager which is called CCM and similarly in the data plane also you have multiple components like your kubelet your kube-proxy and your container runtime but what exactly are all of these things so even I can explain to you that these are the different components in the control plane these are the different components in the data plane and each component does these things but you will never understand the architecture of kubernetes that way so that's why what I am going to do is compare this against Docker so let us try to understand two basic things in Docker the simplest thing is a container whereas in kubernetes the simplest thing is a pod so I will compare both of these things what happens when a container is created in Docker and what happens when a pod is created in kubernetes so that you will directly understand the architecture of kubernetes you yourself will see the advantage of each and every component in kubernetes and why kubernetes requires these many components whereas in Docker you have two to three components by the end of this video you will understand the advantage of each and every component and why they are actually
required okay so watch this video till the end so that you get a clear understanding of these components in the kubernetes architecture and you will say that kubernetes is very easy that is our primary goal to make kubernetes easy okay so let's start with the creation of a container in Docker so let's say you have this platform okay this is a virtual machine on top of which you install Docker and what you have done as a user is you have written a Dockerfile and built images I am not going there but you have run a container using a basic command in Docker that is docker run okay so you said docker run and then you ran a container but what is happening under the hood let's say you have installed a Java application and on the platform you don't have a Java runtime will the application actually run no it will not run similarly even when you are running a container you need to have something called a container runtime okay without a container runtime your container will never run so in Docker the container runtime comes with the Docker Engine itself (under the hood Docker uses containerd and runc, and kubernetes used to talk to Docker through an adapter called dockershim) so this is something that is happening under the hood in Docker okay now if we move to kubernetes kubernetes also needs similar behavior but because kubernetes is an advanced concept or because kubernetes provides you enterprise support with auto healing auto scaling and cluster-like behavior what you basically do with kubernetes is you create a master and you create a worker okay so for a basic example I am just using one master and one worker node architecture so that it will be very easy for you guys to understand but in general there will not be one worker there will be multiple workers in kubernetes it doesn't mean that you cannot create kubernetes with one single node you can also do that but in production you always have multiple masters and multiple
workers but for easy understanding let's say you have just one master and you have just one worker so what happens is in kubernetes you will not directly send the request to the worker your request always goes through the master okay so your request always goes through something called a control plane now why you need to do this I'll explain or you will even understand it by yourself so when you deploy in kubernetes the smallest unit of deployment is a pod whereas in Docker you deploy a container you can slightly understand both of them are more or less similar kinds of things I'll explain the difference in detail in tomorrow's class but for now just understand that a pod is just like a wrapper over your container which has some advanced capabilities so when a user tries to deploy a pod similar to a container in Docker your pod gets deployed okay let's say your pod is getting deployed on this specific worker node but you have a component in kubernetes that is called kubelet so what is this kubelet doing basically this kubelet is responsible for running your pod okay in Docker you have the Docker Engine whereas in kubernetes you have something called kubelet which is responsible for maintaining this kubernetes pod okay it always checks whether the pod is running or not and if the pod is not running because kubernetes has a feature called auto healing it has to inform kubernetes that okay the pod is not running do something so that's why kubernetes has a component called kubelet but even for the pod to run like I explained here there needs to be something called a container runtime right inside a pod you will definitely have a container so for this container to run even on kubernetes you need to have something called a container runtime but the only difference is in kubernetes Docker is not mandatory in Docker like I told you the runtime comes from Docker itself
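to make this concrete, here is a minimal sketch of what a pod definition looks like — the name and image below are only illustrative placeholders, not something specific from this class:

```yaml
# a minimal pod: one container wrapped by the pod object
# (name and image are illustrative placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: my-first-pod
spec:
  containers:
    - name: my-app
      image: nginx:1.25   # any container image you built with Docker works here
      ports:
        - containerPort: 80
```

the kubelet on whichever worker node this pod lands on is the component that asks the container runtime to actually start that container.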
but in kubernetes you can use dockershim you can use containerd you can use CRI-O what are all these things these are all alternative container runtimes okay Docker ships with one runtime stack whereas kubernetes can support containerd kubernetes can support CRI-O kubernetes can support dockershim or any other container runtime which implements the kubernetes container runtime interface let's not go into the details of it but understand that kubernetes has a standard called the container runtime interface (CRI) if some container runtime it can be CRI-O it can be containerd it can be dockershim implements this interface or the standard that kubernetes has set then kubernetes allows you to use that specific container runtime so what are the two components that we learned we have the kubelet and we have the container runtime in kubernetes the kubelet is basically responsible for ensuring that the pod is always running if the pod is not running then the kubelet will inform a component in kubernetes I'll keep that component in suspense but the kubelet will inform that specific component that okay something has gone wrong with the pod let us restart it or let us do something with it so that is the responsibility of the kubelet and the container runtime you already understood now in one of the previous classes I told you that in Docker there is something called docker0 or a default networking in Docker that is called bridge networking so this networking is mandatory for running your container even here in kubernetes you have something called kube-proxy so this kube-proxy basically provides you networking every pod that you are creating every container that you are creating has to be allocated an IP address right and it has to be provided with load balancing capabilities because I told you kubernetes
has something called auto scaling when you scale your pod instead of one replica if you have two replicas of your pod then there has to be a component which says okay send fifty percent of the requests here and fifty percent there so that is taken care of by kube-proxy so we talked about three components one is kube-proxy which provides networking IP addresses and also the default load balancing capabilities in kubernetes then you have the kubelet which is actually responsible for running your application and if your application is not running or if your pod is not running then the kubelet informs one of the components in the control plane that okay something is going wrong and finally you have the container runtime which actually runs your container so these are the three components that are available on the worker node so see you directly understood what are the different components that are available in the worker node of kubernetes so you are already done with the data plane of kubernetes or you are done with the worker component of kubernetes isn't it easy all of you understood the components that are in the worker node tomorrow if somebody asks you in an interview what are the components that are present in the worker node of kubernetes you can directly tell them let me erase all of this stuff okay so you can directly tell them that in the kubernetes worker node there are three components and those three components are nothing but let's write them you have kube-proxy you have the kubelet and you have something called the container runtime okay and you should be practically able to explain the purpose of each of them as well that is the reason why I took Docker as an example so that you guys understand it so again let's repeat it the kubelet is basically responsible for the creation of the pods and it will basically ensure that the pod is always in the running state if it is not then it takes the necessary action using the
kubernetes control plane and then you have something called kube-proxy kube-proxy is basically responsible for the networking like allocating the IP addresses or load balancing basically it uses iptables on your Linux machine okay let's not go into the details of iptables but just understand that kube-proxy uses iptables for networking related configuration and finally you have the container runtime which is responsible for running your container okay so the worker component is done now let us move to the control plane or the master component so this worker node or the data plane is basically responsible for running your application using these three components you have technically everything to run your application right the kubelet is deploying kube-proxy is providing the networking the container runtime is providing the execution environment for your container so why do you actually need the control plane at all you should get this question the reason for that is for any enterprise level tool there are some specific standards okay now the cluster is one specific standard like I told you kubernetes has a cluster now who will decide that when a user has created a pod okay should the pod be created on node one should the pod be created on node two or should the pod be created on node three so this is one specific decision but there can be multiple such decisions and there should be a heart or a core component in your kubernetes that has to deal with such kinds of instructions okay when multiple users are trying to access your kubernetes cluster or when some people are trying to do some kind of hacking there has to be a component in kubernetes which basically acts as the core component of your kubernetes and takes all the incoming requests whether
in the future you want to implement some identity provider related configuration like SSO or you want to do some security related stuff there has to be a core component which is basically doing everything in kubernetes and that core component is called the API server and this component is present in your master component or you can also call it the control plane of your kubernetes so what is the purpose of the API server the API server is the component that basically exposes your kubernetes okay so this kubernetes has to be exposed to the external world all of these other things are basically internal to your kubernetes the data plane and all the worker nodes but the heart of kubernetes is your kubernetes API server which basically takes all the requests from the external world now let's say the user is trying to create a pod he tries to access the API server and the kubernetes API server decides that okay node one is free but to schedule the component on node one you have a component in kubernetes that is called the scheduler okay so what is the responsibility of the scheduler the scheduler is basically responsible for scheduling your pods or scheduling your resources on kubernetes okay so who receives the request the API server but who decides and acts on where the pod lands that is the kube-scheduler okay so what are the two things that we have learned till now one is the API server and the second thing that we learned is the scheduler so the scheduler is basically saying go and schedule this on node one or node two.
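as a rough sketch of what the scheduler looks at when making that decision, a pod spec can carry hints like resource requests or a node selector — the values below are illustrative assumptions, not required settings:

```yaml
# illustrative pod spec with scheduling hints
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-pod
spec:
  containers:
    - name: my-app
      image: nginx:1.25
      resources:
        requests:          # the scheduler only considers nodes with this much free capacity
          cpu: "250m"
          memory: "128Mi"
  nodeSelector:            # optional: only schedule on nodes labeled disktype=ssd
    disktype: ssd
```

the kube-scheduler filters out nodes that cannot satisfy the requests and the selector, then places the pod on one of the remaining nodes.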
it is receiving this information from the API server after this let's say that you are deploying your production level applications on this kubernetes cluster there has to be a component inside your kubernetes that basically acts as the backing store of the entire cluster information okay even when we were talking about Jenkins I told you that backup is very essential in kubernetes there is a component that is called etcd so etcd is basically a key value store and the entire kubernetes cluster information is stored as objects or key value pairs inside this etcd okay so the other component that we learned is etcd what happens without etcd you don't have the cluster related information tomorrow if you want to restore the cluster or get any information etcd is an essential component and finally you have two more components that are the controller manager and the cloud controller manager let's put the cloud controller manager aside for a moment and understand what a controller manager is so basically like I told you kubernetes supports auto scaling so to support auto scaling kubernetes has to automatically detect things and act on them so for that kubernetes basically has some controllers okay for example the ReplicaSet so the ReplicaSet is the one that is maintaining the state of your kubernetes pods so tomorrow let me say that one pod is not enough and I will auto scale one of my pods to two pods so there has to be a component in kubernetes that ensures that the two pods are actually running so that is taken care of by the ReplicaSet in the kubernetes yaml file if you say I need two replicas then the ReplicaSet controller basically ensures that the two pods are always running now there has to be a component in kubernetes which ensures such controllers are always running and that component is called the controller
manager if you did not understand the controller manager don't worry about it in future classes when we talk about deployments and services you will understand by yourself what a controller manager is but for now just understand that in kubernetes by default there are multiple controllers like ReplicaSets and there has to be a program or a component which ensures that these controllers are running that component is called your controller manager or the manager which is managing these controllers is called the controller manager finally you have one component that is called the cloud controller manager okay CCM many people get confused with this concept so that's why I thought I'll take it as a separate concept and explain it to you okay so you all know that kubernetes can be run on cloud platforms like EKS or you can also run it on AKS or GKE so what is happening is you are running your kubernetes on cloud platforms so for these cloud platforms let's say you are using Elastic Kubernetes Service there is a user request to create a load balancer or a request to create storage if you directly send this information to kubernetes then kubernetes has to understand the underlying cloud provider okay if kubernetes has to create a load balancer on AWS or if kubernetes has to create a storage service on AKS or on Azure then kubernetes has to translate the request from the user into the API request that your cloud provider understands okay so this mechanism has to be implemented in your cloud controller manager that means let's say tomorrow there is a new cloud created called Abhishek okay and you want to run kubernetes on this platform called Abhishek now what kubernetes tells you is that okay I cannot write logic for all of these different cloud providers I will provide you a component
called the cloud controller manager so this cloud controller manager is an open source utility okay the code is available on GitHub tomorrow if Abhishek creates a new cloud provider what Abhishek can do is go to this open source GitHub repository and write the logic for his cloud provider inside this cloud controller manager he can create a pull request to the cloud controller manager saying that okay I have implemented a new cloud and I want to support kubernetes on my cloud provider so for that reason what Abhishek has to do is write a bunch of logic and submit it to the cloud controller manager so if you are running kubernetes on premises this component is not at all required or this component does not have to be created at all on your kubernetes cluster so these are the various components of your kubernetes so if you have to sum up or put that in one specific slide basically you have kubernetes divided into two parts one is your control plane and one is your data plane so if you have two worker nodes on your two worker nodes you will have the kubernetes data plane components that are three components one is your kubelet second is your kube-proxy third is your container runtime so every kubernetes worker node has these three components in some documentation you will not see the container runtime listed but at the end of the day the container runtime is required so I consider it a component as well okay so this is worker node one but even on worker node two you'll have these three components okay one two three every worker node will have these components and then you have something called the kubernetes master which has components like your API server which is the heart of your kubernetes every request is received by this API server then you have your scheduler which schedules the resources whether it has to go on worker node one or worker node two the API server will take the decision and the scheduler will schedule on that
specific node and then you have something called etcd which is basically your data store or a key value store which stores all the information of your cluster and then you have the controller manager which is the manager for your kubernetes inbuilt controllers and finally you have something called the cloud controller manager okay so these are the different components you have to explain in an interview if your interviewer is asking tell them that these are the components on your kubernetes master and these are the components on your kubernetes workers so this is the control plane and this is the data plane the control plane is the one that is controlling the actions and the data plane is the one that is actually executing those actions okay I hope the concept is clear you understood what are the master or control plane components and what are the worker node components so before practically trying this out consider this as an assignment that I am giving to you write detailed notes okay watch this video and write detailed notes and post them on your LinkedIn so that everybody understands when an interviewer is trying to approach you he understands that okay this guy has architecture knowledge of kubernetes so post on LinkedIn saying that okay today I understood the kubernetes architecture these are the different components in kubernetes and this is how kubernetes basically works so you can draw a specific diagram you can basically use paint or something and show how one component connects with another component take pod creation as an example and put all the details including the diagram as well as the written part into your GitHub profile and share that URL on LinkedIn so that you build a GitHub profile and you can also share that information on LinkedIn so this
is the assignment for today and I hope you understood the concept you understood each and every component if you did not understand something put that in the comment section I'll definitely reply to your comment explaining how that component works if you like the video click on the like button if you have any feedback share that with me in the comment section and don't forget to share this video with your friends and colleagues so this is the video for today guys I'll see you in the next video tomorrow where we'll try to understand the kubernetes pod thank you so much for watching the video take care everyone bye hello everyone welcome back to my channel and in this video we'll see how to install a kubernetes cluster or how to deploy a kubernetes cluster on your local machine or you can also use this method to create a kubernetes cluster on your virtual machine wherever it is so basically for development purposes or to learn kubernetes you cannot afford to create a full-blown kubernetes cluster so you need a development kubernetes cluster so there are many local kubernetes clusters like minikube or k3s and you can also use kind or some other tools but the easiest thing that you can do or the one that has been around for a long time is minikube so in this video I'll show you how to install minikube on your laptop or your virtual machine and how to use it so there are two easy steps one is to install minikube and the second is to install kubectl so what is kubectl kubectl is your kubernetes command line tool with which you can interact with your kubernetes cluster you can also do it with the kubernetes UI or the kubernetes dashboard but the most preferred way of doing it is using kubectl so the easiest way is to directly go to the minikube official documentation so this is the minikube official documentation
if you see here minikube.sigs.k8s.io (sigs stands for special interest groups on k8s.io) so this is the official documentation and you need to have these prerequisites like you have to have two CPUs 2GB of free RAM and 20GB of free disk space and of course an internet connection and if you want to install minikube on any of the platforms like Windows or macOS you need to have a hypervisor installed and what is the purpose of a hypervisor it basically serves for creating a virtual machine on top of your laptop or your server so that's the only prerequisite that you need to have and I'm not going into the details of creating a hypervisor because it differs from platform to platform so I'll assume that you already have a hypervisor so the first thing that you do is step over to the installation process and choose the operating system that you are on and you can simply download minikube from here or you can simply execute the scripts that are available here and it doesn't take much time so if you download the binary let's assume you're on a Linux platform you download the binary and add it to your path it hardly takes five to ten minutes and once you have that you can simply use this command called minikube start so minikube start would start a cluster for you and you can also pass some parameters like the driver that you want to use but the basic command is minikube start and the next thing that you need to have like I showed you in the previous slide is kubectl up and running so kubectl can be easily downloaded again go to the official kubernetes documentation kubernetes.io and search for the install tools page and you can simply download the kubectl binary for the operating system that you are on so let's say you prefer Linux once you click on this you can easily download the binary
and once you have that your kubectl would be configured against the cluster that you are using like the kubernetes cluster that you're using and you can execute your kubernetes commands and there are other instructions on how you can operate your minikube cluster like you can pause your minikube cluster or you can unpause your minikube cluster you can stop the cluster whenever you don't require it or for any other reason and you can create multiple clusters with a single minikube instance like you can create a development cluster and you can create a testing cluster you can run multiple clusters with minikube and you can also set different configurations you can increase your memory or do different things here so just type the command minikube and you'll see all the options that you have and these are the simple examples that I have shown here and one more important thing about minikube is it supports a lot of add-ons so you can install add-ons like your Ingress controller or the operator lifecycle manager with which you can install operators and you can do different things with minikube add-ons so this is a simple video on using and installing minikube guys so if you have any questions or if you need a detailed video on how to install minikube please post it in the comment section and don't forget to like share and subscribe thank you so much everyone and in this class we are going to see how to deploy our first application in kubernetes so before watching this video I'll highly recommend you to watch the previous videos day 30 31 and 32.
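one idea from the day 32 architecture discussion worth keeping in mind as we start deploying applications is the ReplicaSet controller keeping a fixed number of pods alive; a minimal sketch (the names and image here are only illustrative) looks like this:

```yaml
# illustrative ReplicaSet: the controller keeps exactly 2 pods running
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-rs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:               # pod template used to create or replace pods
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25
```

if one of the two pods dies, the ReplicaSet controller inside the controller manager notices the mismatch and creates a replacement from the template — that is the auto healing behavior we talked about.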
the reason why I ask everyone to watch these videos is because before you start your journey with kubernetes you have to understand the differences between Docker and kubernetes that is one part of it and after that you should also understand two things one is the architecture of kubernetes and the next is how to install kubernetes right so we covered three topics in day 30 31 and 32 so if you don't have the knowledge of these things then I will recommend you to not watch this video go back and watch those videos and then come back because only then you will understand today's concept so from day 30 I have been stressing a few points on where kubernetes is better than Docker and why people move to kubernetes one is because kubernetes is a cluster two is kubernetes offers you auto scaling three kubernetes offers you auto healing right and kubernetes also offers something which is very important enterprise level behavior right so using kubernetes you can support a lot of things for your containers so these are the four primary things and to achieve all of these things you have to learn a few terminologies okay so like we learned the terminology of Docker in one of our previous classes similarly we should understand a few concepts in kubernetes before we go into it so I am not going to talk about the architecture of kubernetes here because we already covered it but I am going to introduce you to a few things which will make your understanding of kubernetes better because I don't want to directly jump in and explain what a pod is in kubernetes and how to deploy a pod and how to install your application it would hardly take me 15 minutes to do that but I will properly explain the basics and then we will go with the demo okay so that your fundamentals are clear firstly we are moving from Docker to kubernetes right I mean we are moving from
containers to a container orchestration environment. in kubernetes the lowest level of deployment is a pod; you cannot directly deploy a container. in docker, what you do is build a container and deploy a container, and in kubernetes we will also use the containers you have built, because at the end of the day, whether it is kubernetes or docker, the end goal is to deploy your applications in containers; that is the concept of containerization. but what kubernetes says is: don't deploy your application as a container, deploy it to me as a pod. now what is a pod, why should you deploy your container as a pod, and why can't you directly deploy a container in kubernetes as well? this has to be a fundamental question, because once you start learning kubernetes the very first thing you will see is people talking about pods. if in docker you run your applications as containers, why do you have to run them in kubernetes as pods, and how is it different? in terms of definition, a pod is described as a definition of how to run a container. what does that mean? in docker, whenever you want to run a container, you say docker run -d (or -it), followed by the name of the image; then you pass -p to do some port mapping, -v to do some volume mounts, and if you have a network you pass --network followed by the network details. so in docker you pass all of these arguments to run a container on the command line, whereas in kubernetes you put those specifications in a pod.yaml file. so in kubernetes you basically have a wrapper, or you basically have a
concept that is similar to a container but abstracts the user-defined commands into a pod specification yaml. if that is confusing, don't worry, I am going to explain it very clearly. what you do in kubernetes is, instead of a container, you deploy a pod. now a pod can be a single container or multiple containers; I'll tell you why a pod can hold multiple containers and what the advantages are, but for now just go with a single container. assume you are building a pod with one single container: at the end of the day a pod is exactly like a docker container, and when you have one single container the only difference is that instead of using a command called docker run and passing all the different arguments, you put all of them in a yaml file. inside the yaml file you say something like apiVersion: v1, then you provide the name of this pod and so on, and then you provide the specification; inside the specification you provide all the details of the container, so you have a containers section inside which you provide the specification of your containers. once you look at the yaml definition of a pod you will understand that it is exactly similar to your container; the only thing is that instead of the command line you put everything in a yaml file. now you might ask: if things are going well with docker containers and you can deploy everything as a container on the docker platform, why has kubernetes introduced this complexity, why do you have to run things in kubernetes using yaml files? the thing is, kubernetes, like I told you, is an enterprise-level platform, and what it wants to do is bring in declarative capabilities, or it wants
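The docker-run-to-pod mapping described above can be sketched as a minimal pod manifest. This is the standard example from the kubernetes documentation (the same nginx example used later in the demo); the image and port are just the example values.

```yaml
# minimal pod.yaml -- the yaml equivalent of
#   docker run -d --name nginx -p 80:80 nginx:1.14.2
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```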
to build a standardization. in kubernetes we deal with everything through yaml files: whether it is a pod resource, a deployment resource, or services (we are going to talk about all of these in the future), everything is written in yaml files only. so you have to master yaml files. you don't have to memorize how to write a pod yaml file or a deployment yaml file; don't worry about that, we have a bunch of examples and everybody makes use of these examples, whether a senior devops engineer or a junior one, from the official kubernetes documentation or from samples. but the thing I want to mention is that you have to understand how yaml files are written; only then will you become an expert in kubernetes, because every day we deal with yaml files in kubernetes. now, like I told you, a pod is one container or a group of containers. why does it have to be one or a group? most of the time a pod is a single container, but there are cases where you have sidecar containers or init containers. what are those? they are containers that support your actual container. just to give you an example, say you have your application deployed in a container, and it wants to read some config files or some user-related files from another container. in such cases, instead of creating two different pods in kubernetes, you can put both containers in a single pod, and what the pod guarantees is: if you put one, two, or multiple containers inside a single pod, kubernetes will ensure that both containers get some advantages. that's why you put a group of containers inside a single pod when it is required. what are the advantages? if you put a group of
containers in a single pod, say container a and container b, then kubernetes will allow them shared networking and shared storage. this way, container a and container b inside a single pod can talk to each other using localhost; that means if container a wants to talk to container b on port 3000, it can simply access localhost:3000, so the application can be accessed directly and the information retrieved. and if both of them want to share some files, then because both of them are in one single pod, they can share files as well. that is one of the reasons why people put multiple containers in a pod, but it is a very rare case; the usual practice is to create sidecar containers or init containers, which is an advanced topic I'll explain later when we talk about service mesh or other advanced concepts of kubernetes. for now, just understand that there is a pod, and inside the pod you have a container. what kubernetes does is allocate a cluster IP address to this pod, and you can access the applications inside the containers using this pod's cluster IP address; IP addresses are not generated for the containers but for the pods. don't overthink the concept here, because it is fairly simple: a pod is basically a wrapper that kubernetes has created around a container to make the life of devops engineers easy, because when you deal with hundreds or thousands of containers in production, if you have a wrapper like a pod which defines everything in a yaml file, then a developer or a devops engineer can go to a git repository and look for
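The shared-networking and shared-storage behaviour described above can be sketched as a two-container pod. The names, images, and mount path here are all hypothetical; this is just an illustration of the pattern, not the exact manifest from the video.

```yaml
# sketch of a two-container pod: both containers share the pod's network
# namespace (they can reach each other on localhost) and the emptyDir volume
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: shared-data
    emptyDir: {}            # scratch volume visible to both containers
  containers:
  - name: app               # the actual application container
    image: nginx:1.14.2
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: sidecar           # helper container reading the same files
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```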
the pod.yaml file, and they will understand everything about the container: that the application is running inside it on port 80, that it has a volume mount, what its networking is, and many other details of your docker container. so kubernetes has created a wrapper for it. in most cases, when you are dealing with a pod, you deal with a single container, and you access the container using the cluster IP address that kubernetes gave to the pod. who is assigning this cluster IP address? if you watched the previous videos, kube-proxy is generating this cluster IP address. perfect, so this is the concept of a pod in kubernetes, and the very first application we deploy will be deployed as a pod; don't worry, when we do the demo you will understand this even better. one more concept I want to introduce here is kubectl. what is kubectl? just like you have the docker CLI whenever you want to run docker commands, in kubernetes you have something called kubectl; kubectl is the command line tool for kubernetes, with which you can directly interact with kubernetes clusters. say you have a kubernetes cluster with 10 nodes in it: to find out how many nodes are in your cluster you can just run kubectl get nodes. how will you learn these commands and their different options? I'll show you, don't worry. if you want to see how many pods there are, you can simply say kubectl get pods; if you want to see how many deployments there are, kubectl get deployment; and if you want to delete a deployment or create a deployment, for all such interactions with kubernetes we have kubectl. so in today's class we will first install kubectl, then
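The kubectl interactions listed above can be summarized as a short command sketch; the deployment name and manifest filename are only examples.

```shell
kubectl get nodes                   # list the nodes in the cluster
kubectl get pods                    # list pods in the current namespace
kubectl get deployment              # list deployments
kubectl create -f pod.yaml          # create resources from a manifest
kubectl delete deployment my-app    # delete a deployment (name is an example)
```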
we will create a kubernetes cluster, that is minikube. why a minikube kubernetes cluster? in the last class I showed you how to create a kubernetes cluster on AWS using kops, but for that you need some free credits on AWS; you can also run EKS or other systems, but again you need credits. so if you don't want to spend on your kubernetes clusters, you can learn using a local kubernetes cluster such as minikube, k3s, or kind; installation of all of them is fairly simple, don't worry about the installations at all. the only thing is that these local kubernetes clusters are not equal to your full-blown kubernetes clusters, but for our demo and learning purposes they are fine, because we are not running huge, CPU- and memory-intensive applications, and we are not going to set up high availability or anything like that at this point. so you can use minikube and you won't have to spend on AWS. the first thing we will see is how to install kubectl, then how to create a kubernetes cluster locally using minikube. I have a complete video on this as well; I'll share the link in the description, so if you find today's installation section going too fast, don't worry, you can refer to that complete video. so: kubectl, minikube, and then we will see how to deploy a pod, which is our first application on kubernetes. is everything clear so far? let me stop sharing here and proceed with the demo part. let me share my terminal, just a second, and let me increase the font a bit; perfect, you should be able to see my terminal now. the very first thing we'll do is start with the installation of kubectl. to start with the installation of kubectl,
just go to your browser and search for kubectl installation; you will land on the page called install tools on the kubernetes site. click on it, then choose your platform: do you want to install kubectl on linux, macos, or windows? for example, I am using macos, so let me click on macos. then there are multiple options: do you want to install it on an Intel chip or an Apple silicon chip? silicon means your Mac M1, M2, or other arm processors; if you are using an older Mac you are on Intel. just copy the script and execute it, and your kubectl installation is done; this is very, very simple and barely takes a minute or so for the entire installation. once you have kubectl installed, just run kubectl version, and your kubectl is up and running. perfect. after this, like I told you, we'll proceed with the installation of a local kubernetes cluster. here there are multiple options: you can use minikube, k3s, kind, or microk8s. in the videos I am going to demonstrate, I will prefer minikube, because many subscribers are already using minikube, and if I teach with kind they would have to do some additional network settings; that's why I'll proceed with minikube. but just to let you know, on my local setup, whenever I am practicing things, I prefer kind. once you learn kubernetes you can also move towards kind, but for an easy way to start with kubernetes, start with minikube. why is kind better? because kind is basically kubernetes in docker, which means your kubernetes nodes, your entire kubernetes setup, run as docker containers. this is a slightly advanced concept of how kind handles kubernetes clusters, but you can create hundreds of kubernetes clusters even on your personal laptop using kind, whereas with
minikube that is not really practical. but for now let's bother about only one single cluster, so let's use minikube. firstly, install minikube: go to your browser, search for minikube, and you will land on the minikube kubernetes page. click on it and you will find the installation instructions, where you will be asked for your operating system. if you are on linux, click on linux, and be very careful with the architecture: if you are using x86-64, use that option, and if you are using arm64, click on arm64 (arm64 is the arm processor). most people on linux will be using x86-64, unless you have changed your configuration or are using something like an IBM Power or IBM Z system. in my case I'm using macos with an arm64 processor, and as soon as I change that selection, you will see the command change. let me copy these commands and execute them; as soon as I execute this, minikube is installed. the reason nothing happened for me is that I already have minikube, but the installation really is that simple: you just run these two commands and your minikube installation is done, and you can run minikube to verify it. perfect, so I have my kubectl and my minikube, and now I have to proceed with creating a cluster. so what is minikube? minikube is a command line tool that allows you to create a kubernetes cluster, but right now only minikube itself is installed; your kubernetes cluster is not created yet. to do that, the simple command is minikube start. if you just run minikube start, your kubernetes cluster will be started, but if you are using mac or windows, understand how minikube creates a cluster: it will create a virtual machine first, and on top of this virtual machine it will create a single-node kubernetes cluster.
what is a single-node kubernetes cluster? like I told you, in production or real-time scenarios we use multi-node kubernetes clusters, where you have one master node, or three master nodes, and three, four, or a hundred worker nodes, whatever the requirement is; in general, for high availability, you have three master nodes and n worker nodes. but because minikube, like I told you, is a demo cluster, a test cluster, your practice cluster, it just creates one virtual machine and runs a single-node kubernetes cluster on top of it. to create a virtual machine on top of macos or windows, you first need a virtualization platform, and most of the time one comes by default. if you are on mac, you can just use hyperkit, which comes by default, so what I'm doing is: minikube start, pass the memory requirements, whatever is required, and then --driver=hyperkit. here you can change the values; you can change the driver to virtualbox or hyperkit, whatever your requirement is. let's say you are not bothered about these things; in today's class we are only learning the basics of kubernetes, and in such cases even a simple minikube start is more than enough. the only difference is that if you just run minikube start, the kubernetes cluster will by default use the docker driver, and the docker driver is better avoided when you move to advanced kubernetes concepts; in those cases, use the command with --driver=hyperkit. now, I think I have spent enough time explaining how to install a kubernetes cluster. my kubectl is configured; to understand that, just run kubectl
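The minikube start variants discussed above look roughly like this; the memory value is only an example, and which drivers are available depends on your platform.

```shell
# simplest form -- uses the default driver (docker on most setups)
minikube start

# explicit VM driver and memory, as suggested for macos in the video
minikube start --memory=4096 --driver=hyperkit

# virtualbox works the same way if that is what you have installed
minikube start --driver=virtualbox
```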
get nodes. when you run kubectl get nodes, you will notice that kubectl is already connected to your kubernetes cluster, and it says there is one node running, called the minikube node; its status is ready, and this node itself is your control plane and data plane, because you have a one-node architecture here. awesome, so minikube is done, kubernetes is installed, and my node is up and running, so what are we waiting for? you can directly start with the installation of a pod. how to do it? again, go to the kubernetes documentation and search for pod. like I told you, a pod is basically described by a yaml file, and you can simply copy this example yaml file, because we are just starting with kubernetes, and even once you are advanced with kubernetes you will still take these examples as references, because nobody is going to memorize them; learning these specific fields by heart gives you no advantage. all you need to understand is: copy this specific example, even for future cases, and understand where you have to update the values. the yaml structure will remain the same whether you are creating one pod file today, a pod for a different application tomorrow, or a pod for another application the day after; the definition is the same, and the only things that change are the values. these are all keys, and the values will change. so today let us try the default image provided in the example, nginx, but if you want, you can replace this image with any application we created in our previous docker demos, where we created my-first-docker or some golang-based
applications and some python-based applications. that's fine, you can use any of those images, or go with the default example kubernetes offers here, because we just want to run our first pod and see how a pod works. here the name of the image is nginx:1.14.2; you can change it, like I told you, and whenever you make that modification, make sure you make the port change as well. the example kubernetes gives us says the container port is 80, but in your case your application's container port could be 8000, 9000, or anything else, so modify it accordingly. in this case, though, the image is nginx:1.14.2 and the container port is 80. let us first compare this with the docker command, so that everybody is clear, because you people are coming from docker; let us see what the equivalent command for this is in docker. here we are just saying docker run -d, so we are running it in the background; then, you don't have to pass an image flag in docker, you can simply give the image nginx; then --name, and the name we are giving is nginx; and then -p 80:80 for the port mapping.
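The comparison just described can be written out as a one-line sketch; this is the rough docker equivalent of the example pod manifest, not a command run in the video.

```shell
# rough docker equivalent of the nginx pod manifest from the kubernetes docs
docker run -d --name nginx -p 80:80 nginx:1.14.2
```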
so that is the equivalent docker command for this pod. the reason I explained it this way is to make you understand that, like I told you, a pod is basically a specification of how to run your container; that's why I showed you how the equivalent command looks in docker. save the file, and now your kubectl comes into the picture: kubectl, similar to the docker CLI, is the kubernetes CLI, and the command you run is kubectl create -f pod.yaml. as soon as you do this, you will see that your pod is created, which means your application is created. how do you check? in docker you would run docker ps; here you say kubectl get pods, and you see that the kubernetes pod is running. if you add -o wide, it will print the extended details of this pod, including its IP address, and you can simply curl that IP address. in this case you have to log in to the cluster first, just like previously, when we were not exposing an application from the docker container externally, we logged into the container and executed against it. so log in to your kubernetes cluster; the command is easy, just run minikube ssh. if you are using a real-time kubernetes cluster, then instead of minikube ssh you would ssh to the master or any worker node IP address. then just curl the pod IP, and you will notice that your application is running; it says thank you for using nginx. so your first ever kubernetes application is created, and you were able to verify it using kubectl get pods -o wide. now the first question you should ask me is: Abhishek, how do you remember all of these commands? I have been working on kubernetes for a long time, but for somebody starting out there is a very good reference called
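The deploy-and-verify steps above can be summarized as a short command sketch; the pod IP shown is just an example value, you should use whatever kubectl prints on your cluster.

```shell
kubectl create -f pod.yaml   # create the pod from the manifest
kubectl get pods             # check that the pod is running
kubectl get pods -o wide     # also prints the pod's cluster IP address
minikube ssh                 # log in to the minikube node
curl 10.244.0.5              # pod IP from the -o wide output (example value)
```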
the kubectl cheat sheet. just search for kubectl cheat sheet and you will see this specific page; go to it and you have a bunch of kubectl commands. go through this page whenever you want to find a specific command you don't understand; even I reference this page, because it has a bunch of examples, and all of them are very, very handy. let's say I want to find a command related to getting pods: I can search for get pods, and it gives me all the options, like kubectl get pods, getting pods in all namespaces, getting the complete description of a pod, and so on; all of these are provided here. so keep the kubectl cheat sheet as a reference. now, things are fine: I have installed my first pod, my pod is running, everything looks good, and I was even able to access the pod once I ssh'd into the cluster. what's next? just as you created the pod, you can also run kubectl delete pod followed by the name of the pod, and your pod will be deleted; kubectl basically manages the whole lifecycle. but what comes next? two things. first, like I told you, pod.yaml is a specification of your docker container, of how a docker container has to be run, and you can enhance this specification as well: you can add, for example, volumes and volume mounts (the syntax I'm sketching here is not exact, don't worry about it at all). we will learn these things as we go ahead, because I don't want to complicate matters at this point by explaining how to add persistent volumes, volumes, and volume mounts to a pod; a lot of this is not required while we are just learning kubernetes. so for now, you understood how to deploy your first application. the next thing you have to ask me is how to add auto scaling and auto healing, because these were the topics I was telling you about; this is how
kubernetes is better than docker or any container platform. so you should ask me exactly this question: how do I add these capabilities? this is the reason we started learning kubernetes, because kubernetes is an enterprise platform, which we already saw by looking at the architecture and everything. so the next thing is: kubernetes provides auto scaling and auto healing, how do I get them for my application? if you ask me that question, Abhishek, how do I get these auto healing and auto scaling capabilities for my application, the answer is: on top of the pod you have a wrapper called a deployment in kubernetes. you have to use a deployment in kubernetes to get features like auto healing and auto scaling, which will be tomorrow's topic. so to start with kubernetes you always start with a pod, but to get these advanced capabilities we will move from pod to deployment. now you might ask why we had to learn about pods at all if we are going to use deployments, given that a deployment is just a wrapper. tomorrow, when I show you how to write a deployment.yaml file, you yourself will see that a deployment and a pod are pretty much the same; the only thing is that we change the kind: instead of kind: Pod we say kind: Deployment, and we add a few more things, like a template section holding the pod template specification. more or less, what a kubernetes deployment does is act as a wrapper on top of your pod, and it is going to be your way to deploy applications in kubernetes. in real-time production scenarios you will not deploy pods directly; you will deploy deployments, or statefulsets, or daemonsets, things we will learn later. but to understand those, you need your foundations to be correct, that is, you need to understand how a pod works
in kubernetes. so today we understood how a pod works: we logged into the cluster, we accessed the pod, all of those things are done. the final thing I have to show you is how to verify the application. let's say you have some issues with the application you are running: kubectl also offers some commands for that. let me create the pod one more time; the pod is created. now, using kubectl itself, you can debug your applications: you can say kubectl logs followed by the name of the pod, and once you provide the name of the pod here, you will see the logs of your application. kubectl logs nginx, right? as soon as you run it you will see the logs (it is still not showing anything, don't worry about it), but using kubectl logs you can verify the logs of your kubernetes pod. and along with kubectl get pods, which gives you the pod information, you can also run kubectl describe followed by the name of your pod; if you do this, you will notice that it prints all the information about your pod, including its current status. so if your interviewer asks you how you debug a pod, you can simply tell them that you use a command called kubectl describe pod, with which you get the status of everything inside a pod: whether the pod is currently running, and if there is any error or issue with the pod, what that error or issue is. you will get all that information, and once you understand it, you can also get the information from kubectl logs followed by the name of the pod. if your application is emitting logs — sorry, what is the issue here, oh, kubectl logs nginx — currently this demo application that kubernetes shared with us is not emitting any logs, but in real time, in production, your application will emit logs, and you can see
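The two debugging commands discussed above, in command form; the pod name nginx is the example from the demo.

```shell
kubectl describe pod nginx   # full status, events, and any errors of the pod
kubectl logs nginx           # application logs from the pod's container
```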
those logs using kubectl logs nginx. if I log into this cluster one more time and hit the nginx server, you will notice the logs appear in kubectl logs nginx as well, but for now that's okay. so the interview question is: how do you debug pods, or how do you debug application issues in kubernetes? your two go-to commands are kubectl describe followed by the name of the pod, and kubectl logs followed by the name of the pod: describe explains the complete details of your pod, including any issues, and to verify the logs of your pod you use the kubectl logs command. so this is the video for today. I request everyone watching to practice everything we learned today, because the complexity will increase going ahead: like I told you, we will learn about deployments and services, and we will talk about auto healing and auto scaling, for which it is very important for you to practice today's session and also watch the previous kubernetes videos. if you liked today's video, click on the like button, don't forget to share this video with your friends and colleagues, and I'll see you in the next video; take care, everyone, bye. hello everyone, my name is Abhishek, and welcome back to my channel. today is day 34 of our complete devops course; congratulations, we have already reached day 34 of the 45-day devops journey, and in this class we'll be talking about the kubernetes deployment. from day 30 to 33 we tried to understand in depth the kubernetes architecture, how kubernetes compares with docker, and kubernetes installation on-premise as well as in the cloud, and today, on day 34, we will learn about the kubernetes deployment. to understand what a kubernetes deployment is, everyone must have watched the previous video, day 33; it is very
important, because we talked about kubernetes pods. let's try to understand the difference right here: if kubernetes can do things with a pod, if you can deploy your application onto kubernetes as a pod, then why do you require a deployment? so the comparison we are going to look at is the difference between a container, a pod, and a deployment. this is an interview question as well: people will ask you in an interview what the difference is between a container, a pod, and a deployment. you might feel this is a very entry-level question, but if you can't answer it, your interviewer will immediately understand that you don't have experience with kubernetes. basically, for containers, as we have seen from day 23 to day 30, you can create containers using any container platform; let's say you have created a container using docker. to run this container, what you usually do is provide the specifications on the command line: you say docker run -it, or -d if you want to run in detached mode, followed by the name of the image; then, if there is a port, you expose it using -p, if there is a volume, you use -v, and if there is a network, you use --network; you pass a bunch of options like this. that is how a container works, and I'm not going into the full workflow of writing a dockerfile, building a docker image, and running the container; let's just assume this is how you run a container on the docker platform. what kubernetes said is: let me modify this process and bring an enterprise model to it. instead of writing all of these things on the command line, you can create a yaml manifest, and inside this yaml manifest you can define all of the things that you are defining here on the
command line. You can just say what the things are that are required: what is the container image (you provide the container image here), what is the port that you want to run this specific container on, what volumes you have, and what network. Everything you can provide in the YAML manifest. So a pod.yaml, or a pod YAML manifest, is nothing but a running specification of your Docker container; it is just the running specification, the parameters that you require to run a container, as a pod. The only difference here is that a pod can be a single container or multiple containers. Why would you create multiple containers? Because, let's say, you have an application that is dependent on another application without which it cannot run, or you have a container that is your actual application container and alongside it you have written your API gateway rules or load balancing rules in sidecar containers. In such cases you can put both of them inside a single pod. A popular use case is service mesh: in service mesh you have a sidecar container alongside your actual container. What is the advantage? If you use a pod, both of them can share the same networking, so they can communicate using localhost, and both of them can share the same volume or storage. Okay, so this is about the pod. Now finally, what is a deployment? You might ask me: Abhishek, in day 33 we already saw how to create a pod, how to deploy an application using a pod, we deployed the nginx application; now why do we have to transition from a pod to a deployment? If you can deploy an application as a container in Kubernetes using a pod, what is the purpose of a deployment? This is a very valid question, right? So your
interviewer can also ask you this question. The answer is very simple. Like I always told you, from day 23, or from day 30 when we started learning Kubernetes, Kubernetes offers you some things which are the reason people move from container platforms like Docker to Kubernetes. What are the two important things that I told you? The first is the auto healing behavior, and the second is the auto scaling behavior. So does a pod have the capability of implementing auto healing and auto scaling? No. A pod is, not equivalent, but somewhere similar to your container, because a pod is doing nothing more than providing a YAML specification for running your container; or in some cases a pod can run multiple containers and offer some advantage there, because those containers can share networking and storage. But the thing a pod cannot do, and which is very important, is the auto healing and auto scaling capability. So who offers these things in Kubernetes? These things can be done using a deployment. If you want zero-downtime deployments, or if you want to bring in auto healing and auto scaling, then you should never deploy your applications as plain pods in Kubernetes; instead you should deploy them as a deployment. And what will the deployment do? At the end of the day it will deploy a pod only, but when you create a deployment resource, it will create an intermediate resource called a replica set, and then the replica set will create the pod. For now, forget about the replica set, because I'll teach you about it as we progress in the video. So the practice you have to follow, what Kubernetes suggests, is: do not create pods directly.
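Since the discussion keeps referring to a pod YAML manifest without showing one, here is a minimal sketch of what such a manifest typically looks like (the name, image tag and port are illustrative, not taken from the video):

```yaml
# pod.yaml -- the YAML equivalent of the docker run flags discussed above
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2   # the image you would otherwise pass to docker run
    ports:
    - containerPort: 80   # roughly the equivalent of the -p flag
```

You would apply it with `kubectl apply -f pod.yaml`, the same way the demo later in this section does.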
At the end of the day you will be creating a pod only; that's why we saw in day 33 how to create a pod and what a pod is, and you have to know the concept, but do not create it directly; create it using a deployment resource. So what is this deployment resource, and what does it do? Firstly, it will create something called a replica set, which is a Kubernetes controller, and then the replica set will roll out your pods. Now why do you need this intermediate resource? The thing is, inside your deployment you can just say what number of replicas of your pod you require. Why is this required? In some cases you do not want just a single replica of your container; sometimes your load will be too high and you might want to expose your application to multiple concurrent users, so you can say 100 users should go to replica one of the pod and 100 users should go to replica two; that means you are implementing what you can call high availability, or load balancing, or whatever the general terminology is. So inside your deployment YAML manifest (a deployment is again a YAML manifest, because in Kubernetes everything is a YAML manifest) you can just say the replica count is two. But when you say this, there has to be something in Kubernetes that ensures you actually get the two replicas you asked for. The deployment will create the pods, that I have already told you, but if we go back to the topics of auto healing and auto scaling: what does auto healing mean? If you say you need two replicas, the deployment will create two pods using the replica set, but what the replica set additionally does, because it is a Kubernetes controller, what it will always
do is ensure that there are two pods, even if some user deletes one of them. So even if a user accidentally deletes one pod and says, oh no, now there is only one pod, Kubernetes will say: don't worry, because you have submitted a deployment YAML manifest to me, I implement auto healing using this replica set controller, and it will always ensure the number of replicas stated in the manifest. If you are not understanding this, wait for the demo; in the demo I will show you live how this works. So the end process is: you will create a deployment, this deployment will roll out a replica set (called RS for short), and the replica set will create the number of pods that you have mentioned in the deployment YAML manifest. What this replica set does is ensure that what the user has provided in the YAML manifest is implemented; that is the auto healing capability. If you set the replica count to two, or if you set the replica count to 100, this replica set will always ensure that there are 100 replicas of your pod on the Kubernetes cluster, so that maybe a million users can use it in parallel. If a user deletes one and makes it 99, the replica set will say: no, no, the deployment told me the pod count has to be 100, so let me put it back to 100. And this is also how zero-downtime changes happen: tomorrow, let's say you want to increase the replica count from 100 to 150; you can just go to the YAML manifest and change the replica count from 100 to 150 (I'll show you how a deployment YAML looks, but for now), and then the RS will say: oh, there is a new change in the YAML manifest, so I have to increase the pod count from 100 to 150, let me create 50 more pods, that is, 50 more replicas of your pod. So this is how a deployment works: it will create a
replica set, and this replica set will create the pods for you. And this replica set is a Kubernetes controller. If you are hearing the term "Kubernetes controller" for the first time, don't worry, you'll get acquainted with it, because in Kubernetes we deal with a lot of controllers. Controllers are something that maintains state: a controller always ensures that the desired state is present on the actual cluster, that means the desired state and the actual state on the cluster are the same. Anything that exhibits this behavior in Kubernetes is called a controller. There are some default controllers in Kubernetes, and you can also create custom controllers; Argo CD and admission controllers are examples of custom controllers that you create, whereas the default controllers that ship with Kubernetes ensure that the actual state is always the same as the desired state. So whenever you hear the term controller, just understand: a controller is something that ensures that whatever the YAML manifest says has to be there is always there on the Kubernetes cluster. Now, that is the introduction. The popular interview questions here will be: question number one, what is the difference between a pod versus a container versus a deployment? If you are not able to answer, go back and watch this specific slide, where I clearly explained container versus pod versus deployment. And question number two will be: what is the difference between a deployment and a replica set? People get confused here, but don't worry, it's very simple. A replica set is basically a Kubernetes controller; it is the one implementing the auto healing feature of your pods. If a pod is getting killed, or if a deployment says increase the pods by one, who is doing this? The ReplicaSet controller.
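To make the deployment, replica set, pod chain concrete, a minimal deployment manifest might look like the sketch below (the names, labels, image and replica count are illustrative; the structure follows the standard example in the Kubernetes documentation that the video points viewers to):

```yaml
# deployment.yaml -- the deployment creates a replica set,
# and the replica set creates (and heals) the pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2               # desired state the ReplicaSet controller enforces
  selector:
    matchLabels:
      app: nginx            # must match the pod template labels below
  template:                 # pod template: the same fields a pod.yaml would carry
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```

Changing `replicas` here and re-applying the file is exactly the scale-up scenario described above.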
So the replica set controller is the one actually implementing the auto healing capability, by ensuring that the desired state in the deployment YAML manifest is what is actually on the cluster. The desired state provided in the YAML manifest always has to match the actual state, and when you create a deployment, a replica set is created automatically, which is responsible for this controller behavior in Kubernetes. So this is it; now let us try to see this practically, and don't get confused, it's a very simple topic; even if you just refer to the Kubernetes documentation you can learn about deployments in 30 minutes, not more than that. So let me stop here and share the screen. Now let's take a terminal and try to implement this live. Let's say I'm new to Kubernetes and I don't know anything; the only thing I know from the last classes is that with the kubectl command you can interact with Kubernetes. So you have just created a Kubernetes cluster; it can be a minikube cluster, or clusters on AWS using kops like I showed you; minikube, as we have seen, is very simple to create. I'm assuming all of you have a Kubernetes cluster and kubectl configured. Now if I do `kubectl get pods`, at this point of time there is something, so let me delete it so that the demo will be clear. I have one deployment, and using kubectl I'll just delete it so that we are ready for our demo: `kubectl delete deploy` with that specific name. Now if you notice, `kubectl get pods` shows there are no pods, and `kubectl get deploy` shows there are no deployments. In real-world scenarios you cannot enter multiple commands like this each time; you can just say `kubectl get all`, and it will list everything, the deployments, pods, services, all the default Kubernetes resource types, in that particular namespace.
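As an aside before the demo continues, the controller behavior described above, reconciling actual state to desired state, can be sketched in a few lines of Python. This is purely illustrative: real Kubernetes controllers are Go programs watching the API server, and the function and pod names here are made up.

```python
# Toy reconcile loop: keep the actual pod count equal to the desired count,
# the way the ReplicaSet controller does.
def reconcile(desired_replicas, pods):
    """Return the pod list after one reconcile pass."""
    pods = list(pods)
    while len(pods) < desired_replicas:   # a pod was deleted or crashed
        pods.append(f"pod-{len(pods)}")   # spin up a replacement
    while len(pods) > desired_replicas:   # the replica count was scaled down
        pods.pop()                        # remove an extra pod
    return pods

# Someone deletes one of three pods; the controller heals back to three.
print(reconcile(3, ["pod-0", "pod-1"]))  # → ['pod-0', 'pod-1', 'pod-2']
```

The real controller runs this kind of loop continuously, which is why the demo below shows a replacement pod being created the moment one is deleted.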
Okay, perfect. So this is one interview question again: if somebody asks you how to list out all the resources available in a particular namespace, you can just say `kubectl get all`, and if you want it for all the namespaces, just say `kubectl get all -A`; it will list out all the applications in your cluster across all namespaces. But for now, just because I was running that command, I thought of explaining it. If we go back to the specific course for today: we stopped at Kubernetes pods, so I have a pod.yaml; let me open this pod.yaml. This is the same thing we saw in the last class: just a simple Kubernetes pod, the example we copied from the Kubernetes documentation. What is it doing? There is a simple nginx image. Let us try to create it. How do you create it? `kubectl apply -f pod.yaml`. As soon as we apply it, this will be created on your Kubernetes cluster. Let us see if it got created: `kubectl get pods`; awesome, it got created. How do we check the IP address of this? Just add `-o wide`: `kubectl get pods` will give you some information, and `kubectl get pods -o wide` will give you all the information about the pod, or you can also use `kubectl describe`; anything is possible. So with `kubectl get pods -o wide` you got the IP address. Now, to access this pod, I need to log into my Kubernetes cluster. Because my Kubernetes cluster is minikube, minikube just says enter the command `minikube ssh`; but if you are using a remote Kubernetes cluster, you have to use `ssh -i` with your identity file, followed by the node name or the IP address of the node, to log into your Kubernetes cluster. Because minikube makes our life easy for development, it just says enter `minikube ssh` and it will
convert the command accordingly and you will log into the Kubernetes cluster. So now just say `curl` with that specific IP address; your application is running. This is something we saw in the last class as well. Now I'll show you something that will make you understand why deployments are required, the same thing that I explained in the theory. Just say `kubectl delete pod`; what was the name of the pod? Sorry, I forgot; `kubectl get pods`, okay, let me copy this: `kubectl delete pod nginx`. Let's say someone performed this action accidentally, someone deleted a pod on your cluster, or let's say for some reason your pod got deleted because of some network issue. Now the customer who is trying to access your application (usually customers won't access it using `minikube ssh` and all, because they are external people outside your Kubernetes cluster; in future classes, when we learn about Ingress and about services, you will understand how that happens in real time, but for now, because we are still on the concept of a pod, assume you have done `minikube ssh`) tries to access the same application that we did, using the IP address; I think I forgot the IP address, yeah, `curl 172.17.0.3`. Now you will notice that the application is not reachable, because you have killed the pod; the application is gone. Now you should ask me: then what is the advantage of Kubernetes? The same thing was happening in Docker also; you told me Kubernetes is a very robust platform, that Kubernetes supports auto healing and auto scaling. Wait: Kubernetes supports all of that, but you have to create the correct resource. You have created a pod; instead, you have to create a deployment. Now the next question will be: Abhishek, but this syntax is very long, how do I remember all of these things? Don't worry, nobody remembers all of these things, and it is also not suggested to remember them. What you
need to do is just go to the official Kubernetes documentation, or any examples that you want to follow (you are open to follow any specific website), go to Deployments, and you have an example there. In the future, if you want to deploy your application, you can simply modify the image in that example, and if your application has volumes or anything specific, you can take an example from the Kubernetes documentation itself; there are a lot of examples, I'll show you. You can pick the right example and then just update the fields that are required. That is how you have to deal with it; don't memorize all of this syntax, because it is a waste of time. In your interview nobody will ask you to write the syntax; people will ask you what the image in a container is, what labels and selectors are and what their role is, or what the role of replicas is. That is what people will ask you. So I have this same example on my cluster as well, and if you see here, this is what I am telling you: inside the deployment you will say how many pods you want to create, whether one pod, two pods or three pods. For example, I'll show you that I want to create only one pod for now. Now let us see what happens as soon as I create the deployment: `kubectl apply -f` (or `create -f`) `deployment.yaml`. As soon as I do this, the deployment is created; but the magic is, with `kubectl get deploy` the deployment is there, and you will also notice, when you do `kubectl get pods`, that a pod is also created. This is what I was telling you. Who created this pod? Like I told you, the ecosystem is: once you create a deployment, it will create something called a replica set for you, and the replica set will create a pod for you. We can see this: if you do `kubectl get deploy`, you notice that the deployment is there; then just say `kubectl get rs` and you'll see that the replica set is also there; rs is the
shortcut for replica set. And then when you do `kubectl get pods`, your pod is also created. But what is the deployment? A deployment is an abstraction; that means you don't have to create the replica set and you don't have to create the pod. The deployment says: just create one resource, deployment.yaml, and I'll take care of everything for you, because I am responsible for implementing auto healing and zero downtime in Kubernetes. But the deployment will not do it directly; it takes the help of the replica set, and the replica set is the Kubernetes controller that is actually doing it. What is a Kubernetes controller? A Kubernetes controller is nothing but a Go language application that the Kubernetes project has written, which ensures that a specific behavior is implemented. In this case, the behavior is that the desired number of replicas inside the deployment has to be available on the cluster. I'll show you live: let's take two terminals. I took two terminals here, and let us see it live. Let me type `kubectl delete pod` followed by the name of the pod, and before I hit enter, in the other terminal I'll watch the pods: `kubectl get pods -w`. When you use `-w`, that means you are watching; it will show you live what is happening with the pods. As soon as I hit enter, you will notice that the pod is getting deleted, but see the magic that the replica set is doing for you: it initiated the terminating signal, but even before the termination is done (it is just terminating, not terminated yet), a new container, that means a new pod, is getting created. You see both of the actions taking place in parallel, terminating and running; that means the termination and the creation are happening in parallel. If there is a malicious user, let's say I am a malicious user or I
am a wrong person who has deleted your pod, then even without your consent the replica set, because it knows that the deployment told it the desired replica count for the pod is one, ensures that the pod is always in a running state; even if someone deletes it, there is one pod available. If you just do `kubectl get pods`, you will notice the same behavior: the pod is running. Now let me increase the pod count and show you. Just say `vim deployment.yaml` and let me increase the pod count to three. Now again let me apply this manifest: `kubectl apply -f deployment.yaml`. You can also use the `kubectl edit` command, but apply is easier; that's why I just modify the YAML file and then use the apply command. So, `kubectl apply -f deployment.yaml`; now let us again watch the pods. What is the expectation here? The replica count should be increased to three, and who has to do it? The replica set. Let us see if the replica set is doing it; the apply said "configured". Now let us see what is going to happen: if you see here, there are three pods. Who created these three pods? Again, the replica set. The deployment is just a wrapper, a high-level abstraction; the deployment by itself will not do anything for you, and who does the things for you? The replica set controller. Now let me delete one of the pods and see what happens. `kubectl get pods`; there are three pods, so let me delete one of them randomly, and again what the replica set has to do is make sure that three pods are running; irrespective of whether you delete one pod or two pods, it always has to ensure three pods are running, because it is the Kubernetes controller responsible for keeping the state, the controller responsible for auto healing. Let me hit enter. Now let me see, is the pod deleted? Okay, I just said get; sorry, I have to
do the delete operation; I was just confused about why Kubernetes was not showing anything. Yeah, delete the pod, and now let us see what happens. See, again the behavior is the same: even before the deletion completes, or rather in parallel, deletion and creation have happened. That is the beauty of Kubernetes. If you say `kubectl get pods`, you'll see that the three pods are running. Awesome, right? So this is how Kubernetes implements the auto healing capability using a deployment, a replica set and pods. In real-world Kubernetes, in production scenarios, you will never create a pod directly; what you will do is create a deployment, this deployment will create a replica set for you, and the replica set will create the pods for you. This is how Kubernetes works in real time. So your assignment for today will be: create a deployment; take the same example, replace the image, and play with Kubernetes like I showed you. Kill a pod and see what happens, create a new one, increase the replicas, and see whether replica sets are getting created or not. If you see here, `kubectl get rs`: this is the replica set; you have not created it, but it is automatically getting created, and that is what is creating pods for you. Understand this behavior and keep playing with it. Take more examples of deployments; you can just search for "kubernetes deployment examples" on GitHub, and you'll notice a bunch of Kubernetes examples. There is the official Kubernetes examples repository with plenty of examples; just take the guestbook example and choose any of the manifests you want, for example the all-in-one one, all-in-one.yaml I guess, and there you have a deployment. You can find a bunch of examples on the internet; just play around with them, because this is what you will do in real time as well: on a day-to-day basis you will not
create pods directly, but you will create deployments; whether you create these deployments directly or put them in Git, that is for the future, but for now you have to understand this concept: how Kubernetes does zero-downtime deployment. What is zero-downtime deployment? If you see here, I increased the replica count from one to three, but it happened without disturbing the existing pods; even when I deleted one pod, it did not disturb the existing application, no live traffic was disrupted, because parallel creation and deletion took place, so the user will not face any disturbance. Of course there is the role of the service and the role of Ingress, which we are going to learn in the future, but up to this point you have to be clear with the concept. I hope you enjoyed the video; if you liked it, click on the like button, if you have any questions, put the timestamp and ask me, and if you feel there is someone who would benefit from these videos, please share them. Thank you so much, I'll see you in the next video, take care everyone.

Hello everyone, my name is Abhishek and welcome back to my channel. Today we are at day 35 of our complete DevOps course, and in this class we will try to learn about Kubernetes services. A service is a very critical component of Kubernetes. Like I told you, in production scenarios we don't deploy a pod, we usually deploy a deployment; this is what we learned in the last class. Similarly, once you deploy a deployment, for each deployment most of the time you will create a service in the world of Kubernetes. Why do we create this service, and what is the importance of a service? Let's try to understand in today's class. Before we learn anything, what we usually do is try to learn the why aspect of it: why do we need a service in Kubernetes, and what happens if there is no service in Kubernetes? So
let's talk about the scenario of no services. Everything I am going to talk about now assumes: what if there were no concept of services in Kubernetes? What would happen? Usually, like in our previous classes, a developer or DevOps engineer would deploy his pod as a deployment in Kubernetes, and what would the deployment do? It would create a replica set, and the replica set would create a pod; if the pod count is one, it would create a single pod, or if the replicas are multiple, it would create multiple replicas. Let's say we have the requirement of creating three replicas: replica one, replica two and replica three. Why do we need multiple replicas of a pod? For a general understanding: if there is one user, then you don't need them, but let's say there are 10 concurrent users. Concurrent means people trying to use it at the same time; for example, you and I might use WhatsApp at the same time, and similarly there can be thousands of users trying to access WhatsApp at the same point of time. If every request goes to only one particular pod, that pod will go down because it is getting too much load. That's why you create multiple replicas, and the number of replicas depends upon the number of users trying to access your application and also the number of connections one particular pod can take. So if somebody asks you what the ideal pod count is, what will you say? It depends upon the number of concurrent users and the number of requests one replica of your application can handle. If one replica of your application can handle 10 requests at a time and you have 100 requests coming in, then you have to create 10 pods. You have to take this decision as a DevOps engineer; along with the
developers, you have to sit together and take this decision. Now, not to deviate: let's say there are three pods, and the problem is that one of these pods has gone down for some reason, some network issue; in the world of Kubernetes, in the world of containers, a pod going down or a container going down is very common. But what is the advantage of Kubernetes? Its auto healing capability. Why did we move towards Kubernetes? Because Kubernetes has this auto healing capability. Containers are ephemeral, so if a container dies, it does not come back up; similarly, if a pod goes down, it will not come up automatically unless you have the auto healing behavior implemented by the deployment, or rather by the replica set controller, in Kubernetes. So let's say you have auto healing in place. As soon as this pod goes down, the replica set says: don't worry, I am here, let me create one more copy, and this copy will be created even before the actual one is deleted, or in parallel. So you have the pod back, but the problem is that when it comes up, the IP address will change: let's say previously the IP addresses were 172.16.3.4, 172.16.3.5 and 172.16.3.6, something like that; previously this pod was 172.16.3.4, and this time it might come up as 172.16.3.8. So the application came up, but the IP address of the application has changed, and remember we are talking about the scenario where there is no services concept in Kubernetes. What happens is, you have to share your application's IP addresses with your test team, with other projects that are using this application, or with third-party applications, and what they usually do is try to access the application. Let's say there are three teams trying to access this application, or
three people trying to access this application. You said: for user number one, this is the IP address; user number two, this is the IP address; user number three, this is the IP address. As a DevOps engineer you thought: I created a deployment, which created a replica set, which created three replicas of pods, and there are three users, so even if they use my application in parallel it is accessible, because I created three replicas of the pods. For one person you said 172.16.3.4, for another team you said 172.16.3.5, and for the others you said 172.16.3.6. You are under the assumption that everything is right, but now, even though you have the auto healing capability of Kubernetes, the IP address has changed: these are pod one, pod two and pod three, but one of them has the new IP address 172.16.3.8. So user one, or project one, let's say there are 10 people in project one trying to test this application, says: your application is not reachable, your application is not working. And as a DevOps engineer you are arguing: no, no, my application is there, I can see my application, you are doing something wrong. At the end of the day you realize, after debugging, that he is trying to send requests to 172.16.3.4, but the IP address of your application is now 172.16.3.8. So neither he is wrong nor you are wrong, because you implemented auto healing and he used the same IP address that you gave him. This is the problem. And if you look at the real world, it will never work like this. All of us use google.com on a day-to-day basis; will Google ever tell you to access its application on an IP address like 100.64.2.7, and tell another user, okay, access me on 172.16.3.9? Let's say Google has 100 replicas; Google will never tell you that one million users should access this particular IP address and another million
people should access that specific IP address; it doesn't work like that. So what is the concept here? The concept is that Google does load balancing, and I told you when I introduced you all to Kubernetes that there is a concept called load balancing in Kubernetes, which I said I would teach you later. So what you will tell this user, project one, is: do not access using these IP addresses; just like you created a deployment for this, on top of it I will create something called a service (the shortcut for service is svc), and instead of accessing those pods directly, try to access the service. So now, instead of user project one accessing 172.16.3.4: let's say there are three replicas, and let's write down all the IP addresses, 172.16.3.4, 172.16.3.5 and 172.16.3.6; these are the three IP addresses that you got from the Kubernetes cluster, and there are three projects, user project one, user project two and user project three.
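A minimal service manifest for this hypothetical payment application might look like the sketch below (the name, namespace, label and ports are illustrative assumptions, not values from the video):

```yaml
# service.yaml -- a stable name (payment.default.svc) in front of the pods
apiVersion: v1
kind: Service
metadata:
  name: payment
  namespace: default     # yields the DNS name payment.default.svc
spec:
  selector:
    app: payment         # traffic is forwarded to pods carrying this label
  ports:
  - port: 80             # port the service exposes
    targetPort: 8080     # port the payment container listens on
```

Clients then talk to the service name rather than to any individual pod IP.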
previously you were giving them this IP addresses and you are asking them to access the application using the IP addresses but what was going wrong when the Pod was going down you have the auto healing Behavior but the problem is that the auto healing behavior when it spins UPS a new part the IP address was changed from 170 to 16 3.4 to 3.8 this can happen to this specific pod as well and this can happen to this specific Port as well so what you will do is instead of this Behavior instead of giving them each and every IP addresses you can simply change this Behavior by creating a service on top of the deployment so if you say that this is a deployment that has created three parts using a replica set on top of this you will create something called as a service okay and what this service does is it acts as a load balancer how does it access a load balancer it uses a component in kubernetes that is called as Q proxy now let's quickly not go into it because you will get confused for now let's assume that service is doing it ignore about Q proxy for now okay so what service is offering is load balancing and you will tell these three user projects that instead of accessing the IP address and this IP address can change very frequently so what you will do is access me using the service name so what these people will do or what this development teams will do instead of accessing the payments applications on the specific IP addresses they will say payment Dot default dot SVC okay so let's say this is the name of the service that kubernetes provided as soon as you create a service what kubernetes does is if this is your name of the service this is the namespace and this is dot SVC so kubernetes will give you something like this and you can tell them that okay you can access my applications on this specific IP address that is the service IP address or the load balancing IP address so these people will try to access these applications on the same IP address okay so everybody 
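A minimal sketch of what such a Service manifest could look like — the name `payment`, the `default` namespace, and the port numbers here are assumptions for illustration, not taken from a real cluster:

```yaml
# Illustrative Service manifest; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: payment        # resolves inside the cluster as payment.default.svc
  namespace: default
spec:
  selector:
    app: payment       # traffic is forwarded to pods carrying this label
  ports:
    - port: 80         # port the service itself listens on
      targetPort: 8080 # port the payment container listens on
```

Once applied, cluster-internal clients would reach the application at `payment.default.svc` (or the fully qualified `payment.default.svc.cluster.local`) instead of at any individual pod IP.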
Under the hood, what this service, using kube-proxy, does is forward the requests: say ten requests come from each of the three teams, the service distributes them, ten here, ten here, ten here, across the three pods. So this is the first problem you would have faced without a service: if there were no services concept in Kubernetes, then even with the auto healing capability, even with deployments and pods, your application would fail terribly for some users whenever a pod goes down and comes back with a new IP address. And who solves that problem? The service. So the first advantage of a service that we learned is load balancing. Now from that explanation you should immediately get a question. Let's go back to the diagram. What you should ask is: okay Abhishek, you gave them this service URL instead of an IP address, but shouldn't the service face the same problem? If user project one could not reach the pod because its IP changed from 172.16.3.4 to 172.16.3.8, and the service is just taking requests from the user and forwarding them, then the service should still be sending requests to 172.16.3.4 while the new IP address is 172.16.3.8. The ten requests the service forwards to 172.16.3.4 would fail, and again project one, trying to access the pod through the service, would fail terribly because no traffic reaches the pod. This is the second problem a service solves, and it is called discovery — the second advantage you get from a service is service discovery. The service says: I understand that if I keep track of a deployment — say this service tracks a deployment creating three pods — by pod IP addresses, and one of those IPs changes, the problem is not solved at all. So I will not bother about IP addresses; I will use a new mechanism called labels and selectors. How does the service do service discovery? Unlike the previous example, unlike manually keeping track of IP addresses, which can change any number of times: even if a service could track two or three pods that way, what if there are a thousand pods? Companies like Google might have a thousand pods, or fifty to sixty; if the service manually tracks all the IP addresses, the same problem arrives. That is why the service uses labels and selectors. For every pod that gets created, the DevOps engineers or developers apply a label, and this label is common for all the pods of the application. Say the application is payments: you give every one of these pods the label payment. Now the service says: I will not bother about IP addresses, I will only watch for pods with this specific label, payment. A pod can go down a hundred times or a thousand times and come up with a new IP address, I don't bother, because I am only watching the label. And why will the label remain the same? Because the replica set controller deploys the new pod from the same YAML it already has — that is auto healing. So if a service keeps track of your pods using labels instead of IP addresses, and the label is always the same, the problem is solved. This is the service discovery mechanism of a Kubernetes service: it does discovery using labels and selectors, and that concept is exactly why the Kubernetes service has such a good service discovery mechanism. I hope you got this answer; let's go back to the previous slide to make it even clearer. I'll draw a new diagram. The end-to-end mechanism is: you create a deployment, and how do you create a deployment? You write a YAML manifest. As a DevOps engineer, along with all the required specification, inside the metadata of your deployment's pod template you create something called a label — a label is just a tag, for example app: payment. Now this deployment creates a replica set, and the replica set, say with two replicas, creates pod one and pod two, and both pods carry the label; if you run kubectl edit pod or kubectl describe pod you can see that for this pod there is a label app: payment, and similarly for the other one. Perfect. Now let's say one of those pods has gone down — I might repeat this multiple times, but it is very important. The IP address will change, but the replica set says: I have the YAML manifest, and according to it the pod has to be created with this specific label, so even if the pod goes down a hundred times, a hundred times it comes up with the same label. So what we do from today's learning is create a service, because the service offers the load balancing that is required, and along with that, instead of keeping track of IP addresses, it keeps track of this label. Whenever a new pod is created, say you increase the replicas from two to three, the new pod also comes up with the label app: payment, and the service understands: oh, there is a new pod, I have to keep track of this one as well. This is how a service maintains its service discovery process. This is very important; if interviewers ask you about it, you should be able to answer. So that is the concept of labels and selectors: the first advantage we learned is load balancing, the second is service discovery. Now let's learn the third thing, the other important thing a service can do — any guesses? It can also expose your application to the external world. Don't worry, we will do practicals and demos of each and every thing; even if in today's theory you are waiting for the demo, in tomorrow's class we will do a detailed demo of services. I don't want to hurry through the theory and jump to the demo, because practicals matter as much as theory, and if you understand the theory well, the practicals become very easy. So we have covered two things, and I hope both are clear: load balancing and service discovery.
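The labels-and-selectors flow described above can be sketched as a hypothetical Deployment manifest — the name, image, and port below are placeholders, not a real application:

```yaml
# Hypothetical Deployment; every pod the replica set creates is stamped
# with the labels from spec.template.metadata.labels, so even a pod
# recreated by auto healing keeps app: payment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment
  labels:
    app: payment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payment          # the replica set manages pods with this label
  template:
    metadata:
      labels:
        app: payment        # survives pod restarts, so the service keeps finding it
    spec:
      containers:
        - name: payment
          image: example.com/payment:v1   # placeholder image
          ports:
            - containerPort: 8080
```

Because a Service selects on `app: payment` rather than on pod IPs, scaling from two to three replicas or replacing a crashed pod changes nothing from the service's point of view.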
The third thing is exposing your application to the world. What is this? In yesterday's class we saw that whenever we create a deployment, the pod that gets created comes up with an IP address like 172.16.3.4, and whether you access it by SSHing into minikube, or into the master or a worker node of a cluster you created, what is actually happening is that only whoever has access to the Kubernetes cluster — minikube, kOps, EKS, anything — can log in and hit the application. But that is not a real-world scenario: you cannot ask your customer to SSH into your machine, log in to Kubernetes, and access your application on a pod IP. Will Google ever ask you to go through such a complicated process? With google.com, wherever you are in the world, you don't need SSH or anything else, you directly access https://google.com. That is what you want, and it is something Kubernetes cannot offer you through deployments alone. A deployment can create a pod, and a user could SSH into your cluster, then into a master or worker node, and access the application, but your end user might be sitting somewhere in Italy or Austria, and you cannot tell them: you cannot reach my application directly because you are not in my network, first join my network, use a VPN — you cannot say all of that. So what the service additionally does is expose your application: it can make the application accessible outside the Kubernetes cluster, outside the K8s cluster. How does the service do it? Whenever you create a Kubernetes service resource in the YAML manifest, you are given three options — we will see this live, don't worry. You can create the service of three types: type one, ClusterIP; type two, NodePort; type three, LoadBalancer. There are other types as well, like headless services, but I am not talking about those here; these are the standard types: ClusterIP, NodePort, and LoadBalancer. If you create the service in ClusterIP mode, which is the default behavior, your application is still accessible only inside the Kubernetes cluster; nothing changes for you except that you get the two benefits we talked about so far, service discovery and load balancing. If you create a service of type NodePort, the service allows your application to be accessed inside your organization: anybody within your organization or network who may not technically have access to the Kubernetes cluster itself, but does have access to your worker node IP addresses, can reach the application. To put it very simply, whoever can reach your nodes' IP addresses can access the application when the service is of type NodePort. Finally, the LoadBalancer type: with it, the service exposes your application to the external world. Say you have deployed everything on an EKS cluster: if you create a service of type LoadBalancer, you get an Elastic Load Balancer address for that specific service, and whoever wants to access the application can use it from anywhere in the world, because the elastic load balancer address is public.
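As a rough sketch, the three exposure modes differ only in the `spec.type` field of the Service manifest (the name and ports are assumed for illustration):

```yaml
# Sketch of the three standard exposure modes; only spec.type changes.
#   ClusterIP    -> reachable only inside the cluster (the default)
#   NodePort     -> reachable on every worker node's IP address
#   LoadBalancer -> cloud provider provisions a public load balancer
apiVersion: v1
kind: Service
metadata:
  name: payment
spec:
  type: LoadBalancer   # swap for ClusterIP or NodePort as needed
  selector:
    app: payment
  ports:
    - port: 80
      targetPort: 8080
```

The selector and ports stay identical across all three modes; the type only controls who can reach the service from outside.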
So anyone can access the application using that public IP address, got it? Like I told you previously with payment.default.svc — that is where your service name gets resolved — but when you create the service with type LoadBalancer, it depends on the cloud provider's implementation, and this LoadBalancer type only works on cloud providers. If you try it on minikube, or on any local Kubernetes cluster, by default it will not work. There is a project trying to make this work on minikube as well, but let's not go into those details for now; the solution for local clusters is something we will learn in future classes, when we cover Ingress. So, to repeat: if you create a LoadBalancer service type on your cloud provider, an Elastic Load Balancer with a public IP address is created, and you can access your application using it. If you create it in NodePort mode, whoever has access to your nodes inside AWS can access your application. With ClusterIP, nobody outside can access it; only whoever has access to the Kubernetes cluster can. Now I'll explain everything in one simple diagram so that you understand it in a much clearer way. Say this is your entire Kubernetes cluster, and what you have done is create a deployment, replica set, and pod, all of it inside a node — call it worker node one. Assume the cluster has two or three nodes, but for easy understanding I did not draw all of them. Now there is a customer. On top of this, like I told you, you create a service, and the service watches over the pods. Let's trace the customer's or user's flow depending on the type of service. Case one: you created this service as ClusterIP. Then the service says: don't worry about anything, the application should be accessible only to people who have access to this Kubernetes cluster. So there is a customer or user trying to access this application who is sitting outside your organization — say this side is the public internet and this side is your organization, something like you on your network and him on his home Wi-Fi, a very easy picture. He tries to reach the application but cannot, and even if he somehow has access to the organization, he still cannot reach it, because the application sits inside the cluster network and he has no access to that; it is simply not possible. Now say you created a LoadBalancer type service. Assuming this cluster is on AWS, the Kubernetes API server will notify AWS: EKS, I have a service of type LoadBalancer, can you give me an Elastic Load Balancer address, meaning a public IP address? And which component of Kubernetes does this? There is a component we learned about called the cloud controller manager, part of your Kubernetes control plane. The cloud controller manager generates a public IP address using the AWS implementation, and it returns that public IP address; now the service says: whoever wants to access these pods can use this public IP address. By the name itself it is public, so anyone who simply has internet access can reach the application, because the service type is LoadBalancer. Finally, you have NodePort mode. When you create a service of NodePort type, the user out on the public internet cannot reach it directly, but the service says: instead of allowing only people who have access to the Kubernetes cluster, because I am of type NodePort, I will allow access to anyone who can reach worker node 1, worker node 2, or worker node 3.
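A hedged sketch of the NodePort case — the port 30007 below is an assumption for illustration; if you omit `nodePort`, Kubernetes normally picks one from the default 30000-32767 range:

```yaml
# Hypothetical NodePort service: the application becomes reachable at
# <any-worker-node-IP>:30007 for anyone who can reach the node network.
apiVersion: v1
kind: Service
metadata:
  name: payment
spec:
  type: NodePort
  selector:
    app: payment
  ports:
    - port: 80         # cluster-internal service port
      targetPort: 8080 # container port
      nodePort: 30007  # assumed port; omit to let Kubernetes choose one
```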
So whoever can access the worker node IP addresses — say these worker nodes are EC2 instances, so whoever has access to those EC2 instance IP addresses — they can access the application. To summarize: if you create a LoadBalancer type, anybody in the world can access it; if you create a NodePort type, anybody with access to the worker nodes, the EC2 instances, or the VPC traffic can reach the pods or applications; and in the third case, ClusterIP mode, nobody outside has access, even with access to the VPC or the EC2 instances — only if you can log into the Kubernetes cluster and have access to the network inside it, that is, the container network, flannel, Calico, or whatever you have configured, can you reach the application. These are the three modes; this is how a Kubernetes service works. So what are the three advantages, going back? The first advantage a Kubernetes service offers you is load balancing, the second is service discovery, and the third is exposing your applications to the world. I explained each of them with examples, and I hope you understood; if not, go back to the slide where I explained what happens if you don't have a service in Kubernetes and watch the video one more time. You have the auto healing capability the deployment gives you, so why do you need a Kubernetes service? I clearly explained it: the IP address changes whenever a new pod comes up through auto healing, so you need a discovery mechanism, and to manage the traffic between the pods you need a load balancing mechanism. Similarly, you may want to make these applications available to the internet, or to specific people in your organization — maybe you want everybody in the world to access the application because it is open source, or simply an application you want everyone to use. The best example is amazon.com. Going back to this slide: when would you choose ClusterIP mode, when NodePort mode, and when LoadBalancer mode? Say we are working for Amazon. If, as a DevOps engineer, you have to understand services in very simple words: working for amazon.com, you would create a service of type LoadBalancer — this is just an example, guys — so that anybody in the world can access amazon.com. There is one application, call it amazon.com; create a LoadBalancer type service for it and everybody in the world can access it (amazon.com uses load balancers, don't get confused, I'm just giving it as an example). If you want only people inside your organization — people with access to your VPC, your nodes — to access the application, you create a service of type NodePort. And if you want only DevOps engineers, only people with access to your Kubernetes cluster network, you create ClusterIP mode. Take this as an assignment: try to write a few lines, see if you understood the concept well, try to draw a diagram for it, and post it on your LinkedIn or your GitHub — that is how you can verify whether your understanding is right or wrong. I hope you enjoyed the video; if you liked it, click the like button, if you have any questions put them in the comment section, and if you have any feedback share it with me. Finally, don't forget to subscribe to my channel and share this with your friends and colleagues. Thank you so much, I'll see you in the next video, take
care everyone, bye. Hello everyone, my name is Abhishek and welcome back to my channel. Today we are at day 36 of our complete DevOps course, and in this class we'll be talking about Kubernetes interview questions, part one. Now, what is this and why have I decided to do this video? Ideally today's class was supposed to be the practical services implementation and an introduction to Ingress, but we have been learning Kubernetes for the past five to six days and have covered a lot of topics — Kubernetes architecture, the comparison with Docker, deployments, pods, containers versus pods — very interesting topics, and exactly the ones interviewers will ask you during an interview. So I thought I would check how much of the concepts you have grasped from the past videos. This is a really good exercise; you can consider it a mock interview or anything you like. Whenever I show you a question, I will not show the answer right away, so try to see how many you can answer before I reveal each one, and assess your score by yourself; if you want, you can also comment your score below. There is no competition here; the only point is to see how well you are retaining the topics, so you can use it as feedback or a retrospective for yourself. So without wasting any time, let's jump into the video. I have ten questions for you; let's see how many of them you can answer. Question number one — sorry for the stray lines, let me quickly clear the drawings — perfect. Question number one is: what is the difference between Docker and Kubernetes? Today's questions will be scenario-based, so pause the video here, assume you are in an interview, and see if you can answer the difference between Docker and Kubernetes. The answer — I also explained this in the very first class when I introduced you to Kubernetes — is that Docker is a container platform, whereas Kubernetes is a container orchestration platform. What does Kubernetes add on top of Docker? Containers are ephemeral in nature, which means a container can go down for multiple reasons, and if a container goes down your application is down, so the end user trying to access it sees a traffic loss. To avoid that, you move to a container orchestration platform like Kubernetes, which offers you auto healing and auto scaling, and because Kubernetes is itself a cluster — in production you can join multiple virtual machines into one Kubernetes cluster — it survives node failures. Docker, by contrast, is a single-node platform: you install Docker on one machine and start your containers there, and if that node itself goes down, say your laptop dies, your application is unreachable. Kubernetes, being a cluster, immediately moves the pod from the failed node to a different node. Finally, it has many enterprise capabilities, like load balancing, and it offers integration with custom resource definitions: you can deploy custom Kubernetes controllers developed by other people, for example Ingress controllers, which offer advanced capabilities — so in a nutshell, you can extend the capabilities of a Kubernetes cluster using custom resources as well. That is the primary difference between Docker and Kubernetes. If you have not understood this one, perhaps because you are watching this video before the previous ones, I highly recommend watching day 31, where I explained and compared Docker and Kubernetes in a full thirty-minute class with plenty of time on the comparison. If you were able to answer this question as if you were in an interview, you get one mark. Question number two: what are the main components of Kubernetes architecture? This is one of the most asked interview questions — go to any Kubernetes interview and the interviewer will definitely ask it, because Kubernetes has a lot of components, and when I explained the architecture around day 33 I took forty minutes, because it is a very important topic: whenever you plan to learn Kubernetes, you should understand how its components talk to each other and how Kubernetes maintains its robustness. In a nutshell, when somebody asks you this question, you say: at a very high level, I can divide Kubernetes into the control plane and the data plane. On the control plane you have components like the API server, which is responsible for handling the APIs and talking to the end users, and then you have
the scheduler which is responsible for scheduling the resources on the kubernetes cluster then you have etcd Etc is a kubernetes object store where you know all the resources of the kubernetes are stored uh as objects in kubernetes and then you have controller manager so controller manager is basically uh for example you have a replication uh replica set or replication controller so you know controller manager is something that takes care of this default controllers in kubernetes and then you have Cloud control manager so Cloud control manager is in the last class I explained you let's say you want to implement the kubernetes on any cloud provider for example Amazon has implemented kubernetes as managed service on eks platform so whenever you install this kubernetes cluster but these Cloud providers will do they will contribute to the cloud control manager and they will say like let's say you created a service of type load balancer so what happens under the hood is the cloud control manager has the logic that is written by the people at AWS which can spin up a load balancer IP address for you okay so when you create a load balancer service type you are getting a load balancer IP address on the AWS but who is generating this right so Cloud control manager is doing this with the help of the contributions from the people of AWS tomorrow if I write my own cloud then what happens is I have to go ahead and contribute to Cloud control manager so that kubernetes can like let's say somebody creates a service on my cloud then the cloud control manager can act and give you a load balancer IP address so this is a about the control plane or the master node components of kubernetes and then you have the data plane where you have three primary components one is cubelet one is Cube proxy and then the final one is container runtime so people also say there is one more component called Cube DNS but you can restrict yourself to here where you can talk about cubelet Q proxy and 
container runtime so cubelet you all know it is responsible for managing the pods let's say if pod is running in a healthy state or not a pod has to be restarted if the Pod has gone down then cubelet takes care of starting the Pod so cubelet is a component that is responsible for managing the pods on the nodes then you have q proxy Q proxy is a networking uh component of kubernetes uh which typically takes care of uh updating the IP tables for example you create a service of type node Port so what under the hood happens is the Q proxy is the one that understands that okay there is a service that is created of type node Port so I have to go ahead and update the IP tables in such a way that somebody access the node IP address call on a specific Port the request has to be sent to the Pod okay so Q proxy is the one that takes care of the networking finally you have container runtime what is container runtime so container runtime is nothing but for a container to run you need a runtime for example if you have a Java application and for Java application to run you have a Java runtime similarly for containers to run you have container runtime and kubernetes is not opiniated uh about this one like you can use Docker shim you can use container ID you can use Creo previously kubernetes was open created because it only used to support a Docker Sim out of the box okay but now you know out of the box nothing is supported you have to install the container runtime on each and every node okay then so here interviewer might ask you one question I have seen uh sometimes like when you say kubernetes is not using Docker Sim out of the box or kubernetes is not using Docker as runtime out of the box does that mean kubernetes is not supporting Docker no it supports Docker it supports Docker shim but nothing is available out of the box let's say previously when you install a kubernetes cluster on each worker node you used to get Docker Sim runtime out of the box but now it's up to you 
You can install dockershim, you can install containerd, you can install CRI-O — any container runtime that implements the Kubernetes container runtime interface. Let's not go into the details of it here, but if you want to understand them, you can watch my Kubernetes architecture video — that should be day 32 or 33. Then: what are the main differences between Docker Swarm and Kubernetes? This I haven't covered in my previous videos, but many people were asking about it in the comment section. Docker Swarm and Kubernetes — what is the difference, why do we use Kubernetes and when do we use Docker Swarm? Basically, if you look at popularity, Kubernetes is quite popular even compared against other container orchestration environments, whether that is Cloud Foundry, Mesos with Marathon, or Docker Swarm. Docker Swarm is a Docker-based solution, and the major difference is that Kubernetes is suited for the enterprise — large organizations, or even mid-scale organizations — whereas Docker Swarm is very easy to install and very easy to use, but it is only suitable for small-scale or very simple applications. The reason is that when you go for scaling, Kubernetes has multiple options, and when you go for advanced networking capabilities, Kubernetes handles them very easily — you can use Flannel, Calico, or SDN/OVN solutions; all of these things work with Kubernetes very easily, while with Docker Swarm the support is very limited. The other important thing is that there is a lot of third-party support for Kubernetes — the CNCF community, for example, has been very active — and because Kubernetes supports something called custom resource definitions, anybody can write a Kubernetes controller: if they feel Kubernetes is not supporting something, they can extend the capabilities
of Kubernetes, because it's all about installing and deploying a controller in Kubernetes, and you can extend the capabilities to whatever extent you want. So this is the comparison at a very high level: if you are looking for a mid-scale or large-scale solution, go for Kubernetes; but if you don't care about scale, you can choose Docker Swarm, because Docker Swarm is also very easy and simple to install and use. If you look at the market today, though, Kubernetes has far more openings — if you take 10 JDs in DevOps, 10 out of 10 will mention Kubernetes — so why would you go for Docker Swarm if you are learning about container orchestration environments? Then: what is the difference between a Docker container and a Kubernetes pod? Again, I took almost 30 minutes to explain this difference in one of the videos, so if you are trying to answer this question you should definitely answer it, and let us see how many people get this answer correct. Sometimes what happens is that you can answer the question, but you are not able to phrase the answer. If you take a lot of time thinking about the answer, the interviewer might feel that you don't know it, or that you are searching for the answer somewhere — because these days interviews are also not face to face, right? So these are some of the standard questions that you can expect in any interview; try to be ready with the answers for them. So, what is the difference between a Docker container and a Kubernetes pod? As I explained, a pod is nothing but a runtime specification: Kubernetes resources are basically written in YAML files, so in a YAML file you put together all of the things that are required for your
container to run, and that itself is a pod. The difference is that a pod is the lowest-level deployment unit in Kubernetes: in a pod you can create one single container or multiple containers, and if you have multiple containers, both of them can talk to each other within the pod using the same network, and they can also share the same storage and resources inside the pod. So that's the only difference between a pod and a container — you can simply say a pod is nothing but a runtime specification of a container. What is a namespace in Kubernetes? Again, many people were asking this question in the comment section. A namespace is a very simple concept. A Kubernetes cluster is used by multiple people in your organization, right? There are multiple projects, and you cannot create a separate production Kubernetes cluster for each project, because at the end of the day — let's say you have 20 teams working on 20 microservices — all of those 20 microservices together might form your end application. If you take amazon.com, there can be 20 different teams working on 20 different microservices, but for amazon.com to function, all 20 microservices should talk to each other and form a single application. It might be bundled as different applications, but all of them are deployed in one single Kubernetes cluster at the end of the day. So in a Kubernetes cluster, a namespace is nothing but a logical isolation of resources, networks, RBAC, and everything else you can configure. For example, there are two projects you want to deploy on one Kubernetes cluster: for project A you create a namespace called namespace-a, and for project B you create namespace-b. Within project A there can be 10 developers who can work on
namespace-a, and the other 10 developers in project B can work in namespace-b. So you have provided them the same Kubernetes cluster, but you have created two different namespaces for them; in that way they have logical separation — physically they are in the same Kubernetes cluster, but logically they are separated with concepts like RBAC, different network policies, and isolation of resources. Say in namespace-a there is a deployment for application A, and in namespace-b there is another deployment for application B: you can restrict developers of namespace-a from accessing the resources in namespace-b. This is how namespace isolation works, and to create this isolation you can make use of RBAC — RBAC is nothing but role-based access control. We will talk about RBAC later, so for people who don't know about it, don't worry. For now, if somebody asks you what a namespace is, you can simply say: a Kubernetes namespace is a logical isolation of resources, so that multiple project teams in a company can work on the same Kubernetes cluster, with each of them having a dedicated namespace so that nobody interrupts the work of other people or other projects. What is the role of kube-proxy? This is question number six. Up to question number five, let us see how many people were able to answer all five questions — the first four, I think, we covered already in our previous classes. Now, question number six: what is the role of kube-proxy? Again, I explained this in one of the previous classes — I think I even touched on kube-proxy during question number two, when we discussed the architecture of Kubernetes — but if somebody asks you dedicatedly, like, please
elaborate more on kube-proxy, I've written a description here; you can write it down somewhere, or use it as-is when somebody asks you. kube-proxy is basically about configuring the network rules on each of the nodes. Take the fundamental example I gave you: a user creates a service in NodePort mode, which means your pod can be accessed on that specific node IP, on the port that you configured in your service.yaml file. But who is doing these things under the hood? Who is saying that when somebody sends a request to the node IP followed by the port number, the request has to be routed to the pod? Somebody has to set up this configuration, right? kube-proxy is the one. What it does is this: on every Linux machine there is a concept called iptables, and kube-proxy — you can configure kube-proxy in different modes, but by default — updates the iptables rules. So whenever somebody accesses the application, let's say your service is in NodePort mode and they hit the URL node IP colon port number, then because kube-proxy has configured the iptables rules, the request is routed from that specific node IP colon port to the pod. This entire routing is done using the kernel and iptables. You can also use IPVS and other modes, but the default mode in Kubernetes is iptables. So this is about kube-proxy, and I've also provided the description here, so if you want, you can take this description and say it as-is when you convey it to your interviewer. Then: what are the different types of services in Kubernetes? On day number 35, when we talked about services, I explained the three different service types. So this is the question again: if somebody asks
you what are the different types of services in Kubernetes — fundamentally, I explained that services have three major responsibilities: one is load balancing, one is service discovery, and the last is exposing your applications to the external world. These are the three major responsibilities of a service in Kubernetes: service discovery, load balancing, and exposing the applications. Service discovery and load balancing I already explained; this question is about how to expose the application outside the Kubernetes cluster — what networking you have configured, or what different service modes are available in Kubernetes. The answer is that you can create three different types of services: one, you can configure the service in ClusterIP mode; second, in NodePort mode; and third, in LoadBalancer mode. That is the straightforward answer, but your interviewer will definitely ask you to elaborate: can you explain the difference between ClusterIP mode, NodePort mode, and LoadBalancer mode? In the last class I explained the difference between each of them, and in tomorrow's class you will see the practicals as well, but again: if you create a service in ClusterIP mode, your service gets a cluster IP, and if you try to access the service, you will only be able to access it using that cluster IP, which is only reachable within the Kubernetes cluster. Whereas if you create the service as type NodePort, your service can be accessed on the node IP colon the port number that you define in your service.yaml file. What happens with that is anybody in your organization who has access to your node IP
address can reach the application. For example, you have created a Kubernetes cluster on AWS and configured your worker nodes as EC2 instances; now anybody who can reach those EC2 instances — if you can ping the IP address of the EC2 instance, that means the node is accessible to you — so whoever can access the worker nodes, or the IP addresses of your Kubernetes cluster, can access your applications if they are deployed in NodePort mode. But for end users sitting outside your organization — say your end user is somewhere in India and your Kubernetes cluster is somewhere in the US — if they don't have access to your network, then you have to expose your applications in LoadBalancer mode. What happens then is that the cloud controller manager component of Kubernetes creates a public load balancer IP address for you, using which anybody in the world can access your applications. This can also be done using Ingress, but the question is only about services, so let us restrict ourselves to services. The next question is similar: what is the difference between NodePort and LoadBalancer type services? Because this is a very frequently asked question, I thought I would also put it here — the description is the same as what I just explained, so you can pause the video and read it. Question number nine: what is the role of kubelet? When I explained the architecture of Kubernetes, I told you that kubelet is a very important component, because kubelet is the one responsible for managing your pod lifecycle on the worker nodes. So whenever you install, or whenever you schedule a
pod on a worker node using the kube-scheduler, the pod can go down for some reason — something can happen to your pod — so there has to be someone to inform the kube-apiserver that the pod has gone down, so that the information reaches the replica set or the deployment and it can scale the pods back up. Say your pod count has to be one: if for some reason the pod has gone down and the replicas have become zero, your replica set controller has to know that the pod has gone down, so that it can ensure the pod comes back up and the count is scaled to the required amount. This is continuously monitored by kubelet: kubelet always watches the pod, and if it goes down, kubelet sends a notification to the API server, the API server notifies the replica set controller, and the replica set controller spins up pods again to the required scale. So this is the lifecycle, and at a high level, kubelet is the one responsible for managing the pods on the worker nodes. I have provided the description as well, so that you can explain the answer to your interviewer during interviews. Question number 10 — and this is a very important question — what are your day-to-day activities on Kubernetes? Many people get confused here, and I see many people asking: Abhishek, I'm getting the theory knowledge, I'm able to practice using your videos, we were able to get some understanding of Kubernetes and DevOps itself, but when somebody asks us what our day-to-day activities as a DevOps engineer are, or our day-to-day activities on Kubernetes, we are not able to answer. Don't worry about it — it's actually a very easy, very simple question to answer, and if you start with a good answer — like, this will probably be
your first question: the interviewer can ask what your day-to-day activities on DevOps or on Kubernetes are, and because this is your first question, or most probably among the first one or two questions, if you answer it well, it boosts your confidence. And it is a very simple one — you don't have to complicate the question or complicate your answer. Simply tell them: as part of the DevOps engineer role, we manage Kubernetes clusters for our organization, and we also ensure that applications are deployed onto the Kubernetes cluster and that there are no issues with the applications. We have set up monitoring on our Kubernetes clusters, and whenever there are problems — for example, the developers are not able to troubleshoot some issue with respect to pods or services, or they are not able to route traffic inside the Kubernetes cluster — in such cases, as subject matter experts on the Kubernetes clusters, we come into the picture and solve their problems. Apart from that, we also do a lot of maintenance activities: for example, we have Kubernetes clusters with three master nodes and ten worker nodes, and we have to do continuous maintenance on these worker nodes — upgrading the versions of the worker nodes, installing default mandatory packages, ensuring the worker nodes are not exposed to security vulnerabilities. All of these things are our day-to-day activities on Kubernetes. Apart from that, we serve as subject matter experts on Kubernetes, so if anyone in the organization has any issues with Kubernetes, they create Jira items or tickets for us, and we help them solve the problem or understand the concept. So this is how you can explain it — it is a very simple
answer, a very straightforward answer; you don't have to get scared about this question. So these are the 10 questions I have for today — let us see how many people got all 10 correct, because most of the questions, I think eight of them, we already covered in the previous videos. Let us see the scorecard. In future videos we will learn about Ingress, the practical implementation of services, custom resource definitions, and a few things about Helm — so it's going to be four or five more videos on Kubernetes, and after that we'll also do a Kubernetes interview questions part two. If you like the video, click the like button, and if you know someone who is not following our 45 days of DevOps course, please share these videos with them so that they also benefit. Thank you so much, I'll see you in the next video, take care everyone, bye. Hello everyone, my name is Abhishek and welcome back to my channel. Today we are at day 37 of our complete DevOps course, and in this class we will deep dive into Kubernetes services — we'll be doing a practical session on the Kubernetes service, where you will see the aspects we were talking about: load balancing, service discovery, as well as how to expose your applications to the outside world. Everything will be practical, and I recommend everybody watch the video till the end, because we are doing practical traffic viewing using Kubeshark. Kubeshark is a tool that will help you understand how traffic flows within Kubernetes — how each component of Kubernetes talks to the others. It will be a very interesting session, and you will see all these capabilities using Kubeshark: how services do the load balancing across multiple pods, how a service is
able to discover the pods, and also how to expose applications to the outside world as well as within the Kubernetes cluster and within your organization. Perfect — so without wasting any time I'll quickly jump into the demo, but a disclaimer and very important point: watch the video till the end, because even if you know the concept of services, even if you understand Kubernetes, using Kubeshark I am going to show you how the traffic is flowing, so it is a very useful session. Perfect, so let me stop this share and go to the Kubernetes cluster. For the purpose of the demo I already have a Kubernetes cluster — it's a minikube cluster; if you run minikube status, you will see that the cluster is already up and running. If you don't know how to create a Kubernetes cluster, you can watch my previous videos, where I explained how to create one both using minikube and, if you have some free coupons or resources, on AWS using kops, which I explained in the last classes. So I have the minikube cluster running, and let me clear up the resources that I currently have. If I do kubectl get all — I was using the default namespace for my other activities — I just have a deployment and a service, so let me delete them: kubectl delete deploy to delete the deployment, and then kubectl delete svc for the service. You cannot remove the default service, that is the kubernetes service itself. Now if I do kubectl get all, I should see just the default kubernetes service running. Perfect, I think we are good for the demo. For the demo, as in the previous classes,
we use the repository called Docker Zero to Hero, so I'll use the same repository — you can either use that repository or your own images if you have some. You can get the repository from my GitHub: this is my username on GitHub, and docker-zero-to-hero is the repository, where you have real-time, practical Python as well as Golang images, which are basically frontend- and backend-based applications. Either you can use these or your own; I'll also put the link in the description. Okay, let me go back to the screen. Now let's start from scratch: I'll create a deployment first. A deployment is something that creates your replica set, which in turn creates the pods — but these pods are only accessible within your Kubernetes cluster, as we have seen in the last classes, because the pods come up with a default cluster IP address, and the problem with a cluster IP is that it is only accessible within the Kubernetes cluster. So first, let me go to the examples folder; inside examples you can go to the Python application or the Golang-based application — in this demo I'll use Python. Here I have the Dockerfile; this extra file let me remove, so that I can write it from scratch and you can follow along. So I just have a Dockerfile and the code, and service.yaml we can also delete — you will not have these files in the repository: if you go to the repository, you will see the devops folder, which is the application itself, a Dockerfile, and requirements.txt. Now what we will do is create a deployment here, and we will
deploy this application as a deployment onto the Kubernetes cluster. So this is the Dockerfile, guys — it's a very simple Python Django-based application, and the Dockerfile has an ENTRYPOINT and CMD, so you don't have to pass any arguments or commands; it will self-execute when you run the container. First, let's build the Docker image: docker build, and I'm giving a tag — python-sample-application-demo with tag v1 — this is the image name and this is the tag. We will build the image so that we do things right from zero. The image is created; now I have the image ready here. Next we have to start with the deployment, because I want to deploy this onto the Kubernetes cluster. As I told you in the last classes, you don't have to remember any syntax: just search for "kubernetes deployment", you will land on the Deployments page of the Kubernetes documentation, and there just take the example that is available. Copy the example to the terminal; let me call this file deployment.yaml, and paste it. This is the deployment; we need to edit the fields — as I told you in the previous classes, you only have to know which fields have to be modified. I don't want three replicas; let me choose two replicas for the demo, so that I can also show you the load balancing with the service. So I'm creating two replicas of my pod. For the name, let me call it sample-python-app. And we have to choose the labels, guys — labels are important, because let's say someone wants to select this deployment: I explained the concept of labels and selectors, and it applies to every resource in Kubernetes. Every time you create a resource in Kubernetes, whether it is a deployment or any other kind of resource, try to put some labels on it. So here I'll
use a label — again the same one, sample-python-app. Replicas is two, and for the selector we can use the same thing. This selector is required for the deployment: going back to the labels and selectors concept, this is the selector that will look for the label app: sample-python-app. That's why, inside the pod template — this is the pod template, right — you choose the same label, sample-python-app. Now who else will be looking for this label? The service will also be looking for it, because a service works on the concept of labels and selectors. When I create the service after this, I'll show you: we have to remember to copy this label as-is and use it inside the selector field of the service — only then will your service be able to find these pods. For example, if I remove it, or if there is conflicting information between the service and the pod, your service will not be able to find the pods and you will see a traffic loss; we can also try that as an example, no problem. Now here you can call the container python-app — it's just the name of the container, it does not matter — but the main thing is that you have to replace the image with the one we just created. Let me save this — what was the image we created? This is the image, so let me put the image name here. This is what I'm trying to show you, guys: you don't have to remember any syntax for deployments or services, because the file always remains the same. And on which port is the application running? My application is running on port 8000. How do you know this? It's very simple: open the Dockerfile and you will see which port your application runs on — either it will be part of the EXPOSE statement, or you can also find it as part of the
CMD. So whenever you are running the application, as developers or DevOps engineers you should know which port your application runs on, got it? Now the deployment.yaml is done: I have successfully updated the container port, the image, and the labels and selectors. Now you can go ahead and create the deployment: kubectl apply -f deployment.yaml. If I create this deployment, you will see it says the deployment is created, and you can also use kubectl get deploy. What does kubectl get deploy do? kubectl talks to your Kubernetes API server and gets the information about deployments. Here you will see that kubectl get deploy returns saying: I've created a deployment, there are two pods that you requested, and both pods are available. If you don't believe kubectl, you can also run kubectl get pods, which will show you the two pods that were created. This is how you get the information about the running pods, but if you want the IP addresses of the pods as well, you can run kubectl get pods -o wide, which adds the IP address of each pod. And if you are keen to understand what exactly happens when you run these kubectl commands, you can simply add a verbosity flag: instead of just kubectl get pods, you can say kubectl get pods -v=7, for example. It will show you what is happening: first it loaded the kubeconfig file, then it connected to the API server — here is the API server, and this is the API call it uses against Kubernetes to get the list of pods — then it says the request headers were accepted, it got a 200 response, and it printed the pod information. As you increase the verbosity level, you will get
more information about these Kubernetes calls — you can go up to 9, which is the maximum verbosity level, and you get more detail about the API call, like the JSON of the request and the response. This is only if you are curious to understand how kubectl talks to the Kubernetes API server and what happens behind the scenes when you run kubectl get pods — it's not relevant to our class today. So, kubectl get pods -o wide: the deployment has created two pods, and we all know the practical use of deployments — a deployment is a high-level wrapper that rolls out a replica set, and the replica set is a controller which makes sure that the state of the pods matches the deployment.yaml that we created. For example, if I delete one of these pods — kubectl delete pod — the replica set will create a new pod; we have already seen this in the last classes with practicals. If I run kubectl get pods again, you'll see that two pods are running, and this time the IP addresses have probably changed: if I do kubectl get pods -o wide — see, the IP addresses were .5 and .6, and now they have changed to .7 and .5. So this is the problem with Kubernetes deployments that we were discussing: the IP address has changed, so the user who was trying to access the application on .6 will say, I was using .6 and I'm getting a traffic loss. But as DevOps engineers we will say: no, no, the expectation is two pods and two pods are running, so I am not responsible. Then whose problem is it? The problem is with respect to Kubernetes, because whenever Kubernetes created a new replica, it changed the IP address — Kubernetes does dynamic allocation of IP addresses, it's not a static
allocation. If it were static allocation, then whenever a new pod came up, it would come up with the same IP address every time; but with dynamic allocation the IP address can change. This is the reason why you need a service discovery mechanism. If the Kubernetes service were identifying the pods using their IP addresses, you would face a traffic loss whenever an IP address changed. That's why, as I explained in the last classes, we use the concept of labels and selectors: the Kubernetes service identifies the pods using labels and selectors, so that every time a new pod comes up, its label remains the same. The label is just like a stamp, or you can understand it as a tag: every time a pod comes up, it definitely comes up with the same tag — the IP address might change, but the tag, the stamp, the label is always the same. So the service says: I notice a new pod came up, let me check the label — the label is correct, this matches the selector the DevOps engineer gave me, so this pod belongs to me and I can send traffic to this new pod as well. That is how it works, and I explained this in the last class using theory and diagrams. Now we will go ahead and see this behavior. First, what happens if you stop here — let's say you just want to use the deployment on its own. Then what you can do is minikube ssh, take one of these pod IP addresses, and access it using a curl command: curl -L http:// followed by that specific IP address — use -L because the application I have written requires a redirect — colon 8000, because the application was running on port 8000.
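As a sketch of the labels-and-selectors idea just described, a service for this demo deployment might look like the manifest below — assuming the pods carry the label app: sample-python-app as in the walkthrough; the service name itself is made up for illustration:

```yaml
# Illustrative ClusterIP service for the demo; it selects pods by
# label, not by IP address, so it keeps working even when pods are
# recreated with new IPs.
apiVersion: v1
kind: Service
metadata:
  name: python-app-service   # hypothetical name
spec:
  type: ClusterIP            # the default; reachable only inside the cluster
  selector:
    app: sample-python-app   # must match the pod template's labels
  ports:
    - port: 80               # port the service listens on
      targetPort: 8000       # port the Django app listens on
```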
so you will notice that there is traffic here. What is happening with /demo? Sorry guys, the application which I have written is running on the context root /demo, so you have to use curl followed by the IP address, colon, the port on which the application is running, and the context root of the application. Don't worry about why I am using this path, I am not changing anything: you can just go to this Python web application, it's a Django-based application, and if you have knowledge of Django you can go and see the context root of the application yourself. If you go to urls.py, you will see that the context root is /demo; that's why I'm accessing the application on /demo, I'm not changing anything, don't worry. So you just have to access it on 172.17.0.5 followed by the port of the container followed by the context root, that is /demo. Now you will see there is traffic. What does it say? "Learn DevOps with strong foundational knowledge and practical understanding; please share the channel with your friends and colleagues." So this is a very simple static application that I have written. Now, the problem, you all know: if you use the same IP address and try to access it from your own machine, using the same command, curl -L http://<pod-ip>:8000/demo, you will see that there is no traffic. We were able to access the application and get the response only inside the Kubernetes cluster. This is because a pod by default only has the cluster IP address, I mean a pod by default just has the cluster network attached to it. And if it is a cluster network, you have to access it using the cluster itself: you have to log into the cluster and access it. But this is
not expected. Your customers: if you have internal customers, they can be within your organization, but if you have external customers, they will be outside your organization. So you have two problems to solve here. One is people within your organization: for them, like I told you, you can use the Kubernetes service concept. Let me take an external drawing tool, Autodraw, to explain. Let's say this is your Kubernetes cluster, this is your organization, and this is your application. You can have people within your organization trying to access this application, and you might also have people who are outside the organization itself. If you are building applications for your organization, if these are internal applications, then what you need is to expose this application on the Kubernetes worker node IP addresses, so that these people can access it directly using the worker node IP address. If you want this application to be used by external customers, they don't even have access to your organization, so you need to create a public IP address for this application, so that everybody in the world can access it. These two cases can be solved; even in yesterday's interview question I explained this: to solve problem one you have to use NodePort mode, and for problem two you have to use LoadBalancer mode. So let us see both
of these cases: let us try to understand NodePort mode, and let us also try to understand LoadBalancer mode. If you want to learn these things in detail you can watch my previous videos where I've explained all of them, but today's video is going to be a practical video, and I'm also going to show you the same things using Kubeshark. First of all, let us proceed with the creation of the service. Let me create the file service.yaml, and again, I will not remember everything by heart: I'll just go to the Kubernetes website itself and search for "kubernetes service". Let me clear this diagram first, go to annotate and clear, perfect. If you go to the Kubernetes service page, you will notice multiple examples of services. The default one, like I told you, is just the ClusterIP type; I don't want it, so firstly let us demonstrate the NodePort example. What happens in NodePort is that your application will be exposed on the node IP address; in my case the node IP address is the minikube node IP address, because I am using minikube. So let me just copy this one, because this is the example for NodePort, paste it here, and delete the old content so that it will be clear. Now, what are the things that I have to change? Firstly, you can give any name to your service; you could keep it as my-service, but let me change it to python-django-app-service, you can give any name, don't worry about it. Now, the most important thing is to keep this selector identical to the labels of the pods that you have created (not the deployment, the pods), because the service will be directly looking at the pods using the selectors. If there are 100 pods, then this selector will be looking at the 100 pods that have this label. It doesn't matter if tomorrow you have 200 or 300 pods; what
the service says is: I don't bother about the number, I'll only be looking at pods that have this label. If, let's say, someone else unexpectedly creates a pod with this same label, then your service will forward traffic to that pod as well, as long as it is in the same namespace. That's how the service works: the service only bothers about labels and selectors. So what you need to do is go back to your Kubernetes pod or deployment YAML file, and inside the deployment, inside the pod template, make sure that you are copying the label from there. Sometimes for deployments you might have different labels and selectors, so always pick from the template section: inside the template you have this label, pick it from there, and go back to the service example. Oh sorry, I did not save it, my bad; no problem, I'll just go back to the page, save it one more time, and copy it one more time. So this is the service: copy it from here, then delete this comment so that it is clear. Now copy this label, app: sample-python-app. Always make sure that you copy the right thing, because if you don't copy the right labels and selectors you will land in problems that are difficult to debug; that's why, try to copy it as is. Now I have copied it: app: sample-python-app, perfect. Then choose any nodePort that you want; I can keep it as is and use port number 30007. And one important thing is to change the targetPort. What is targetPort? targetPort is basically the port on which your application is running; my application is running on port 8000, so I'll choose that as the targetPort. Do I need to change anything else? I can change this name, python-django-sample-app, it can be anything. Now let me just save this and run kubectl apply -f service.yaml. As soon as you apply this, the service will be created. Again, if you want to debug or understand more, then
what you can do is just say kubectl get svc -v=9: you will get the entire information, how the call is made, how the traffic is going within the cluster, how kubectl get is working, all of the information. But if you ignore that for the purpose of the demo, you can just say kubectl get svc and you will see the service that is running. This is the cluster IP; don't get confused just because you have created the service in NodePort mode. You will see that there is a port mapping done: the cluster IP on port 80 is mapped to the node IP address on port 30007. What does this mean? This means that either you can do minikube ssh, copy this cluster IP address, and access your application using the cluster IP address of the service, that is curl -L http://<cluster-ip>:80/demo; even doing this you will get the traffic. Or you can use the node IP address instead. Now, why is the cluster IP way not the interesting one? Because you could already do that using the pod IP addresses. A service, or any service you create in Kubernetes, whether in NodePort mode, LoadBalancer mode, or anything else, will always have a cluster IP. Additionally, when you create a service in NodePort mode, you get a port mapping, and that port mapping is what the Kubernetes service has done for you: it says, if you don't want to access via the cluster IP address, you can use the node IP address, and I have mapped port 80 to the 30007 port that you provided in service.yaml. So now I can simply say minikube ip to get the IP address of the minikube node. If it is an EC2 instance, you can get the IP address of the EC2 instance; you already know how to get the IP address of an EC2 instance. So
this is the IP address. What I can simply do is say curl -L (for other applications you might not need -L, but for my application there is a redirect happening within the application, so I require it), and then http:// followed by the node IP address. This is the interesting thing, guys: now I'm going to show you how the application is accessed from the node IP address. I'm not logging into the Kubernetes cluster anymore, and I can also access this using the browser; I'll show you that as well. Why can I access it using the browser? Because it is the same laptop. If you access this address from your own browser you will not get the traffic; if you want other people to access it, you have to use the load balancer IP address, because then it becomes external traffic. But if you are accessing from the same laptop, the minikube IP address works, because at the end of the day, what is minikube doing? It is just installing a virtual machine on top of your laptop, and because both of them are in the same network, you can access it. Outside people cannot access it, because you have only exposed it using NodePort mode. So, watch this carefully: I'm not using :8000, and I cannot use :80 either, because then the node IP address would be looked up for an application on port 80 and nothing is running there. You have to use 30007: if you use 30007, this is how the service will route the traffic to the pods. So it is the node IP address, then the node port, followed by /demo. Now you will see that the application is accessible. You can use the same IP
address; let me copy the same thing and access it from the browser as well. So this is from the browser, this is the application that is running. Now, if you take this same URL (you are watching this video, right) and try to access it from outside, it will not work. Why will it not work? What is the reason? You have not exposed your application to the outside world. So this is how you expose your applications to other people in your organization, or somebody who has access to your node IP addresses, your EC2 instances, or your virtual machines. Now, how do you expose the application to the outside world? To do that, you will make a very simple change: just edit your service with kubectl edit service. And what was the name of the service? Sorry, I don't remember, so let me do kubectl get service, and then kubectl edit svc with that name. Once you edit it, you will see the type as NodePort; in one of the places we selected the type as NodePort, so simply make the modification and change it to LoadBalancer. Now, this will not work here because we are using minikube, but if you make the same modification, just change the type to LoadBalancer, it will work for you on an EC2 instance or on any cloud provider, because the LoadBalancer type is only supported on cloud providers. And who does that? That thing is done by your cloud controller manager. So just go back, kubectl edit svc, search for NodePort, and modify the type to LoadBalancer. I hope the syntax is right, perfect. So now if you do kubectl get svc, the IP address will not be allocated: the external IP will remain pending here, because this is minikube. If it was AWS, or Azure, or GCP, you would get the IP address here. And who is generating that IP
address for you? The cloud controller manager of Kubernetes. Why is the cloud controller manager generating it? Because the people of AWS, Azure, and GCP have told the cloud controller manager, they have contributed to the cloud controller manager, saying that if you find a service of type LoadBalancer, then use the internal components of AWS, Azure, or GCP and generate an IP address. That is why the external IP gets generated; in my case it will not be generated. There is a project called MetalLB using which you can expose the applications on minikube as well; you can search for the MetalLB project, it can generate a public IP address for you too. But this is still a beta project, and you don't have to try it; if you know the concept, that's more than enough. If you have an EC2 instance you can try it, or else just understand that you will get a public IP address, something like 32.48 or so, and you can share this IP address with your customers or someone, and they can access the application using that public IP address. So this is the concept of how to expose your applications. But I told you about three concepts, right? Whenever we talk about the service, I promised you that a service can do three things: one is load balancing, two is service discovery, and three is exposing the applications. The third part is clear to you by now, right, because I showed you NodePort mode, I showed you LoadBalancer mode, how this works and all. So now let us see the second part, that is service discovery. To understand service discovery, just make a very simple change: kubectl edit svc. Sorry, again we need the service name, if not we will get both the services in the editor and it will just be confusing, so kubectl get service, this is the name of the service, then kubectl edit service followed by the name of the service. And now, what you will do to understand the concept of
service discovery is just modify the selector. Search for the selector. See, if you are not comfortable with kubectl edit, what you can also do is just use the same service.yaml file that you created, but the only condition is that you must have created it using the apply command. If you used the kubectl apply command to create the service, then you can just open the file with vim, edit the service, and reapply it; or else you can use the kubectl edit command, whichever is easier for you. But I will recommend this one: always create your services using apply, so that in the future you can modify them. Now what I'll do is just come here and remove one character; let us see if you can still access your application like this. Then kubectl apply -f service.yaml. So now the labels and selectors are different: the label that you have on your pod is sample-python-app, but here on the service the selector is sample-python-ap. Let us see if the service discovery will still be able to detect the pods. What I'm saying is, it shouldn't. And why shouldn't it detect them? Because the labels and selectors are different. So what was the curl command that I used? Let me show you from the browser itself: this is the curl command, let me just copy it one more time and try it from the browser; I just need the URL, right, I don't need curl. This time you'll notice that the application is not accessible. Why is it not accessible? Even using curl you can see "couldn't connect to server". So by just changing the selector, you understood that the service can no longer discover the pods. So again, go back and modify it, and you will understand the concept of how the service does service discovery using labels and selectors. So now let me reapply: kubectl
apply -f. Just give it a minute, because kube-proxy has to update the rules: the iptables rules and all of those things have to be updated, so just give it a minute. Don't panic, don't go into a situation of thinking "oh, this is not working": it will just take a minute, or sometimes not even a minute; sometimes the refresh takes time. And now you will notice that the service discovery is done. So what are the two things that you have already learned? One is service discovery, and one is how to expose your application. So finally, now I'll show you the load balancing as well. You have two pods, right: kubectl get pods. Why do you need load balancing? I explained: if there is only one replica and there are 100 requests, it will be difficult for one replica to serve all the requests. So, depending upon the load of your application, you can create multiple replicas; but by default the deployments or the pods do not have load balancing. If you create a service, you get the load balancing: this is what I explained. Now let us see that in practice. For that I have Kubeshark as well. Kubeshark is a very simple application; you can also install Kubeshark, and I recommend you install it. I'll make a full detailed video on Kubeshark as well, but if you want to install it, the installation is very simple: just go to the Kubeshark documentation. You will understand a lot about Kubernetes if you have Kubeshark, because it explains how the traffic is flowing within the cluster and all of those things. So go to the install-and-run page; there is a simple curl command here, just execute this curl command, or if you are on Mac you can just run these two commands, and your Kubeshark is up. Then you just have to run this specific command, kubeshark tap -a, or you can do kubeshark tap, but that will limit Kubeshark to one single namespace. If you want to
see Kubeshark across all the namespaces, or if you want to understand the Kubernetes traffic flow for all the namespaces, just run the command kubeshark tap -a, and you will see this page. You can access the Kubeshark browser UI on port 8899: localhost:8899, and it will automatically open in your browser. So you will get this beautiful page where you can do a lot of things with Kubeshark. In this video I am not going to talk about the details of Kubeshark; I'm just using it to explain the concept of load balancing in a service. I'll do a dedicated video on Kubeshark where I'll explain how the traffic flows and everything, but here I'll just spend two minutes to explain. Now let me run this curl command six times, so that I can show you the load balancing. Let me remove the -L so that it will not print the full output: one, two, three, four, five, and six.
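Those six requests are what we will inspect next; as a toy sketch of the behavior we expect (plain shell simulation, not real kube-proxy code; the two pod IPs are the ones from this demo, and note that kube-proxy's iptables mode actually picks a backend pseudo-randomly rather than in strict rotation, which evens out similarly over many requests):

```shell
# Toy round-robin over the demo's two pod IPs: each call to route_request
# sends the "request" to the other pod than the previous one.
i=0
route_request() {
  if [ $((i % 2)) -eq 0 ]; then echo "172.17.0.5"; else echo "172.17.0.7"; fi
  i=$((i + 1))
}

# Six requests, alternating between the two pods.
for _ in 1 2 3 4 5 6; do route_request; done
# prints 172.17.0.5 and 172.17.0.7 alternately, three times each
```

This is exactly the kind of distribution the Kubeshark capture below makes visible for the real cluster traffic.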
Now, what should the expected output be? The Kubernetes service has to send the requests, using round robin, to 172.17.0.7 as well as 172.17.0.5. We made six requests; let us see if the requests are distributed between these two pods or not. Let me go back here and look: once the request went to 172.17.0.5... okay, let me just refresh this page and click apply. Let me rerun, sorry, I had some old data, let me do it one more time: one, two, three, four, five, and six. Okay, sorry guys, I had some stale information because of which it was not showing, but now when you apply this... I think it is taking some time to refresh and get the data, just give it a minute. Yeah, sorry, I had to restart Kubeshark: I created it just before the demo, and for some time the proxy was disconnected, so what I did was go back and restart Kubeshark. I'll show you how to restart these things in the video where I demonstrate Kubeshark, but what happened was: this is the Kubeshark that is running, and if you see here, "error while proxying the request" and "context canceled". This was the error I got, and what I did was re-establish the connection between Kubeshark and my Kubernetes cluster. This is the command, kubeshark proxy, and what it does is re-establish the connection. I just did not want to go into the details, but unfortunately the connection was dropped, so I had to explain all of these things. Perfect. Now, the demo that I wanted to show you: Kubeshark is back, and you will see that I have sent six requests. These are the six requests that I've sent, and see what is happening: once the request went to 172.17.0.5, then the request went to 172.17.0.7, then again it went
to 172.17.0.5, then 172.17.0.7, then again it went to 172.17.0.5. So what is happening is that the Kubernetes service is doing the load balancing. What I did is try to access the application on 192.168.64.10, and through the service it is once sending the request to 172.17.0.7, and when you hit the same URL another time it is sending the request to 172.17.0.5. So this is how the packet is actually traveling within your Kubernetes cluster; this is the packet flow. What is happening is: if you take the start of the request, I as a user executed 192.168.64.10, and the request went from my machine, because I am using the browser or the curl command. This is the source: the source is the point where you started the execution. From the source, if you look at ifconfig and grep for this IP address, 192.168.64.1, you will notice that this is my machine's IP address; whether you are doing it from curl or from the browser, you will see that this is my source, my origin. So from my laptop I executed this specific IP address, 192.168.64.10. What happened is, from there the request went to 172.17.0.1, on the minikube side, and from there it went into the service. This is the packet flow, guys, and if you want to understand the packet flow in detail, this is the tool, Kubeshark, and I'll explain how this packet travels when we deal with the dedicated Kubeshark video. So this is the request and this is the response: from there it went to minikube, and once it went to minikube, this is the URL context path, this is the host IP
address, and then it sent this response, and from there it sent the request to 172.17.0.7. You can understand these things in detail: you can replay, and when you replay you can do this action one more time, or you can also capture the packet and debug it with some external tools. You all know about Wireshark: you can capture this packet and analyze it with Wireshark as well, so that you understand more details about the packet, or you can use tcpdump. So these are some of the things; let's not go into the details of this tool, but we have understood the three concepts. Using Kubeshark I explained the concept of load balancing; using the browser and the terminal I explained the service discovery concept; and, what was the other thing, I explained how to expose the application. These are the three things I wanted to cover as part of this video. I hope you enjoyed the demo. On Kubeshark I'll do a dedicated video, because this is a must-have tool for every DevOps engineer: it is a traffic viewer for Kubernetes, and most of your Kubernetes concepts will become clear with it. You can also look at a service map, where you can see the different services and how one service is talking to another service; you can look into the list of pods in the namespaces; and you can understand the traffic by TCP request or HTTP request, layer 4, layer 7, all of those things. So I'll explain this in detail, but for now, this is the video for today. If you enjoyed the video, click on the like button; if you have any feedback for me, put it in the comment section; and finally, don't forget to share this video with your friends and colleagues. Thank you so much guys, I'll see you in the next video, take care everyone, bye. Hello everyone, my name is Abhishek, and welcome back to my channel. Today we are at day 38 of
our complete DevOps course, and in this class we will be learning about Kubernetes Ingress. People find this concept slightly tricky, or they find it slightly difficult, because of two reasons. One is that they don't understand why Ingress is required; if you don't understand why Ingress is required, then definitely you will find the topic complicated. And the second thing is the practical implementation: people try it on their minikube clusters, or on their local Kubernetes clusters, and they do not succeed with the setup, so that's one of the other reasons why people find it difficult. I've also seen a few videos on this, and we have created an end-to-end video on Ingress on our channel as well; I'll share the link in the description so that you can follow it, where we have done a complete end-to-end practical on how to set up Ingress and everything. Don't worry, even in today's class I'm going to explain both the theory and the practical, so if you watch this video till the end, you will get a very detailed understanding of why Ingress is required and how to practically install Ingress and try things out. And if you have followed our previous class, the service deep dive, you'll easily be able to understand today's topic. If you haven't watched video 37, that is the deep dive on Kubernetes services, I highly recommend you go and watch video number 37 of the complete DevOps course and only then come back, so that you understand the concept of Ingress very well. Now, without wasting any time, because we have to cover a lot of things in this specific video, let me jump into it. So firstly, what is Ingress? You must be asking me: Abhishek, in the last class we used Kubernetes services, and the service was offering me a lot of good things, right? I explained that the service offers you a service discovery mechanism on Kubernetes, so it is solving that problem; it was also doing load balancing for you, right?
We have seen the service doing the load balancing using the Kubeshark utility in the last video as well, and it was also exposing the applications to the external world. So why do you need a tool like Ingress, or why do you need a concept like Ingress, and what problem is it solving? Before December 2015, I guess, or November, that is before the Kubernetes 1.1 release, Ingress was not even there. People were using Kubernetes without Ingress; that means people were using Kubernetes with just the service concept. So what they used to do, similarly to what we were doing till the last class, is create a deployment, which would create a pod, and additionally, because you are creating a deployment, you get auto-healing and auto-scaling, these features. Then you would create a service on top of it, so that you can expose your application within your Kubernetes cluster, or outside the Kubernetes cluster using the LoadBalancer mode of your service. But there are some practical problems which people realized after using Kubernetes. Once people started using Kubernetes, obviously these users who migrated to Kubernetes were migrating from legacy systems: people used to have virtual machines or physical servers, and on top of those they used to install their applications. And what people used to do was use a load balancer: these load balancers were something like NGINX, or the F5 load balancer, or any other load balancer they wanted to use on their virtual machines or physical servers, and these are enterprise load balancers. So what is enterprise load balancing? They offer very good load balancing capabilities: for example, you can do ratio-based load balancing, that is, you can say send three requests to pod number one,
seven requests to pod number two (you don't have pods on virtual machines, but just for your understanding I'm explaining it this way). So you can do ratio-based load balancing; you can do sticky sessions, which means if one request goes to pod one, then all the requests of that specific user have to go to pod one only, that is called sticky sessions; you can use path-based load balancing; you can use domain- or host-based load balancing. They support whitelisting, which means only allow specific customers to access the application; they can do blacklisting, which means to say, these clients are behaving like attackers, so do not allow any users coming from Pakistan, for example, or do not allow any users coming from the USA, or from Russia. So you can define your traffic rules, and these are the kinds of capabilities that enterprise load balancers support. Now, the problem was that when these people, who were doing this with virtual machines and applications, migrated from that to Kubernetes, initially they were very happy that Kubernetes was offering auto-healing, auto-scaling, automatic service discovery, and exposing the applications to the external world. People used to create the same things that we did: they used to create a deployment, and after the deployment they used to create a service, and using the service they got all the features that are available, and using the deployment they got the healing and scaling capabilities. But of late they realized: okay, the service was providing a load balancing mechanism, but the load balancing mechanism the service provides is simple round-robin load balancing. What is round robin? If you are making 10 requests, what this service, using kube-proxy (kube-proxy is updating your iptables rules), does is send five requests to pod number one and five requests to pod number two, let's assume there are
two parts but this is a very simple load balancing because people were coming from uh virtual machines and they used to get all of these features against what they are getting in kubernetes is a very simple round robin they are not getting all of these features and these are only list of features I gave you the commercial or the Enterprise load balancers they can offer hundreds of features okay so you can simply read and you will see that you can do a web application firewall you can do uh you know a lot of configurations like TLS you can add more security using TLS so these load balancers offers all of these features okay so I within uh during this video itself I have listed 10 close to 10 features which kubernetes was not supporting so these people were unhappy with kubernetes okay so they said that okay a service was doing few things but still we are not happy and the other thing that they have noticed this is problem number one and the problem number two is uh you can expose your applications to external World using load balancer mode service right you can create your service as load balancer mode but what is the problem is every time like let's say you have 100 Services if you take companies like Amazon they have some thousands of services okay so for each of the service when they create the service as type load balancer mode you know the cloud provider was charging them for each and every IP address because these are Dynamic and public IP address sorry sorry these are static public IP address so they don't charge for the dynamic IP address but whenever the IP address becomes static so for static load balancing IP addresses and static public load balancing IP address so if there are thousands of micro services or if there are thousands of services that you require for your applications on kubernetes so cloud provider was charging very heavy and Cloud providers are right in their terms because you are asking them for a static load balancing IP address and they 
are charging you for money okay so this is another problem that these these people were facing in the previous example okay what they used to do is because there was only one load balancer okay in the contrary you have for each application you have one service right but on the physical or virtual virtual servers people used to create one load balancer whether you have one application two application three applications so they used to configure in their load balancer like okay if the request is coming to amazon.com slash ABC send request to app one if it is coming to slash XYZ go to app2 and they used to only expose this application uh sorry they only used to expose these load balancer with static public IP address so what is happening is here they just have one IP address which they are getting from the cloud provider or even within their organization they are only exposing one specific IP address whereas here what is happening is you are exposing thousands of IP address and you are getting charged so this is problem number two so let us write the two problems so that it is very clear to you before I jump on to Ingress and how Ingress is solving this problem what is the problem number one okay so the problem number one that we discussed is Enterprise and TLS that is secure load balancing capabilities so if you are using a service this thing is missing people who are coming from the virtual machines they had very good load balancing capabilities like one two three four five that I discussed in the previous slide like for example I can give you basic example like it is missing sticky sessions then it is missing uh TLS based load balancing that is secure load balancing https based load balancing then the other thing it is missing was uh some uh a path based load balancing like I just told you host based load balancing or domain-based load balancing so if request is going to amazon.com go to this specific application if the request is going to amazon.in go to other 
applications so that is host based load balancing and then there are many other things like uh like I told you ratio based load balancing so I can write this list to 15 to 20 different things on top of my head but you know it will only waste our time so what what is the thing is the services in kubernetes was not offering all of this Enterprise level capabilities and the second point is I just told you that if you are creating a service of type load balancer then for each service kubernetes will charge you kubernetes will not charge you the cloud provider will actually charge you right so the cloud provider will charge you this is a very important interview question as well people will ask you what is the difference between load balancer type service and the uh traditional kubernetes Ingress okay so what you will answer is the load balancing type service was good but it was missing all of these capabilities and also you will say that the cloud provider will charge you for each and every load balancer service type like if there are thousands of services you will be getting charged for thousands of load balancer static public IP addresses okay so these are the two problems and these two problems you have to remember and it they have to be on top of your head because this is very important interview Point okay so people will definitely ask you in your uh interviews that what is ingress or why Ingress has to be created what is difference between load balancer service type and Ingress so these questions will keep coming so definitely you have to remember those two points and now how Ingress is solving those problem okay so what now kubernetes said is so kubernetes is also admitted the problem so kubernetes said that yeah we understand and till that point what happened was open shift open shift which is red hat openshift which is again a kubernetes distribution they have implemented something called as openshift routes which is very similar to kubernetes Ingress so 
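To make problem number two concrete, this is roughly what such a manifest looks like. A minimal sketch, assuming a hypothetical app called `payments` (the name and ports are placeholders, not from the video); every service declared this way asks the cloud provider for its own external load balancer with a static public IP:

```yaml
# service-lb.yaml -- one of these per microservice means one billed public IP each
apiVersion: v1
kind: Service
metadata:
  name: payments
spec:
  type: LoadBalancer   # cloud provider provisions a dedicated load balancer + static public IP
  selector:
    app: payments      # matches the pod labels created by the deployment
  ports:
    - port: 80         # port exposed on the load balancer
      targetPort: 8080 # port the container listens on
```

With a thousand services like this you hold a thousand public IPs, which is exactly the cost problem described above.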
Now, how does Ingress solve these problems? Kubernetes admitted the problem. By that point, OpenShift, that is Red Hat OpenShift, which is a Kubernetes distribution, had already implemented something called OpenShift routes, which is very similar to Kubernetes Ingress. So Kubernetes understood that OpenShift had implemented something to solve this, and many users were requesting the same thing: customers kept complaining on the Kubernetes GitHub page that when they were on virtual machines they enjoyed all the good capabilities of their load balancers, their applications were secure and their costs were lower, but after moving to Kubernetes this became a big problem. The Kubernetes maintainers agreed and said: we will implement something called Ingress. We will allow Kubernetes users to create a resource called Ingress, and the load balancer vendors, companies like NGINX, F5, HAProxy, Traefik, Ambassador (I don't think Ambassador existed back then, but that doesn't matter), the top load balancers people were using on virtual machines, will each implement something called an Ingress controller. What is this Ingress controller? At a high level: you create an Ingress resource on your Kubernetes cluster saying, for example, I need path-based routing, because you realized you are missing the path-based routing you were using heavily on your virtual machines. You come to your cluster, write one YAML file, and in that YAML file say you want path-based routing; I'll show you the example, don't worry. But who implements it, and who decides which load balancer to use? There are hundreds of load balancers on the market, and Kubernetes said: we cannot build the logic for all of them into the Kubernetes control plane or the API server; instead, each vendor creates an Ingress controller. Say you want this capability using the NGINX load balancer: the NGINX company writes an NGINX Ingress controller, and as a Kubernetes user you deploy that controller onto your cluster, using Helm charts or plain YAML manifests. Once it is deployed, the developers or DevOps engineers create Ingress YAML resources for their Kubernetes services, and the Ingress controller watches for those Ingress resources and provides the path-based routing. If this sounds complicated, don't worry, I'm explaining again. Say this is your Kubernetes cluster and you are creating a pod: you write a YAML manifest, and there is a component called kubelet that deploys your pod onto one of the worker nodes; kubelet sits on the worker node, and the API server, via the scheduler, notifies kubelet that a pod has been created, and kubelet deploys it. Similarly, when you create a service YAML manifest, there is kube-proxy, and what kube-proxy does is update the iptables rules. So for every resource you create in Kubernetes, there is a component watching for that resource and performing the required action. Likewise, if you create an Ingress, there has to be a component, a controller, watching for that Ingress. That was exactly the problem: Kubernetes said, I can define the Ingress resource, but implementing the logic for every load balancer on the market, NGINX, F5, Traefik, Ambassador, HAProxy, is technically impossible, I cannot do it. So the architecture is: the user creates the Ingress resource; load balancing companies like NGINX, F5, or anyone else write their own Ingress controllers, place them on GitHub, and provide the steps for installing them with Helm charts or other means; and as a user, instead of just creating Ingress resources, you also deploy an Ingress controller. It is up to your organization to choose which Ingress controller to use. What is an Ingress controller at the end of the day? It is just a load balancer; sometimes it can be a load balancer plus an API gateway, and an API gateway offers some additional capabilities. So the prerequisite on your Kubernetes cluster is to deploy an Ingress controller. Which one? Say that in your virtual machine world, before you moved to Kubernetes, you were using NGINX: then you go to the NGINX GitHub page and deploy the NGINX Ingress controller onto the cluster. After that you create Ingress resources depending on the capabilities you need: for path-based routing you create one type of Ingress.
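As one concrete example of a controller supplying an enterprise feature the plain Service lacks: the NGINX Ingress controller implements sticky sessions through annotations on the Ingress resource. A rough sketch (the service name `app-one` is a placeholder, and the exact annotations are specific to the NGINX controller; other controllers use different mechanisms):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sticky-example
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"           # pin each user to one pod
    nginx.ingress.kubernetes.io/session-cookie-name: "route" # cookie used to remember the pod
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-one   # placeholder service name
                port:
                  number: 80
```

The Ingress resource stays declarative; it is the controller you installed that translates the annotation into actual sticky-session behaviour.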
If you need TLS-based Ingress, you create another type of Ingress, and for host-based routing yet another. Deploying the controller is a one-time activity: the DevOps engineers decide which Ingress controller, that is, which load balancer, they want, it can be NGINX, it can be F5, go to that project's GitHub page, find the installation steps, and deploy it. After that, whether it is one service, two services, or a hundred services, they only write Ingress resources. And Ingress does not have to be a one-to-one mapping: you can create one Ingress and handle hundreds of services using paths, saying if the path is /a go to service one, if the path is /b go to service two. I'll show you that, don't worry. But you should now understand the topic: what the problem was, why Ingress was introduced, and what an Ingress controller is. Once you understand this concept, the rest is very easy. The major things to remember: problem number one that Ingress solves is that Kubernetes services did not have enterprise-level load balancing capabilities, and this is very, very important; you can say move to Kubernetes because containers are lightweight and so on, but without security and without good load balancing capabilities nobody will move to Kubernetes, and Kubernetes realized that, which is why they introduced Ingress. Problem number two was that if you create a service in LoadBalancer mode, cloud providers charge you for each and every static public IP address. These were the two problems Ingress solved. The next thing you need to understand is how to install Ingress.
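Before the installation demo, the one-Ingress-for-many-services idea above can be sketched as a single fan-out Ingress (service names `service-one`/`service-two` and the paths are placeholders, not taken from the video):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-example
spec:
  ingressClassName: nginx      # assumes the NGINX Ingress controller is installed
  rules:
    - http:
        paths:
          - path: /a
            pathType: Prefix
            backend:
              service:
                name: service-one   # placeholder backend service
                port:
                  number: 80
          - path: /b
            pathType: Prefix
            backend:
              service:
                name: service-two   # placeholder backend service
                port:
                  number: 80
```

One such Ingress can front any number of services; you keep appending paths instead of paying for a new LoadBalancer IP per service.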
If you have just followed the document or the presentation to this point, and you go to your Kubernetes cluster, find one example Ingress YAML, and just create an Ingress resource, what will happen? Nothing, because you don't have an Ingress controller on your cluster. If the Ingress controller is missing, you can create one Ingress, two Ingresses, a hundred Ingresses, and nothing will happen; an Ingress is of no use without an Ingress controller. And what is an Ingress controller? A load balancer that you choose based on your requirements: if you want the NGINX load balancer, you install the NGINX Ingress controller; if you want F5 BIG-IP, you choose the F5 Ingress controller; if you want HAProxy, you install the HAProxy Ingress controller. Once the controller is installed and an Ingress resource is created on the cluster, the controller watches for the Ingress resource and provides the required capability: path-based load balancing if you asked for path-based, host-based if you asked for host-based. That is the theory part, and I hope it is clear; if it is not, watch this part one more time before we jump to the practical, because the theory is very, very important and your interview questions will be on the theory. Perfect. Now let me stop sharing here, jump to my other screen, and show you how to install and configure this. Let me get my terminal and start sharing the screen. Perfect, this is my screen, and in the last class we learned about services, so let me check whether I still have the same state: if I do kubectl get pods, yes, I have the deployment and the pods we created in the last class, and if I do kubectl get svc as well, perfect, the service is also available.
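For reference, the service from the last class was of type NodePort; a minimal sketch of what such a manifest looks like (the names and port numbers here are placeholders, not the exact ones from that class):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service       # placeholder; use the name shown by 'kubectl get svc'
spec:
  type: NodePort
  selector:
    app: my-app          # must match the deployment's pod labels
  ports:
    - port: 80           # cluster-internal service port
      targetPort: 8080   # container port
      nodePort: 30007    # reachable as <node-ip>:30007, e.g. via the minikube IP
```

This is the service the upcoming Ingress resource will route traffic to.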
If you have followed my last class, I showed you how to create deployment.yaml and service.yaml as well, and many people have tried it; I saw that in the comment section, so I really appreciate you for doing it. Perfect, so now let us first see if I can access the service. I created this service as a NodePort service, and this is the port, so firstly I'll get the minikube IP address, and then I can just use the curl command. Perfect, I am getting the output, "learn devops with some strong foundational knowledge". Great, so we have verified that our service is running, it is watching the pods, and the application is running. Now let me create an Ingress resource for this, and what we will do with Ingress is set up host-based load balancing. What is host-based load balancing? Like I told you, instead of reaching the service with a curl to an IP address, we can say: allow users to reach my specific service on example.com, or on example.com/abc. Let us see how to create this. Again, there is no rocket science: I'll simply go to the official Kubernetes documentation and search for "kubernetes Ingress". This is the official page for Kubernetes Ingress, and see the example here: what they are saying is that there is an Ingress-managed load balancer, that is the Ingress controller, and you create an Ingress resource through which you can define how to route the traffic for your applications, or for your pods. I'll also paste in the description the link to a previous video; I think I made it two or three months back, and it is a very, very informative video, because we spent more than an hour explaining different types of Ingress resources, how to create them, how to do path-based load balancing, SSL offloading, SSL passthrough, a lot of things in detail. If you have time after this video, definitely watch that one as well. Perfect, so I'll copy this example; instead of the wildcard-host one, let me take the example with a host. Go through all the examples in the documentation yourself too. I'll create ingress.yaml and just modify the fields: instead of "ingress-wildcard-host" I'll name it "ingress-example", and let us keep the same host, foo.bar.com, no problem with it. Now I'll say: if anybody wants to reach my application, my service, on foo.bar.com/bar, they should reach this service. Typically you have to provide your service name and your service port, so let me check the service name with kubectl get svc, and let me go to my ingress.yaml and replace the service name and port accordingly. Perfect, now let us deploy this file: kubectl apply -f ingress.yaml, and let me see what happens. The Ingress is created, and if I do kubectl get ingress you will notice that the Ingress exists but the address field is empty, and nothing will happen yet: even if I replace the IP in my curl command with foo.bar.com/bar, the host-based routing we are trying to achieve here, nothing happens. The reason nothing is happening is that we have not created an Ingress controller yet.
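The ingress.yaml being edited above ends up looking roughly like this, adapted from the host example in the Kubernetes docs (`my-service` and its port number are placeholders for whatever `kubectl get svc` showed in the demo):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-example
spec:
  rules:
    - host: foo.bar.com        # host-based rule
      http:
        paths:
          - path: /bar         # requests to foo.bar.com/bar ...
            pathType: Prefix
            backend:           # ... are routed to this service
              service:
                name: my-service   # placeholder; use the name from 'kubectl get svc'
                port:
                  number: 80       # placeholder service port
```

Applying it with `kubectl apply -f ingress.yaml` creates the resource, but the ADDRESS column in `kubectl get ingress` stays empty until a controller picks it up.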
We haven't created the Ingress controller; only when you create the Ingress controller will this start working, because the Ingress has to be read by an Ingress controller. So the first thing we need to do is install one. Let me install the NGINX Ingress controller, since NGINX is quite a popular one. Again, I'll follow the Kubernetes documentation: search for the page listing Ingress controllers, and there are a bunch of them. Like I explained, it is not really that "Kubernetes supports" them: the load balancing companies create the Ingress controllers themselves. All of these companies have implemented their own: NGINX has its own Ingress controller, HAProxy has its own, F5 has its own, Apache has its own, and you can even create your own Ingress controller if you have a load balancer. Perfect, so let us go with the NGINX Ingress controller, because it is very lightweight and simple, and see how to install it. If you are installing on minikube, there are very easy steps: the page I landed on first did not have good steps, so I can just search for "kubernetes nginx ingress controller minikube", because I am installing on minikube, and there is very good, straightforward documentation; you can follow the same search I am showing you. There is one simple command that installs the NGINX Ingress controller on your minikube cluster: all you need to do is run minikube addons enable ingress, and that will create an Ingress controller for you. Additionally, if you want to deploy an Ingress controller for production, where you will not use minikube but probably EKS clusters or OpenShift or something similar, go back to the documentation I showed you: search for "kubernetes Ingress", and choose the Ingress controller you want. Say you are doing this in your organization with the same NGINX Ingress controller: instead of the minikube-only page, go to that controller's own documentation, because every Ingress controller has its own docs, and look for the installation steps. Let me pick a different one to demonstrate; I chose Ambassador randomly, and its quick-start page asks for some sign-ups, but don't worry: you can just search for the official product documentation, for example "ambassador ingress controller installation", and you will directly find the steps for installing the Ambassador Ingress controller, or whatever is required for your organization: install with Helm, install with kubectl and YAML manifests. On your production cluster you would probably choose Helm, click on the Helm instructions, and install with those specific commands. But for minikube, like I told you, we just run minikube addons enable ingress and it installs the Ingress controller for us. So let us see whether the Ingress controller is installed. At the end of the day the Ingress controller is also a pod, so: kubectl get pods -A, because I am not sure which namespace it is installed in, and grep for nginx. See here: the NGINX Ingress controller is up and running, and it has created its own namespace called ingress-nginx. Now let us look at the logs and see whether it has identified the Ingress resource we created: kubectl logs -n ingress-nginx on that pod. What was the Ingress resource we created? ingress-example, in the default namespace. And indeed, the logs say it has identified the Ingress resource we created and has successfully synced it as well. What does "synced" mean? The controller goes to the NGINX load balancer configuration, the nginx.conf file, and updates the load-balancer configuration for the Ingress resource we created. You don't have to go into those details at this point; don't try to learn everything on day one, eventually you will understand what happens under the hood. But as I showed you in the pod logs, it has identified that Abhishek created an Ingress called ingress-example in the default namespace, and the configuration is synced. If you get any error tomorrow, this is what you do: go back and check the controller pod's logs. And now, if you notice, the address field that was empty previously is populated.
My terminal has cleared the old output, but the address field, which was not there previously, is populated after creating the Ingress controller. That means the Ingress resource we created, ingress-example, can now be accessed on foo.bar.com/bar, and the reason I can access it is that the Ingress controller has updated the load balancer configuration. In your production environment this is enough, but if you are trying this on your local Kubernetes cluster you have to do one more configuration: you have to update the /etc/hosts file. Why do you need to update this file locally but not in production? Because you are doing this locally and you have not done the domain mapping: foo.bar.com has to be mapped to the IP address, which is 192.168.64.11, and this is my Ingress IP address, not the minikube IP. Whenever you ping foo.bar.com as it is, this domain does not exist. In a real production environment, people at Amazon for example use amazon.com, and amazon.com is a real domain that does exist, so in their Ingress resource they would simply put amazon.com. But because we are not a company and this is just a demo video, I cannot go to GoDaddy and purchase a domain, which is why I simply used foo.bar.com. What you can do is mock this behaviour, that is, create a dummy mapping to convince your laptop: update the /etc/hosts file with sudo vim /etc/hosts, provide your password, and tell the machine, I know this domain called foo.bar.com, it resolves to this specific IP address. Now if you try to access foo.bar.com, the machine will send the request to 192.168.64.11. This way you can mimic the behaviour, but this is not the production use case; in production you don't do any of this, you simply ask your manager or your company what domain you use and provide that domain name. Now let me ping foo.bar.com: the request is not reaching yet.
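The hosts-file entry being described is a single line; a sketch, using the IP from this particular demo (yours will be whatever IP your Ingress shows):

```
# /etc/hosts -- map the demo domain to the ingress IP so the laptop can resolve it
192.168.64.11   foo.bar.com
```

This is purely a local workaround; with a real DNS-registered domain the operating system resolves the name on its own and no hosts-file entry is needed.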
Did I make any mistake here? No, there is no mistake; there were just some previous entries in the file that I had to delete. Perfect. So after a short while, requests to foo.bar.com will go through. By the way, even if you don't want to update /etc/hosts, you can also tell curl directly, as part of the curl command, that foo.bar.com should resolve to the IP address 192.168.64.11. So now, after some time, you will be able to reach the application: just replace the IP in the curl command with foo.bar.com and your application will be reached. I could go further into the practicals, but before that, go through this documentation, where you will find multiple other things. This was just an example of host- and path-based routing; similarly you can do TLS-based routing. What is TLS? Just search for TLS on that page and you will see that you can create secure Kubernetes Ingresses as well, meaning that while the Ingress resource I created can be accessed over plain HTTP, in production real-time use cases, for example when you access google.com, you access it over HTTPS, and all of these things can also be done using Ingress. If you want to try those practicals, follow the video I am pasting in the description; I think it was made two or three months back, and it shows all the types of Ingress: with TLS, without TLS, host-based, path-based, wildcard entries. Follow that video after this one so that you understand the entire concept. If you liked this video click on the like button, if you have understood the concept of Ingress definitely comment on the video, and I'll see you in the next video, take care everyone, bye. Hello
everyone my name is Abhishek and welcome back to my channel so today we are at day 41 of our complete devops course and in this video we will be learning about config maps and secrets in kubernetes so on a very high level what we will learn is what is a config map in kubernetes we learn the why aspect of what is a secret why a secret does exist in kubernetes then we will try to understand a classic interview question that is the difference between config map and Secret so this is a very popular interview question right and then we will also try to do a live demo so this video is going to have a live demo where we will try to see how to create config map how to create secret different types of Secrets and then we will finally see how to reference or how to use these ones inside a pod or deployment of your kubernetes right so this is going to be a long video and without wasting any time let's quickly jump onto the video but before that if you haven't subscribed to my channel definitely uh consider subscribing it because in the future I am going to do more and more free courses where we will learn about uh yeah I'll keep it suspense and you can keep watching our community Tab and telegram channel to understand what are my future free projects okay first of all config map so what is a config map in kubernetes so if you just for a couple of minutes if you forget about kubernetes let's say you are a application developer or you have understanding of how application works so you know there is an application and let's call it as a backend application so this back-end application what it does is it talks to a database okay and it retrieves some information from the database and it gives it back to the user so this is a very simple application right so uh this backend is trying to talk to the database and it is trying to give the information back to the users when user has requested now what is the information that is application required from database like it requires some 
information like: what is the database port, what is the database username, what is the database password, what is the connection type, how many connections are required, and a few more details like that. Now, how is this information provided to the application? It can be provided through environment variables. A rule of thumb is that the application should not have these details hard-coded. Why is hard-coding a problem? If any of this information changes in the future, say the username, the password, or the port, the user will get wrong information or might not get any information at all. So you do not hard-code this information inside the application. The general practice (we are not talking about kubernetes at all yet) is to save it as environment variables, or to save it as a file at a specific path on your file system, and retrieve it from there using OS modules. Let's say you're using python: you can use the os module. If you are using Java, you can get it from the operating system libraries that Java supports. That is how you read the information. Now, how do you do this inside the world of kubernetes? Inside kubernetes there are two things. Let's take the same problem, but only with respect to the DB port and the DB connection type; I'm not talking about the DB username and DB password yet. For some time let's put
the DB username and DB password aside and talk only about information like the DB port and the DB connection type. Because kubernetes deals with containers, the question is how your application can get this information as a container environment variable, or as a file inside the container. To achieve this, kubernetes supports something called a config map. As a devops engineer, or a configuration management engineer, you can create a config map inside the kubernetes cluster and put information like the DB port (or any such information) inside it. Then you can mount this config map, or otherwise use its details, inside your kubernetes pod: the information can be exposed inside your pod as environment variables, or stored as a file on your container's file system, but it is retrieved from the config map. Why not set it directly? Because as a user you generally cannot, and should not, log into the pod and create these environment variables by hand. You might not have login access to the container at all; these fields might change continuously; and when you are writing the Dockerfile you may not even know these values, because they are fed to your application at a later point in time. So that approach is not possible, and what kubernetes suggests is: go with the config map. As a devops engineer, you collect the information the application requires (you can talk to the database admin), create a config map, and store these values in it. Once you store these values, you can mount this config map, or you can use this
config map's data as environment variables inside your kubernetes pod. How you can do it, and the different ways: like I told you, one is to use them as environment variables, the other is to use them as volume mounts. I'll talk about both use cases in the live demo, but for now you understand the purpose of a config map and what problem it is solving. A config map solves the problem of storing configuration data that can be used later by your application, your pod, or your deployment. Now, if a config map solves this problem, why do you need a secret in kubernetes? You should ask this question: okay, the config map solves this, so what is the purpose of secrets? Secrets in kubernetes solve the same problem, but secrets deal with sensitive data. Going back to the previous slide, you also have parameters like the DB password and the DB username. If you put this information, along with the DB port and the other details, into a config map, there is a major problem: in kubernetes, whenever you create a resource, that information gets saved in etcd. Data in etcd is saved as objects, and any hacker who gets access to etcd can retrieve that information. If they retrieve your DB username and DB password, your entire application, your entire platform, is compromised, because they now have the details of your database. And if a hacker can get the details of your database, your kubernetes cluster does not have proper security. So to solve this problem, what you will do is
kubernetes says: if you have non-sensitive data, store it in config maps; if you have sensitive data, store it in secrets. Now, what difference does storing it in a secret make? Kubernetes says: if you put any data inside a secret, we will encrypt the data at rest, that is, before the object is saved in etcd, kubernetes will encrypt it. By default kubernetes only applies basic protection, but it also allows you to plug in your own encryption mechanism, a custom encryption: you pass an encryption configuration to the API server, so that whenever the API server writes this information to etcd, it uses your custom encryption. Then even if a hacker gets access to etcd, they do not have the decryption key. They can read everything else from etcd (config maps, deployments, pods), but when it comes to secrets they will only retrieve encrypted information, which is of no use to them; they have to throw it away because they cannot read it without the decryption key. So whenever you have sensitive information, store the values in secrets; whenever you have non-sensitive information, go with config maps. That is the differentiation between config maps and secrets. Now let's go a step back and look at the step-by-step picture of what is happening. Let's say you are a user, and as a user you are creating a config map. You write a yaml file for the config map, and inside the yaml file, like I told you, you put all the details that are required; you can get the yaml syntax from the kubernetes documentation as well. Once you do this, you use kubectl
apply (I'll show you all of these things in the demo as well), and you create this config map on your kubernetes cluster. So your config map got created. What is happening here? Your config map is created, and at the same time the API server is saving this information inside etcd as well. That is the entire process with respect to the config map. Now, if you store sensitive information this way, a hacker has two points from which to retrieve it. One: if the hacker has access to your kubernetes cluster, they can come to the config map, run kubectl describe configmap or kubectl edit configmap, and read the information; your DB password is compromised. Two: they can go to etcd, and because the data there is not encrypted, they can get the information from etcd as well. These are the two major problems that secrets solve. Problem number two I already explained: with secrets, the data in etcd is encrypted at rest, and the hacker does not have the decryption key, so your information is secure. But what about point number one? You might say that the hacker could still come to the secrets and use kubectl describe or kubectl edit to read the information. For that, kubernetes recommends that, apart from kubernetes doing its part with encryption, whenever you create secrets you use strong RBAC: no user who doesn't need secrets should have access to them. There is a very popular concept in devops called least privilege, where you grant only the minimum access required, so that very few people have access to secrets. It is the same concept as IAM in AWS. So if you are restricting the access, like,
say there is a developer who logs into the kubernetes cluster: they can have read access to config maps, read access to pods, read access to deployments, but there is no requirement for them to have access to secrets. You can prevent that in the user's RBAC: you can say they should have access to all the resources except secrets. This is how you address both points. So this is the difference between a config map and a secret. If your interviewer asks you this question, what is the difference between a config map and a secret, this is how you explain it: both config maps and secrets are used to store information, for example some JSON data or key-value pairs, inside your kubernetes cluster, which will later be fed to your applications running in pods. You can use both of them for the same purpose, but secrets are used for sensitive information whereas config maps are for non-sensitive information. And how do secrets solve the sensitive-information problem? Like I told you: with secrets the data is encrypted at rest, and with secrets you can enforce strong RBAC, so that for the entire Secrets resource in kubernetes you can say only devops engineers should have access. You can do that using kubernetes RBAC. Okay, now we will not waste more time and we'll quickly jump onto the demo, because the demo also might take some time. So let me stop sharing here. Stop share, perfect. Let me start sharing my other screen. At this point, if things are clear to you, while I'm switching screens just comment saying 'okay, I'm able to understand', or 'I am not able to follow at this specific point'. Your feedback is highly appreciated. Perfect, you should be able to see my terminal in one, two, three seconds. Perfect. So let me first clean my cluster.
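As a quick sketch of the least-privilege idea from a moment ago: a namespaced Role that grants read access to pods, config maps, and deployments but deliberately leaves out secrets would look roughly like this (the role name and namespace here are made up for illustration):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-read-no-secrets     # illustrative name
  namespace: default            # illustrative namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps"]   # note: no "secrets" in this list
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
```

Bind this Role to the developer with a RoleBinding, and any attempt to read secrets in that namespace is denied.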
kubectl get deploy: do I have any deployments? I think I should have a few. So kubectl delete (I should have done this before, sorry): kubectl delete deployment sample-python-app, done. And kubectl get cm, for config maps; let me delete this config map as well: kubectl delete configmap test-cm. Perfect, so let's start with creating a config map. vim cm.yaml, and inside cm.yaml, first you have to provide the apiVersion, which is v1, then the kind, which is ConfigMap. Don't think 'Abhishek is just typing this, how does he remember all of it'; I do this on a day-to-day basis, so I remember, but you can also use the kubernetes documentation, and you will not get any extra points for remembering this. Then the name, test-cm, then data: you can pass any data here. We'll use the same example, db-port, and let us give the MySQL port, 3306. Let me save this, and now let us create it. Always use kubectl apply over kubectl create; why, I have already explained in the previous video, and if you know the answer you can also comment it. kubectl apply -f cm.yaml. Now if I say kubectl get cm, you will notice the config map is created. Let us describe it and see: kubectl describe cm test-cm. So this is the one data entry we have saved in the config map; similarly you can save any number of entries. In your enterprise, your application might require a lot of fields, and later you can point those fields to environment variables inside your kubernetes pod. That is what we are going to do now. My end goal is to take this field from the config map and expose it as an environment variable inside my kubernetes pod, but for that I first have to create a kubernetes pod itself. If you have watched my previous videos (git remote -v), this is the video where I explained Docker
Zero to Hero, where we created a python Django application, which I'm going to use here. I will not explain it one more time, because I have covered it in the kubernetes deployment video as well as the Docker Zero to Hero video: how to create a python Django application and host it inside a container. We'll use the same container to save time. Now, this is my deployment.yaml file; let me open it and get it back to the right state, for example I have to remove this field, sorry. This is the same one I explained in the previous video; in the kubernetes deployment video we used the same deployment to create the kubernetes pods. I'll show you very quickly: if I do kubectl apply -f deployment.yaml, it will create two kubernetes pods, because I have set the replicas to two. kubectl get pods -w, for watching: two pods are created. Now, how do I look at the environment variables of these pods? You can say kubectl exec, the name of the pod, then -- /bin/bash. Sorry, -it is also required, so that I open an interactive terminal. Now I am inside the pod, and if I run env and grep for DB, you'll notice there is no environment variable with respect to DB, because this is just an application running, and till now it has no information about the database, not even the database port. But let's say I am using a MySQL database in this application and I want the application to know the MySQL database port. What I have to do is go and modify deployment.yaml, and what I need to add, after the image or anywhere in the container spec, is something called env, because I want to read the value as an environment variable. Here, inside env, I will tell my
kubernetes deployment: I need an environment variable, and the name of the environment variable should be DB-PORT, let's give it caps. So that is the name of the environment variable; now for its value. How will you get the value? Don't worry: I created a config map, so get the value from the config map. That is valueFrom, with a reference to the config map, configMapRef, a reference to the config map, so kubernetes knows where to get the value: the db-port that we provided in the config map. And kubernetes has to know which config map to read from, so the name of the config map is, what did we create, I think test-cm. And the key inside the config map where I stored the database port is, I think, db-port. Let us quickly go back and check: vim cm.yaml. You'll see, okay, this is the port, so this is the key I have to pass so that kubernetes can retrieve this information. I think I have already passed the right one. If I go to deployment.yaml: the key is db-port, the name of the config map is test-cm, and the environment variable name that I want inside the python application is DB-PORT. So my expectation now is: as soon as I apply this kubernetes deployment, it should replace the existing pods, and inside the new pods, if I run env and grep DB, I should see a new environment variable called DB-PORT, and the value of it should be 3306. Let me see if my expectation matches or not. For that, kubectl apply -f deployment.yaml. So it said
that, okay, it threw an error, a validation error: deployment, env, valueFrom, unknown field configMapRef. So the config map reference we have provided is wrong; I think there is some syntax error here. Config... perfect... map... ah, okay, sorry, my bad. That is why you always have to follow the documentation and not go by your gut: it should be configMapKeyRef, config map followed by key ref. You will make these mistakes; don't worry, if you are not making mistakes you are not learning things. So the mistake here was that it should be a config map key reference. That's fine; now if I apply it one more time, it should get created. Perfect. Now kubectl get pods -w, I'm watching again: see, these containers are getting created, the previous pods are getting terminated and the new ones are coming up. Let's give it some time for the new ones to be running. I hope that is done, so let me just say kubectl get pods. Perfect, they are running, 25 seconds ago and 21 seconds ago, which means they should be good. Again kubectl get pods; let me exec into one of them, I can pick randomly, let's pick this one: kubectl exec -it, the name of the pod, -- /bin/bash, so that I go into the pod. Now I'm inside the pod. Let me say env, then grep, I hope this will work, grep -i db... okay, let me use DB itself, the capital DB, and let's see. Perfect: DB-PORT is 3306.
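To recap what is on disk at this point, the two files look roughly like this (the deployment is trimmed down to the relevant container fields; the names match the demo):

```yaml
# cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-cm
data:
  db-port: "3306"          # ConfigMap data values are always strings
---
# deployment.yaml, fragment of spec.template.spec.containers[0]
env:
  - name: DB-PORT          # env var name seen inside the container
    valueFrom:
      configMapKeyRef:     # note: configMapKeyRef, not configMapRef
        name: test-cm      # which ConfigMap to read
        key: db-port       # which key's value becomes the env var
```

With this in place, kubectl apply -f on both files recreates the pods with DB-PORT injected.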
So our purpose is solved. Now, inside his python or Java application, the developer can just read it, for example in python with os.environ, asking for DB-PORT, and retrieve the value for his database connection. So this is how you use a config map inside an application as an environment variable. But now there is a problem; I'll show you what that problem is, and your interviewers will definitely ask about it when you explain how to use a config map inside a kubernetes pod. This way you can use the value inside your application, but the problem is: let's say I am the devops engineer, and I realize that for some reason I want to change the DB port (oh sorry, let me get out of the pod first). For some reason I want to change the DB port: the port is occupied, or consider it as some DB-related variable that I want to change. Now how do I change this? I'll just come to the config map, and instead of 06 I'll say 07.
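By the way, the application-side read mentioned a moment ago is just a couple of lines. A minimal sketch in python (the env var name matches the demo; the fallback value is only so the snippet runs outside the cluster):

```python
import os

# The deployment injects DB-PORT from the ConfigMap; when the variable
# is absent (e.g. running locally), fall back to the demo default.
db_port = int(os.environ.get("DB-PORT", "3306"))
print(db_port)
```

The same idea applies in any language: the application only sees an environment variable and never knows a config map was involved.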
I'll save it. Now, how will the pod come to know about this change? If I do kubectl exec again, go into the same pod one more time and grep for DB, the value will still be 3306. So your application will continue to use 3306, and it will fail, because the port has changed: the database admin changed the port, your application does not know it, so it will try to connect to the DB but never get connected. To solve this problem, kubernetes says: if you have information that keeps changing, environment variables are the wrong mechanism, because changing environment variables inside a running container is not possible. You can try it today: go to any container and try to update an environment variable's value, and let me know what happens. You will say, 'Abhishek, I cannot change the environment variable, the container does not allow it; you have to recreate the container.' But in production you cannot simply restart containers, because deleting and recreating the deployment might incur some traffic loss, which is not acceptable. So the other way kubernetes suggests is: why can't you use volume mounts? Using volume mounts you achieve the same thing, but instead of environment variables you use files, because you are mounting: your config map's information gets saved inside a file, and developers read the information from the file instead of from an environment variable. Let us see how to do that. Again I'll open deployment.yaml, and I'll delete this
env section. Instead of the env section, I'll do a volume mount. But to do a volume mount, the first thing you have to do is create the volume itself, so let me leave some space here so that you understand. At the level of the pod spec, alongside the containers, you create a volume: you say volumes, then whatever name you would like; let me name this volume db-connection, for example. And inside it, I say this volume should read its information from a config map. In kubernetes you can create different types of volumes: external volumes, persistent volumes, and so on; in this case I am creating a volume that reads from a config map. Again, the name of the config map, what was it, test-cm. So this is the volume I've created, and now you can mount it: for that you simply say volumeMounts. What is happening here? The first thing is that I created a volume. Why? A volume is nothing but storage, just a block, and I am telling it to read its information from the config map; this is just like the Docker volumes I explained to you previously. Then I have to read this value inside the pod, so I mount the volume: mounting it is nothing but making it visible inside the kubernetes pod's file system. In volumeMounts I give the name, which has to be the same name, db-connection, and where I want to mount it, that is, on which folder or file system path inside the pod: mountPath, /opt; you can use any path. Now if I save this, sorry, if I save this and do kubectl
apply -f deployment.yaml, you will again notice, if I do kubectl get pods -w, that the pods were created four seconds and six seconds ago, which means they were recreated as soon as I applied. Now let me exec one more time; oh sorry, I have to get the pod name first: kubectl get pods. I'll clear my screen, get the pods, and use one of them. What should happen now is that the environment variable should be gone, because we removed the env section, right? So if I exec into one of these pods with -- /bin/bash and run env | grep DB, you see there is no environment variable, because we removed it; that is working fine. Now, I also mounted the volume, so let us see if it got mounted on the /opt folder. Perfect, it got mounted, and it shows there is a file called db-port. Let us see the value of it: cat db-port | more... oh sorry, cat /opt/db-port | more. See what the port is: 3306, right. So this got mounted inside the pod's file system. Now what I can do is go back and edit the config map, cm.yaml, and here, okay, I changed the port earlier but did not apply it, sorry for that. Anyway, you have seen what is inside the pod: there is a file called db-port and the value is 3306.
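The volume-mount variant built above replaces the env section with these two pieces (volumes sits at the pod-spec level, volumeMounts inside the container; the container name and image are placeholders from the earlier demo):

```yaml
# deployment.yaml, fragment of spec.template.spec
containers:
  - name: sample-python-app         # placeholder container name
    image: sample-python-app:v1     # placeholder image
    volumeMounts:
      - name: db-connection         # must match the volume name below
        mountPath: /opt             # each key appears as a file, e.g. /opt/db-port
volumes:
  - name: db-connection             # pod-level volume backed by the ConfigMap
    configMap:
      name: test-cm
```

Each key of the config map becomes a file under the mount path, with the value as the file's contents.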
Now let me apply this change: kubectl apply -f cm.yaml. My expectation is that the kubernetes pod, without getting restarted, should learn that the value in the config map has changed. Let us see if it does. First, to show you: kubectl describe cm test-cm; see, the db-port has changed. And I'll show you that the pod has not restarted: kubectl get pods; the timestamp is two minutes old, which means it did not restart when I updated the config map. Now let us exec into this pod and see whether the port number inside that file changed automatically; I did not log in and change anything inside the pod. kubectl exec -it, the pod, -- /bin/bash, fingers crossed, cat /opt/db-port. I am expecting the port number to be 3307. Perfect, it got changed. You don't believe me? I'll do it one more time: open cm.yaml. You might say 'Abhishek, I don't believe it', so let me change it one more time, 3309. Again let me apply it, apply -f... ah, sorry, my fingers... yeah, applied. Now you have to give it a couple of seconds for it to get refreshed inside your kubernetes pods. Let's check one more time: kubectl exec. Actually, you don't have to exec fully into the pod every time, I was just showing you; I can also run cat /opt/db-port directly through exec. And you will notice that the port has changed... oh sorry, it hasn't changed yet. Like I told you, give it a couple of seconds. Let us keep trying it and see if it changes. But did I change it here? kubectl describe configmap test-cm; did I apply it? Oh yeah, I applied it, right. So let us keep trying, I'm sure it will change... perfect, it got changed. I know you wouldn't have believed me if I had not shown this. So the port number has
updated. Like I told you, it will take a couple of seconds, because the kubelet periodically syncs mounted config maps and picks up the changes, so be patient and give it a couple of seconds to reflect. You can keep trying it: add new values to the config map and repeat. Similarly, you can do the same for secrets; there is no rocket science here, the behavior of secrets and config maps is the same. So let me create a secret for you: kubectl create secret. There are different types of secrets: there can be TLS secrets, there can be generic secrets; we are not going into all of them because it would be out of context today. A TLS secret is basically for storing certificates, but here we are just storing things like a DB username and password. So: kubectl create secret generic, and let me call this secret test-secret. You can also write a secret.yaml file to create it, but I'm just showing you the other way; you can create a config map like this as well, kubectl create configmap with the name and the details. Now, --from-literal means you are providing a basic literal value directly on the command line. Here I will give the literal name as, what was the name we used, I think db-port, so db-port equals 3306. As soon as I run it, the secret got created; you can also do the same thing using a secret.yaml, the same way I did for the config map. Now if I do kubectl describe secret, followed by the name of the secret, test-secret: see what is there, db-port, and it says 4 bytes. This is exactly what I told you during the theory
class, right. So if I do kubectl edit secret test-secret, you can see the value is stored base64-encoded. By default this is all kubernetes does, and base64 encoding is not strong encryption. You can encrypt in your own way: you can use something like HashiCorp Vault, or Sealed Secrets, or other tooling for encrypting the secrets at the namespace level or during secret creation; and for encryption at rest in etcd, you have to pass your encryption key configuration to the API server. But here, your kubernetes secret got saved, and if you want to check what the secret value is, whether it is 3306 or not, you can say echo, paste the field here, and pipe it to base64 --decode. It will decode, and see, the port number is 3306. So kubernetes does not apply strong encryption by default when you create secrets; if you want to keep them more secure, use the tools I just told you about. But that's fine; what is always important is to secure it at the level of etcd, and that comes under kubernetes security, where you can use an encryption key. If you want to know more about it, read up on how to encrypt etcd for secrets; I also explained it in one of the previous classes on kubernetes security. Okay, so I have created the secret. Now, instead of this port, what you can do for your demo or for your practice is use the same example and create a new secret for the DB password (provide a password here, probably 'ab') and call it test-secret-1. Your exercise for today is to repeat the same exercise: instead of configMapKeyRef you just say secretKeyRef. If you go to my deployment.yaml file, wherever I used the config map, just replace the config map with the secret
Like, you know, here I used volumes, right — so inside the volume, just replace the configMap entry with a secret, or follow the Kubernetes documentation. And if you want to read it as an environment variable, like I showed you in the previous example, you just use the secret reference and provide the information. You will get all of this information from the Kubernetes docs; I'll put the link in the description as well. Consider this your homework or assignment: I explained everything with respect to ConfigMaps, now you do it with respect to Secrets, so that you really learn it. I hope you enjoyed the video for today. If you have any questions, put them in the comment section, don't forget to like the video and share it with your friends, and finally, if you haven't subscribed, please subscribe to my channel. Thank you so much, I'll see you in the next video. Take care everyone, bye!

Hello everyone, my name is Abhishek and welcome back to my channel. Today is day 39 of our complete DevOps course, and in this class we'll be talking about Kubernetes RBAC. As I promised, and as you might have seen in the thumbnail, I am also going to show you how to create a free 30-day OpenShift cluster — a cluster that is free for 30 days, where you can create resources and play around. But let's get into the details of that at the end of the video; for now, let's focus on Kubernetes RBAC. So what is Kubernetes RBAC and why is it important? I would say Kubernetes RBAC is a simple but complicated topic. Why simple, and then why complicated? It is a very simple topic to understand, but if it is not implemented right, it becomes very complicated to debug the issues — it can even become a real problem for your organization, because Kubernetes RBAC is directly related to security.
When something is related to security, that by itself means it is very important. You need to understand the concept of RBAC more than how to create a service account, a role, and a role binding, because that part takes very little time — you can get it done in 10 minutes: create a pod, attach a service account to it, and see how things work. So I'm not going to start with those; instead I'll first explain the concept of RBAC and why it is so important, and after that talk about what a service account is, what a role is, and what a role binding is. Perfect. So firstly, Kubernetes RBAC can be broadly divided into two parts: the first is users, and the second is service accounts — that is, how services and applications running in Kubernetes get their access managed. Let's first try to understand user management. If you have a Kubernetes cluster — say you have been using Kubernetes with minikube, or kind, or any other local Kubernetes platform — out of the box you get administrative access, because it is your local cluster and you have been playing around with it while learning Kubernetes. But when you use Kubernetes in an organization, the very first thing you would do as a DevOps engineer or Kubernetes administrator — your primary responsibility — is to define access. If there is a development team and a QE team, how do you define what access the developers should have on this cluster, and what access the QE engineers should have? It's not acceptable that, you know,
any QE engineer can come to this Kubernetes cluster and delete resources in, say, the kube-system namespace, or delete something related to etcd. These things can go very badly wrong: if someone deletes something related to etcd, it becomes very difficult for your administrators or your DevOps team to bring things back to their original state. The effective way to solve this problem is by defining RBAC — role-based access control. What is role-based access control? Depending on the role of the person, you define access: role-based access, plus the control you apply on top of it. So that is one part: how you manage users and their access in your Kubernetes cluster. The second part is how you deal with service accounts. Say you have created a pod, through a Deployment or any other source. What access does this pod need to have on the Kubernetes cluster? Should the pod have access to ConfigMaps? Should it have access to Secrets? Fine, maybe as part of your application you want to read a ConfigMap or a Secret. But what if the pod you deployed turns out to be malicious, and it starts deleting content related to the API server, or accidentally removes important files on your system? How do you restrict that? Again, just like with user management, you can manage the access of the services and applications running on the cluster
using RBAC. So the two primary responsibilities of RBAC are user management and managing the access of the services running on the cluster. Now, how is all this done? At a broad level — before I jump into the depth of it — Kubernetes gives you three major things for managing RBAC: one, like I told you, service accounts or users; two, Kubernetes Roles or ClusterRoles; and three, RoleBindings or ClusterRoleBindings. I'll explain the difference between Roles and ClusterRoles, and between RoleBindings and ClusterRoleBindings, as well — don't worry about it. But first of all, these are the three high-level things that define RBAC in Kubernetes. But, but, but — how do you create users in Kubernetes? If we go back to the previous slide, I told you there are two essential things: users and service accounts. So how do you create users? For example, if you are using minikube, as you might have been all these days, can you go inside minikube and create a user? On my personal laptop — say it's a Linux laptop — I can use something like useradd to create a user on my Linux system, and I can share that access: I'll create a user called developer, and someone with that username and password can log into the Linux box and perform a specific set of actions. But how do you do this on Kubernetes? Can you use that command to create users there? No, you can't. What Kubernetes says is: Kubernetes does not deal with user management; it offloads user management to identity providers. This is very important to understand, because a service account is something you can create yourself — anyone can simply log into a Kubernetes cluster, even your minikube cluster,
and create a service account. But this user part is very important to understand, because when you work in an organization — say your organization is using EKS, or AKS, or OpenShift — how do you create these users? How can you say that a DevOps engineer should log in with this specific user? Say your DevOps team has 10 people, so you might create 10 users for them; and there are 10 developers, so you might create 10 accounts for them too — and each of them should have only the relevant access. Developers should not be able to delete resources; QE people should, at most, only read resources and read logs, just as an example; and the DevOps engineers might do the administration of the cluster. How do you do all of this? This is where Kubernetes says: I'm not going to manage the users; I will offload user management to identity providers. Let me give a very simple example. In most applications you use these days, you might have noticed options like 'Login with Instagram', or the very popular 'Login with Google', and you don't even have to create an account with the application itself. A person downloads an app from the Play Store, gets the option 'Login with Google', 'Login with Instagram', or 'Login with Facebook', and gets access to the app without creating a user in it. This is exactly what Kubernetes does as well: it offloads the user management. Now, you all know that in Kubernetes there is a component called the API server, and you can pass certain flags to the API server. I'm going to show you what those flags are — it's not rocket science, and as I always tell you, don't worry about
syntax, or how the YAML looks — always understand the concept. In Kubernetes, the API server works as your OAuth server. What is OAuth? Wait for it. You can offload user management to any identity provider. What are some popular identity providers? For example, say you are running your Kubernetes cluster on AWS as an EKS cluster — why not use IAM users? That's exactly what Kubernetes allows: on the EKS platform you can use your IAM users, and with them you can log into Kubernetes. In between, you set up an IAM OAuth provider, and through it people log into the cluster. You have already created users and groups in IAM, right? So if your user belongs to a group, when you log into Kubernetes you get to log in with the same username and the same group. This is how Kubernetes offloads user management to external identity providers, and the concept is the same whether it is OpenShift, EKS, or AKS. Depending on the identity provider your organization uses, the details change: your organization might be using LDAP, or Okta, or some other SSO — you can use all of these, and Kubernetes natively supports them. But it is up to you how you configure the identity provider and how you create users inside it. You can also use an identity broker like Keycloak — a very popular one; many people manage their Kubernetes identity and user management with Keycloak as a broker, and through it you can connect to all of these providers. Even if you just want to try things out — let's
say you have access to production, or you can create a Kubernetes cluster on Amazon today: go to EKS, integrate EKS with Keycloak, and through Keycloak connect it to your GitHub. In GitHub you already have user management, right? You collaborate with hundreds of people, create collaborators and users in your organization, and control which user has what access. Using Keycloak you can connect that to EKS and bring in all of those users. That is how Kubernetes offloads user management. The second part is service accounts. A service account is just a YAML file that anybody can create — there is nothing special about it. So, having seen how users are handled, let me show you service accounts: creating a service account is just like creating a pod. Like you have your pod.yaml, you can create a service-account.yaml — sa.yaml, for example — and inside it you just define the API version, the kind, and the name of your service account. But then comes the interesting part: what happens next? Say you logged in as a user, or your application is currently running with a service account. You might have wondered about this: all these days you have been running pods, and by default, whenever you run a pod, it comes with a default service account — whether you create one or not, a service account gets created automatically, and whatever application you are running uses that service account to talk to the API server, or for that matter to connect with any resource in Kubernetes.
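A minimal sa.yaml of the kind just described would look like this — the name here is a placeholder I'm using for illustration:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa          # hypothetical name for this sketch
  namespace: default
```

To run a pod under it instead of the default service account, you would set serviceAccountName: demo-sa in the pod spec.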
If you are not creating a service account, Kubernetes will create a default one and attach it to your pod; if you are creating one, you can use your custom service account. But what happens after that? Whether you are logged into your Kubernetes cluster as a user, or your application is running on the cluster with a service account — fine. After that, how do you manage the permissions? To define access from this point on, Kubernetes supports two important resources: Role and RoleBinding. You can also think of ClusterRole and ClusterRoleBinding when the permissions are at the cluster level, but that is not important at this point — simply understand that Kubernetes does all of this using Role and RoleBinding. So what are Role and RoleBinding? Once your application is running as a service account, or you are logged in as a user, the next part is how to grant access. First, you create a Role. Say you are creating a Role that you want to assign to the developers, and you are saying: they should have access to pods, they should have access to ConfigMaps, they should have access to Secrets — within the same namespace. To grant access within a single namespace you create a Role; if they need access across the whole cluster, you create a ClusterRole — that's the only difference, and we will talk about it in detail as well. So you have created this Role; now you have to attach it to the users, right? And to attach it, you use a RoleBinding. That's the very simple concept. What is a Role? A Role is a YAML file where you write out all the permissions: they need access to ConfigMaps, they need access to Secrets, and so on. Even
if it is for a single user, you create a Role and say: whoever gets this Role attached to them gets all of these permissions. Say there is a user called Abhishek — if you attach this Role to Abhishek, Abhishek gets all these permissions; attach it to some person XYZ, and XYZ gets access to all the resources you defined in the role.yaml. You can compare this with IAM policies. So you have said that anybody who gets this Role has these permissions — but how do you actually assign it? You have created a Role, and there is a user or a service account; how do they get attached to each other? For that, you use something called a RoleBinding. So the simple ecosystem looks like this: a service account (think of it as the user as well), a Role, and a RoleBinding. You create a service account or a user, you create a Role, and using the RoleBinding you bind the two together. It's a very simple architecture. If you don't have a RoleBinding, you can create a service account and a Role, but they are not attached to each other; if you have just a RoleBinding and a service account but no Role, there are no permissions to bind. So the Role takes care of the permissions, the service account or user takes care of identity, and the RoleBinding takes care of binding the permissions to the user. That is the concept of Kubernetes RBAC. And simply put: if you create this within a specific namespace it is called a Role, and if you create it with the scope of the whole cluster it is called a ClusterRole.
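The Role and RoleBinding just described can be sketched in YAML like this — the names are placeholders, and the rules mirror the pods/ConfigMaps/Secrets example from the explanation:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer-role            # hypothetical name
  namespace: default
rules:
- apiGroups: [""]                 # "" means the core API group
  resources: ["pods", "configmaps", "secrets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding         # hypothetical name
  namespace: default
subjects:
- kind: ServiceAccount            # could also be kind: User for a human identity
  name: demo-sa                   # hypothetical service account name
  namespace: default
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io
```

Without the RoleBinding, the Role on its own grants nothing — which is exactly the point made above.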
The same goes for RoleBinding versus ClusterRoleBinding. Now, what exactly is the difference between a ClusterRole and a ClusterRoleBinding? I don't think it is good to discuss that in this specific class, because for any beginners trying to grasp the concept it would go over their heads; when we do the practicals, it will be very easy to understand the difference between Role, RoleBinding, ClusterRole, and ClusterRoleBinding. So that is the theory part of Kubernetes RBAC. If there is any specific topic here you haven't understood, put it in the comment section and I'll try to do a more detailed video on it, or a masterclass or something. Now, as I promised, let me show you how to create a free trial OpenShift cluster for 30 days, which you can use for your learning. Let me stop sharing here and share the other screen. Perfect — I hope you can all see my Firefox screen. I opened an incognito window; just search for 'OpenShift Sandbox'. Once you open it, you will notice you get a free trial. What does it say here? Let me increase the font: 'Get 30 days free access to a shared OpenShift and Kubernetes cluster.' Like I already told you, it's a shared cluster, which is fine because this is just for your practice and for understanding the concept of RBAC. Click on 'Start your sandbox for free'. All you need to do is create an account with Red Hat — register a Red Hat account, or if you already have one, use it. In my case I already have a Red Hat account; if you don't, just follow the steps and you can create
a Red Hat account. So let me stop sharing here so that I can enter the details — stop share — and I'm entering my Red Hat account details. This is very simple, just like creating your AWS account, and it's public: anyone can create a free account here for the OpenShift Sandbox. Done. Now I am logging in, and let me share my screen again. Perfect, I'm logged in. You will see the same screen, but the icon here shows that I am now logged in. Again, click on 'Start using sandbox', and what you get, in no time, is a shared OpenShift cluster for 30 days. The cluster is assigned to you, and it has both a Developer and an Administrator view, though in the Administrator view you only have limited access. Click on 'Login with DevSandbox' — like I told you, this is the identity provider that OpenShift is using here; in your organization this could be 'Login with LDAP' or 'Login with Okta'. The Red Hat account you created is stored in this DevSandbox, and they use it as the identity provider to determine what kind of user you are — a paid user, a subscription user, a plain Red Hat user — and all of that information is sent to the OpenShift cluster through this identity provider. As soon as I click on DevSandbox, it fetches the information for my specific user — I just created a Red Hat account and provided all the details, right? — and depending on that, it gives me a Red Hat OpenShift cluster. And this is how your production environments look. All this time you might have used plain Kubernetes clusters, but you
know, here, see what happens: this is my OpenShift cluster, dedicated to me for 30 days, on shared infrastructure. For example, I can go to Workloads and look into the pods and the deployments, and I can switch between namespaces — but because this is a shared cluster, I am only given access to the specific namespace created for me. That's the whole setup: on this shared OpenShift cluster, each person gets a namespace. Now click on the username icon here, and you get an option called 'Copy login command'. Click on that; it will prompt you to log in again, and once you log in, click on 'Display token' — using this token you can log in through the CLI. So here's what I'll do: I'll open my terminal and paste the same login command with the token I got from that page, and now I am logged into the OpenShift cluster from my terminal. Now you can do all the usual things — kubectl get pods shows you the pods running in that specific namespace; you only have access to that namespace. Sometimes this cluster might take a little time to respond, but it's hardly a second. Let me create a deployment here: kubectl create deployment nginx --image=nginx. An nginx deployment will be created for me — let's see if it got created. If I click on the Deployments tab in the UI, I can monitor it there too, and you'll notice the nginx deployment has been created. From the UI itself you can scale the pods up and down — I just scaled up to two pods. This way you can play around and get a real feel for how Kubernetes clusters are used within an organization.
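For reference, the imperative kubectl create deployment nginx --image=nginx from the demo corresponds roughly to applying a manifest like this — a sketch of what the command generates, with replicas already set to two as in the UI scaling step:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2            # scaled up from 1, as done through the UI
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx     # same image as in the imperative command
```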
You know, in Routes you can create routes — people have been asking me about Ingress, and you can play around with Ingress here; you can create Services here; you can also use storage — persistent volumes, volumes, how they work — all of these things. Using this real-time, production-like environment you can understand not just the concept of RBAC but explore a lot more: how events are emitted, for example — these are the events, showing what is happening inside the cluster: the nginx pod got created, the nginx pod got started, Kubernetes is pulling the image — and you can search through the events, explore the API, and get a feel for a lot of the things DevOps engineers usually do within an organization. And under User Management, like I told you, you can create service accounts, roles, and role bindings — which we will do in the next class. In tomorrow's class we will use this same account and create a service account, roles, and role bindings as well. So stay tuned for tomorrow's video, and before then, try to create this account, because the more production-like, real-time experience you get, the more confident you'll be, and the better you will face interviews. I hope you enjoyed today's video; if you have any questions, please put them in the comment section, and don't forget to subscribe to my channel. Thank you so much, see you all in the next video, take care.

Hello everyone, my name is Abhishek and welcome back to my channel. Today is day 42 of our complete DevOps course, and in this class we'll be learning about Kubernetes monitoring. This class is not just going to be theory — I also have a GitHub repository, a practical one, with all the installation steps and everything we are going to try as part of the demo. The
reason why I've created this repository is that we are going to run a lot of commands on my minikube cluster, and people have sometimes asked me to put these commands in a GitHub repository or share them as part of a document. I'm anyway going to enhance this repository in the future and add more topics — advanced Kubernetes monitoring, writing your own metrics server — so I thought I would create one centralized GitHub repository, and this is it. What you can do is star this repository, so you can follow the future advancements I'm going to make to it. Perfect. Coming to today's topic: firstly, as usual, as in every class, I will explain the 'why' aspect — why you need monitoring, what the advantage of monitoring is, what Prometheus is, what Grafana is. After that, I'm going to show you how to install these tools. The only prerequisite for today's class is that you have a Kubernetes cluster — that can be anything: a real production Kubernetes cluster, or a development cluster like minikube, k3s, or k3d; anything is fine. We'll learn about the installation, and then finally I'm going to show you how to monitor the minikube cluster — I'm going to use minikube — with Prometheus, and visualize it with Grafana. This is going to be an interesting one: we are going to prepare a Grafana dashboard that shows metrics of the API server and of the deployments we have on our Kubernetes cluster — what their status is, what the replicas are — fetching a lot of information from the cluster. So watch the video till the end so that you understand the
concepts of Prometheus and Grafana as well as monitoring of Kubernetes clusters. So let's start with the 'why', and like I told you, I have documented all of this very well — even for the 'why monitoring' and 'why Prometheus' parts that I'm about to explain, you can use the same GitHub repository, including for answering interview questions. Firstly, why is monitoring required? Say your organization has one Kubernetes cluster. A single cluster is not much of a problem: because you are a single DevOps engineer, you can monitor your own cluster. But what happens when that one cluster is used by multiple teams, and one of the teams says something is going wrong — maybe 'the deployment is not receiving requests', or 'the service was not accessible for a short while'? How do you solve that, or at least understand it, as a DevOps engineer? And that is just one cluster. As the number of clusters increases — probably you have a dev environment, a staging environment, a production environment — your number of Kubernetes clusters keeps growing, and then you would definitely need an observability or monitoring platform. That is where Prometheus comes into the picture. Prometheus was initially developed at SoundCloud and then open sourced; today Prometheus is a completely open source platform, and anybody can use it on their clusters — even if you are running your Kubernetes cluster inside your enterprise, behind a firewall, you can use Prometheus, because it is open
source. Perfect. So then, if you have Prometheus, what is the requirement for Grafana? Grafana is basically for visualization. Prometheus can give you a lot of information through queries — you can use PromQL queries and get all the information about your Kubernetes cluster — but for better visualization (you'll understand when I show you the live demo) you want Grafana. Grafana can use many data sources, and Prometheus can be one of them. Perfect. Now, what is the architecture of Prometheus? Sometimes interviewers might ask you this question: can you explain the architecture of Prometheus? The diagram might look scary, but it is very, very simple. You have a Kubernetes cluster; when you install Prometheus, there is a component in Prometheus called the Prometheus server. This Prometheus server has an HTTP server, and Prometheus collects all the information from your Kubernetes cluster. By default, your Kubernetes cluster has an API server, and the API server exposes a lot of metrics about your cluster. Maybe five or six years ago you had to do a lot of configuration for this, but these tools are very mature now — they have even contributed back to Kubernetes — so a lot of metrics are exposed out of the box, and the number of configurations has gone way down. The built-in API server says: access me on /metrics — the API server IP followed by /metrics — and you get all the information about the status of the resources in your cluster, at least the default ones. Prometheus fetches this information and stores the entire thing in a time series database.
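To make the scraping concrete: Prometheus decides what to pull, including the API server's /metrics endpoint, from its scrape configuration. Below is a hand-written sketch of such a job — in practice a Helm chart or the Prometheus Operator generates this for you, so treat the job name and paths as illustrative:

```yaml
# prometheus.yml (fragment) — illustrative scrape job for the API server.
scrape_configs:
- job_name: kubernetes-apiservers
  scheme: https
  kubernetes_sd_configs:
  - role: endpoints                # discover targets through the Kubernetes API
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name]
    action: keep                   # keep only the "kubernetes" service endpoints
    regex: default;kubernetes
```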
What is a time series database? It simply stores the metrics of your Kubernetes cluster against timestamps. That covers the default Kubernetes resources — but what if you want more resources, beyond the out-of-the-box metrics the API server exposes? We are going to learn that today as well, so don't worry about it. Then Prometheus stores all of this on disk — HDD or SSD, whatever you are using — because a time series database has to persist its data somewhere, so it stores it on a node's HDD or SSD. Then there is alerting: you can configure Prometheus with Alertmanager and send notifications to different platforms — Slack, email, various things. What happens under the hood is: Prometheus pushes alerts to Alertmanager, and you configure Alertmanager to send notifications out to different places. Say my organization has decided to use Slack for alerting. You define what kinds of metrics or alerts have to be pushed — for example, if the API server is not responding within the limit I've set, then Prometheus sends an alert to Alertmanager saying the Kubernetes API server is showing flaky behavior, or failing to respond at times. And Alertmanager, depending on how many receivers you have configured — it's not just one thing; it can be email, Slack, Google Meet, anything — sends the notifications to multiple places. That's what Alertmanager does.
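An alerting rule of the kind just described — fire when the API server stops responding — could be sketched like this. The rule name and job label are illustrative; the Slack (or email) receiver itself lives in Alertmanager's own configuration:

```yaml
# Prometheus alerting rule (fragment), evaluated by the Prometheus server
# and pushed to Alertmanager when it fires.
groups:
- name: apiserver.rules
  rules:
  - alert: KubeAPIServerDown
    # "up" is the metric Prometheus sets to 0 whenever a scrape fails;
    # the job label must match the scrape job's name.
    expr: up{job="kubernetes-apiservers"} == 0
    for: 5m                        # tolerate brief flakiness before alerting
    labels:
      severity: critical
    annotations:
      summary: "Kubernetes API server is not responding to scrapes"
```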
the default configuration, right, but somebody can also go to the Prometheus server directly — Prometheus provides a very good UI — so you can go to the Prometheus UI and execute some PromQL queries (PromQL is just short for Prometheus Query Language) to get whatever information Prometheus has recorded. Or you can use dashboards like Grafana, or any other tool: just as AWS supports an API, Prometheus also supports an API, so you can use curl commands or Postman to get that information from Prometheus as well. So this is the high-level architecture of Prometheus, and as we keep learning, this architecture will look even simpler. Now, like I told you — why Grafana? Grafana is just for data visualization. When you query Prometheus, it gives you output in a raw format — say a tool returns output in JSON — and if you want to set up dashboards in your organization so that everybody can monitor things, JSON or any such format is difficult to read. If you have a lot of information, it is much easier to represent it in charts or diagrams, and that's what Grafana does for you: it provides very good visualization. It retrieves information from Prometheus — you configure Prometheus as a data source — and inside Grafana you can create some nice charts and visualizations. This is the layman's understanding. So now, without wasting any time, let's start our demo. I'm just going to take a Kubernetes cluster — I'll create it right from scratch, because many people have been asking me how to create a Kubernetes cluster, even though
I've explained that in the previous videos — no worries, I can do it one more time. I am using minikube for this demo. Most of the time I use kind for local development or local testing, because kind (Kubernetes in Docker) is a very lightweight cluster, but whenever you're doing demos like this that require more memory or more CPU, go with minikube. You can simply say minikube start, but on Windows or Mac that uses the Docker driver by default. For better and easier networking configuration, go with virtualization — in my case I prefer hyperkit. So this is the command I used to start my minikube cluster, and if you are on a Mac, definitely use this: hyperkit is the default virtualization available and supported on Mac, though you can also go with Oracle VirtualBox or another platform. The command I am using is minikube start, giving 4GB of memory and the driver set to hyperkit. If you don't provide the hyperkit driver, it would use Docker Desktop, and with Docker Desktop, when you expose your services or use Ingress, you might have to do some additional networking configuration. This doesn't take much time — probably a minute to create your Kubernetes cluster — and once it's ready, go back to this GitHub repository. In this repository I have created a folder called installation, with folders for both tools: use the prometheus folder to understand how to install Prometheus, and the grafana folder to understand how to install Grafana. I will also use the same GitHub
repository. I think this would need one more minute to create the minikube cluster using hyperkit... perfect, my Kubernetes cluster is ready. I've installed the latest version that is supported out of the box with my installation, which is 1.23.3 — I did not pass any additional configuration, so in your case you might get 1.25; that doesn't matter. If I just say kubectl get pods -A, you will notice it only has the default installations: the Kubernetes API server, controller manager, CoreDNS, etcd — only these things. So let me proceed with the installation of Prometheus. I would go with either Helm or operators as the installation option, and this is not just for these tools — as a general practice, operators offer a lot of advanced capabilities: you can do lifecycle management of your Kubernetes controllers using operators, and you can configure automatic upgrades (say tomorrow there is a new version of Prometheus; operators are capable of upgrading your Prometheus automatically), and they can do a lot more. We will talk about that when we discuss Kubernetes operators. In this class I am going to install using Helm. I will open the GitHub page I've shown you on a different screen and just copy-paste the commands. The first command you will see on the GitHub page — if you are watching the video, open the GitHub page in a new tab or on a different screen — is helm repo add prometheus-community, so what I am doing here is adding a Helm repo. The first thing you need to do when you are using Helm is to add the repo, right, so that's
what I'm doing here. In my case it already exists, but in your case, if it is not available, add it. Next, run helm repo update — say you added this repo a week back and there is now a new version of the Prometheus chart; always do helm repo update before you install anything. In my case it updated a few things successfully. Perfect. After that I simply install Prometheus — this step installs Prometheus along with the other required configuration, like the Prometheus ConfigMap: helm install prometheus prometheus-community/prometheus. You could also run this step directly if you already had the chart, but what happens if you skip helm repo update? You might install an old version of Prometheus. So I'm going to install Prometheus now... and it says Prometheus is successfully installed. Copy the information it prints and don't rush to the next command, because it gives you some important details — for example how to get the server URL, which matters if you're not using minikube, maybe you are on OpenShift or a different platform. Read whatever is provided there; if you want to do port forwarding, all of that is covered too. In this class I am going to explain all of these things, so you can skip it, but always try to read these notes. So I have done the helm install — perfect, Prometheus is installed. Let me verify: kubectl get pods — it should be running the Prometheus pods. Perfect, the Prometheus pods are coming up; if you see here, the Prometheus server container is still creating, even in this case
kube-state-metrics — I'll explain what kube-state-metrics is and why it's important, but you can see it is still starting, so let's give it a minute. The Prometheus server is running now, and kube-state-metrics is taking more time, so meanwhile let me explain what kube-state-metrics is. Like I told you, the Kubernetes API server exposes a few metrics about your cluster: it gives you information about the API server itself and about the default installations on your cluster, which I showed you a couple of minutes ago. But as you monitor your Kubernetes clusters, you might need more information — you might want details about all the deployments, pods, and services on your cluster, or you might want to know whether the running replica count matches the expected replica count for every deployment. So what the Kubernetes community — the people behind kube-state-metrics — have done is create a kube-state-metrics controller. You create a service for kube-state-metrics, and when you call it on its /metrics endpoint, it gives you a lot of information about your existing cluster, beyond what the Kubernetes API server provides. That is the importance of this specific controller. When you install using Helm, it is installed by default; if you are not using Helm — say you installed just a plain Prometheus deployment — I'll also show you what happens in that case and how to install kube-state-metrics on your own. Before that, let me just run kubectl get pods and see if everything is running — perfect.
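Later in the demo I edit the prometheus-server ConfigMap by hand to scrape this kube-state-metrics service; as a preview of where we're heading, the extra job in prometheus.yml looks roughly like this (a sketch only — the job name is my choice, and the target IP and NodePort are the values from this particular minikube demo, so substitute your own):

```yaml
# Sketch of the scrape_configs section inside the prometheus-server
# ConfigMap (kubectl edit cm prometheus-server). The kube-state-metrics
# target below uses this demo's minikube IP and NodePort -- yours differ.
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]        # default self-scrape
  - job_name: kube-state-metrics           # the new entry we add later
    static_configs:
      - targets: ["192.168.64.15:30421"]   # <minikube-ip>:<NodePort>
```

With an entry like this in place, Prometheus pulls the same metrics you can see in the browser on the /metrics endpoint, so PromQL queries over deployments, replicas, and services start returning data.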
Now everything is running and I am good to go. Next, let me look at the services: kubectl get svc. There are services like prometheus-server — created in ClusterIP mode — and again this one, which is very important, prometheus-kube-state-metrics, also created in ClusterIP mode, and then the Alertmanager I told you about. All of these are ClusterIP, but I want to expose the prometheus-server service and show you what the Prometheus server UI looks like and what queries you can run. For that, first convert this ClusterIP service into a NodePort service: go to the documentation and simply use the command I have provided, a kubectl expose service command. As soon as I run it, you'll see a new entry — let me do kubectl get svc again — called prometheus-server-ext, because in the command I said to expose the service under the name prometheus-server-ext. Now I can open the Prometheus server UI on the NodePort, 31110. Let me go back to the terminal and show you. Before that I need the IP address of my Kubernetes node, so I just run minikube ip — this is the IP address. Go to the browser and enter http://<minikube-ip>:31110 — and see, Prometheus is running. You have installed Prometheus on your Kubernetes cluster; step one is done successfully. Now you can run Prometheus queries — let's say you are not aware of Prometheus queries, you can just read the
Prometheus documentation, or you can use ChatGPT to give you a few Prometheus queries, and as soon as you execute those queries here — I'll show you, don't worry — you get information about your Kubernetes cluster. By default, like I told you, you only get the metrics exposed by your Kubernetes API server. If your developers have deployed an application — say an application called XYZ — on your cluster and you want its health checks, its liveness probe, or any other details of that particular application, at this point it will not be possible, because the API server and kube-state-metrics only give you information at a certain level. If you want more detail about your application, your developers should write a metrics server — they can use the Prometheus metrics libraries to expose a /metrics endpoint — and then you can scrape those metrics in Prometheus. I'll show you how: in Prometheus you have a ConfigMap, and inside that ConfigMap you configure the scraping. You tell Prometheus: apart from the metrics Abhishek is going to show me on the Grafana board or here, I also want the metrics of my custom application — the application my developers deployed — and, beyond the defaults that kube-state-metrics gives me, I want some additional metrics. But let's not bother about that for a while. For now you have Prometheus installed, and what I'm going to show you next is the advantage of this prometheus-kube-state-metrics service. What is the advantage of it? Maybe we'll first set up Grafana and then come back, so that you understand what the default metrics are that the Kubernetes API server is
giving, and what the advantage of kube-state-metrics is, which gives you additional metrics. So again, I'll go back to the document: if you go to the GitHub repo, there is a folder called grafana, and inside it a helm.md — just copy the commands step by step, and every time, verify that your command passes. Let me copy the first command to add the Grafana Helm repo. Grafana already exists in my case; in your case it probably doesn't. It is always good practice to do helm repo update as well, and then I proceed with installing Grafana using the Grafana Helm chart. This should install Grafana on my minikube cluster — it doesn't take much time — and you'll notice it is very important to follow the printed steps here, because you need to know the password for your Grafana. To log into Grafana, and to visualize the information from Prometheus on your Grafana dashboard, you need that password, and you can get it with the printed command: kubectl get secret — admin is the user, and this is the password in my case. Now let me expose Grafana the same way I did for Prometheus: if you run kubectl get svc, you'll notice there is a grafana service, but it is again running as ClusterIP, so let me expose it and create a NodePort-mode Grafana service. You'll notice a new entry — oh, there is a typo here, I think I need to fix that on the GitHub page as well, no problem — so now a new service entry is created: if I run kubectl get svc you'll see something called grafana-ext. In your production environments, or your dev and staging environments, you don't have to do this,
because you will definitely use Ingress — you'll have an Ingress controller, so you can create an Ingress (or a route) for your Grafana and use that; and if you are using an operator, that would even be created automatically. The NodePort for this service is 31281, so again: minikube ip gives the IP address, and 31281 is the port. Let me open a new tab and go to http://<minikube-ip>:31281 — now you should see the Grafana dashboard as well. It asks for the user ID and password; I explained how to get the password, so if you don't remember, go back — kubectl get secret in the namespace — and this was the password it generated for me. Enter the password, and now we are able to log into Grafana as well. So you have successfully set up Prometheus as well as Grafana for your minikube cluster — awesome. The first thing you should do as soon as you have Grafana is add Prometheus as a data source. Why is this required? Because, like I told you, Grafana is a visualization platform: it needs metrics — some information — from somewhere to build all the charts and diagrams. Setting this up is not difficult: go to the option called "Add your first data source" and click on it. You'll have options for multiple data sources — like I told you, Grafana supports a lot of data sources — and we are interested in Prometheus, so click on Prometheus, provide the address of your Prometheus (in my case I can copy it from the browser and paste it here), then Save & Test. That saves the configuration and also tests whether your data source is working. Here it said the data source is working, which means my Prometheus will be able
to — sorry, I mean Grafana will be able to retrieve information from Prometheus: it can use Prometheus as the data source and show you dashboards. So let me create a dashboard as well. Click on the Dashboards option — instead of creating your first dashboard from scratch, which takes a lot of work, the simplest thing is to come here and use Import. What Grafana has done for you is host multiple ready-made dashboards on grafana.com — dashboards are basically predefined queries — and anybody who uses a dashboard ID gets those queries configured automatically to pull information from Prometheus. If you are starting with Grafana, use the ID 3662 and click Load. (I first tried 3326 and got "dashboard not found" — my bad, the ID is 3662, sorry for that.) As soon as you click Load, choose the data source — that's Prometheus again — then just click Import, and you'll notice a beautiful dashboard is created for you, retrieving information from your minikube cluster. Now, how did this happen? As soon as you entered the ID 3662, Grafana pulled the pre-created template available for you at grafana.com, and that template runs queries — like the diagram I showed you in the initial slide: if you want to get information from the Prometheus server through Grafana, you have to run some PromQL
queries, right, and if you are a beginner you might not know PromQL — you may need more time to learn how to write those queries. So Grafana said: don't worry, we'll make it easy; we have noted the common, standard queries everybody needs and created a template for them, and that template ID is 3662. So 3662 in Grafana is a standard template that gets a lot of information from your Kubernetes cluster — see, now we get information about the Kubernetes API server, the Kubernetes nodes, all of those things. You can drill into one specific thing — let's just click on Kubernetes nodes. Here you have the information about the nodes; if you want to know, say, the uptime of your minikube cluster, it says the minikube cluster is always running, and if you hover over a panel, it shows you the underlying query as well. It is sometimes difficult to capture that view on screen — as I'm doing it, I'm not able to copy it to show you — but you can hover and read the queries to understand them: see, here it executed a query with sum, getting the time series for things like memory chunks and missed iterations. And this is a real-time dashboard. You can also run the same query I'm showing you here — the average over time of the up status of my Kubernetes node — directly in Prometheus, and you'll get the output as 100. You can execute those queries there as well, but like I told you, if you execute the queries here inside
the Prometheus UI, you get the information in a raw text format; for better visualization, go for Grafana — you can use the same queries in Grafana as well. Now, here I have the metrics for the native Kubernetes services — the API server and a few other components — but what if I'd like to know the deployment status, the running replicas, or the current status of a Kubernetes service from Prometheus? That's where the service I keep mentioning comes in: kube-state-metrics. Run kubectl get svc — this is the kube-state-metrics service, and it is going to give you a lot more information. Similar to before, I'll expose this kube-state-metrics service and show you. The command is kubectl expose with this service, followed by the target port. How do you find the target port? Look at the kube-state-metrics service: the target port is 8080 — so 8080, and let's name the new service kube-state-metrics-ext. Once I run this, you'll get a new entry for prometheus-kube-state-metrics as well — let me do kubectl get svc one more time — and now kube-state-metrics is exposed on NodePort 30421. Now see the magic: what happens if I use the same minikube IP address, but on port 30421? Let me do that — what is the minikube IP? http://192.168.64.15 — or let me copy it from the cluster — colon 30421. Let me increase the font — it says you've reached kube-state-metrics, and if you click on metrics, you get metrics for a lot of things on your Kubernetes cluster. Like I told you — this is not actually JSON; this is the Prometheus metrics text format, where you
are getting a lot of information. Now you can use this same information inside your Grafana as well, or take any of these queries — for example, I'd like to know the status of deployments. You can take a metric from here, run it as a Prometheus query, and as soon as you execute it, see: this is the information Prometheus has recorded, and what Grafana will do is take the same information and present it to you in a visual pattern. That's the only difference — the information comes from Prometheus itself; Grafana just uses Prometheus as a data source and presents that information in a better, visual format. So what you have done until now: you set up Prometheus, you set up Grafana, and you used the default dashboard in Grafana — the template with ID 3662 — which retrieves a lot of information about your Kubernetes cluster: node uptime, the status of the Kubernetes API server, the status of etcd, all of those things. If your organization requires more information, then as a DevOps engineer you can expose kube-state-metrics and get a lot of information on this specific endpoint. What is the endpoint? 192.168.64.15 — my minikube IP — followed by the NodePort of my kube-state-metrics service, then /metrics. That is the endpoint where you get all this information. Now you might ask: okay, I am doing this in the browser, but how do I get this information directly inside my Prometheus? Again, it's not
rocket science. What you need to do is run kubectl get cm — there is a ConfigMap called prometheus-server. Open it: kubectl edit cm prometheus-server. Here you have information about all the data your Prometheus is scraping — scrape information is simply the information Prometheus pulls from its targets. You have a prometheus.yml file with scrape_configs, and by default it only gets information from localhost:9090. But I want the new information to come from this specific endpoint, 192.168.64.15:30421, so I create a new entry for the kube-state-metrics endpoint: add a new job_name — call it kube-state-metrics or anything you like — then static_configs, and provide the target IP address, the same information as before. Now the important point: this covers kube-state-metrics and the default Prometheus metrics, but what about the applications my developers are writing? My developers write a bunch of applications — how do I get the health of those applications, how do I know whether they are receiving requests and sending out a response for each one? You should ask your developers: just as kube-state-metrics exposes all its metrics about the default Kubernetes objects on a /metrics endpoint, your developers should write a metrics server using the Prometheus client libraries. There is very good
documentation available for that. As a DevOps engineer you are not required to write these things, but if you are interested, you can just search for how to write a Prometheus metrics server — I will also explain that in a future class. Once your developers or DevOps engineers write it, then it is all about going back and adding your application's metrics endpoint here, the same way. So this is very simple, and this is how we do monitoring and visualization using Prometheus and Grafana. I hope you understood what we have done today, and if you want to replicate the same behavior and try these things at your end, you can follow the documentation where I have detailed every step — everything we did today except perhaps the kube-state-metrics part, which you can pick up from the video because it is just one command. So this is the video for today: if you liked the video, click the like button; if you have any questions or feedback for me, put them in the comment section. I hope you enjoyed the video — please share it with your friends. Thank you so much, I'll see you in the next video. Take care, everyone, bye.

Hello everyone, my name is Abhishek and welcome back to my channel. Today we are at day 40 of our complete DevOps course, and in this class we'll be learning about custom resources in Kubernetes. Before we go to the topic, a quick announcement: if you haven't subscribed to my channel, definitely subscribe, because I am going to announce my future roadmap in the next couple of days — what I'm going to do after this complete DevOps course, whether there are going to be master classes, whether there are going to be more free courses. If you want early access to that, or if you want to follow the content right from day one, then
definitely subscribe to my channel to get those interesting updates. Okay, so without wasting any time, let's jump into today's topic: custom resource definitions and custom resources. At a high level we will understand what a custom resource definition is — the shorthand is CRD, which is very popular; people usually say CRD rather than custom resource definition when writing or talking about it, simply because it's easier to say. Then we will talk about custom resources, and then we will try to understand what a custom controller is. These are the three things we will cover today. Before I explain the topic, let me give you a high-level overview so you understand what we are going to talk about. This is your Kubernetes cluster, and within it, by default, there are resources that come out of the box: for example, you can create a Deployment resource — you write a deployment YAML file, a deployment is created for you, an application is created for you, taken care of by a controller in Kubernetes — or you have Service in Kubernetes, or Pod, or ConfigMap, or Secrets. These are all native Kubernetes resources; all of them come out of the box. But beyond these out-of-the-box resources, Kubernetes says: what if you want to extend the Kubernetes API — if you want to introduce a new resource to Kubernetes? This is very important. Why would you want to introduce a new resource? Because you feel the functionality you need inside Kubernetes is not supported by any of these resources. For example, let
me give you a basic example. Let's say you feel that Kubernetes does not support advanced security capabilities. You have projects like kube-hunter, Kyverno, or kube-bench — all of these try to address security-related problems, and they say: we want to introduce a new resource into Kubernetes. Or you have applications like Argo CD, which say: we want to introduce GitOps capabilities to Kubernetes; or you have Flux, or Spinnaker. There are hundreds of applications — if you go to the CNCF landscape, it is full of custom Kubernetes controllers. So whenever you want to introduce a new resource to Kubernetes — whenever you want to extend the Kubernetes API to introduce a new resource — this is the mechanism you use. That's the high-level overview, and there are two actors here: actor number one is the DevOps engineer, and actor number two is the user. Deploying the custom resource definition and the custom controller is the responsibility of the DevOps engineer, while deploying the custom resource can be the action of the user — or of the DevOps engineer as well. These are the three things we will talk about today, and we'll try to understand the concept through these actors. So why do you want to extend the Kubernetes API? I just explained: whenever you want to introduce a new resource to Kubernetes — Argo CD, Flux, Keycloak, or any of the many projects you'll find at CNCF — you need a custom resource definition, a custom resource, and a custom controller. Let's try to understand each one of them and deep-dive into this concept. So firstly, again, let's say this is your Kubernetes cluster.
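To make the idea concrete before we dive in: a custom resource definition is itself just a YAML manifest you apply to the cluster, and once applied, the API server starts accepting your new resource kind. Here is a minimal sketch — the group, kind, and spec fields are hypothetical placeholders for illustration, not from Argo CD or any real project:

```yaml
# Minimal CustomResourceDefinition sketch (hypothetical example.com group).
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com       # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:          # validation schema for the new kind
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```

After a kubectl apply of a file like this, users could create kind: Widget objects, and a custom controller would watch for them and act on them — which is exactly the split between the three pieces we're about to discuss.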
And what you have done is this: you learned the basic concepts of Kubernetes, or your organization has implemented Kubernetes. They have deployed the application as a Kubernetes Deployment, created a Service for it, and also created an Ingress resource for it, and this Deployment might have used some ConfigMaps and Secrets. Everything is fine. The user was able to access the application through the Ingress — let's say there is an Ingress controller, the traffic is flowing in and out of the Kubernetes cluster, and your application is being used. No problem at all.

But after a while, you as a DevOps engineer said, "let me explore Kubernetes more." And as you explored, you realized there is a world beyond the native Kubernetes resources. You realized there is something called Istio, which adds service mesh capabilities to your Kubernetes cluster. You realized there is an application called Argo CD, which adds GitOps capabilities. You realized there is an application called Keycloak, which provides tight identity and access management — OAuth and OIDC capabilities — for your cluster. Similarly, there are multiple applications that enhance the behavior of your Kubernetes cluster, and you have also found security-related projects like Kyverno and kube-bench. So you have realized there is a world beyond the existing Kubernetes resources.

Now, how does Kubernetes support these resources? Because the number of them keeps growing. There is not just one — Istio, Argo CD, Keycloak — there are multiple companies in the market saying, "we will provide advanced capabilities to Kubernetes clusters; apart from the basic Kubernetes resources, use our resource to get feature X, use our resource to get feature Y, use our resource to get feature Z." It can be load balancing, security, firewall, API gateway — every company is coming into the Kubernetes space saying, "we will add new capabilities to Kubernetes." How does Kubernetes handle this? Kubernetes cannot go to each of these applications and add their logic into the Kubernetes control plane components, right? Kubernetes has accommodated logic for Deployment, for Service, for ConfigMaps and Secrets, but accommodating logic for all of these projects would be practically impossible for the creators of Kubernetes, because the number of such applications has reached into the thousands — there are many custom Kubernetes controllers in the market, each solving some problem on Kubernetes.

So what Kubernetes said is: these are the defined resources — we support only a limited set out of the box. If you want to add additional capabilities, we will allow users to extend the Kubernetes API. Understand this point carefully: Kubernetes said, "we will allow you to extend the capabilities of Kubernetes — extend the Kubernetes API."
So what they are saying is that you can add a new API resource to Kubernetes, and using that resource your users can extend their clusters — Kubernetes itself just won't maintain the logic for it. To extend the API this way, there are three resources in Kubernetes. Now that you understand the problem: the three are the CRD, like I told you at the start, then the CR, and then the custom controller. Let's deep dive into each of them.

First one: CRD. CRD is nothing but custom resource definition. That means you — and when I say "you," I mean, for example, a company like Istio that says "we want to enhance the capabilities of Kubernetes" — are defining a new type of API for Kubernetes. How do you define it? You submit a custom resource definition to Kubernetes. So the people at Istio will create a new custom resource definition. It's a YAML file, and in this YAML file you define what a user is allowed to create.

For example, if you are a user creating a deployment.yaml file, you have mentioned a few things in it: the apiVersion, the kind, the spec, and inside the spec the template, the pod, the container, the container port. But beyond this, how does Kubernetes know whether the deployment.yaml you wrote is correct or not? Kubernetes has a schema with all the fields a Deployment supports — tomorrow you can add volumeMounts, but if you add a made-up field called xyz, Kubernetes will immediately throw an error like "field xyz is not known." If you doubt this, go to your deployment.yaml, write a new field called xyz inside the spec, and try to create it — Kubernetes will immediately throw an error at you. How can Kubernetes throw this error? Because Kubernetes knows the definition of a Deployment. Out of that definition you can use whatever is required and omit whatever is not, but that is the standard definition Kubernetes holds.

It is exactly the same for custom resources. What is a custom resource? It is a new resource that someone submits to Kubernetes. But before anyone can submit one, Kubernetes asks you to define the new type of API using a custom resource definition. The people at Istio — since we are taking Istio as the example — provide a complete YAML file with all the possible options they support. So a CRD is a YAML file that introduces a new type of API to Kubernetes, and it lists all the fields a user can put in the custom resource. For deployment.yaml there is a resource definition inside Kubernetes — a native resource validated by a native resource definition. Because here we are dealing with custom resources, we call this one a custom resource definition, and whatever the user submits is called a custom resource.

Let's understand this in detail by comparing it with deployment.yaml itself. You are a user creating a deployment.yaml: inside it you say apiVersion: apps/v1, then kind, metadata, spec; inside the spec you have the template, the containers, all of those things. Kubernetes has a resource definition in the API server that validates whether the resource you created is right or wrong. Similarly, in the custom resource case, the user creates a custom resource. Since we are talking about Istio, let's take Istio's own example: Istio has a custom resource called VirtualService. The user will put an Istio-related apiVersion, then kind: VirtualService, then metadata obviously, and inside the spec a few properties. Nobody remembers these by heart — you go to the Istio documentation and see what YAML is required for an Istio VirtualService; there are plenty of examples there.
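To make this concrete, here is a sketch of what such a CRD roughly looks like. This is a minimal, hypothetical example — the group `example.com` and the fields are invented for illustration, and real CRDs like Istio's are much larger — but it shows where the validation schema lives:

```yaml
# Hypothetical CRD: registers a new API type with the cluster.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: virtualservices.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: VirtualService
    plural: virtualservices
    singular: virtualservice
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        # This schema is what the API server validates custom resources
        # against -- the same role the built-in schema plays for a Deployment.
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                hosts:
                  type: array
                  items:
                    type: string
```

Once this CRD is applied, the API server starts accepting (and validating) objects of kind `VirtualService` in the `example.com` group.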
So what is this VirtualService? It is a custom resource. Now, who validates this custom resource? Like I told you, the custom resource is validated against the custom resource definition — the CRD — that the Istio people have created and that you, as a DevOps engineer, have deployed onto your Kubernetes cluster to extend it. So the CRD has two jobs: one is to extend the Kubernetes API, and the other is to validate custom resources. Now you can see the parallel between a native resource and a custom resource — the process is the same. There you create a deployment.yaml, which is validated against the built-in resource definition; here the user creates a VirtualService YAML, which is validated against the custom resource definition.

Now, if you think this is done — it is not done yet. The user has submitted a CR, it was validated against the CRD, and the CR is created inside your Kubernetes cluster. If you think this is over, it is not over. Take the same Deployment example: after your deployment.yaml is validated against the Deployment resource definition, there is something inside Kubernetes called the deployment controller. That deployment controller is the one that takes care of creating a ReplicaSet, and the ReplicaSet controller creates the Pods. So there is a process happening, and something has to drive it.
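For reference, a user-facing Istio VirtualService custom resource looks roughly like this — the shape is paraphrased from Istio's examples, so check the Istio docs for the current apiVersion and the full field reference before using it:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
  namespace: abhishek
spec:
  hosts:
    - my-service.example.com
  http:
    - route:
        - destination:
            host: my-service
            port:
              number: 8080
```

Compare this with a deployment.yaml: same structure (apiVersion, kind, metadata, spec), but the kind and the spec fields come from the CRD rather than from Kubernetes itself.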
Who is doing all of this? A Kubernetes controller is doing it. So similarly, here there has to be a custom controller — you can call it a custom Kubernetes controller — already deployed inside your Kubernetes cluster, so that once you deploy your custom resource, this controller watches for the CR and performs some action.

Now let's put this into a diagram and understand the flow. This is a Kubernetes cluster. First of all, the DevOps engineer — that's most of the people watching this video, though some of you might be developers or someone else — does step one: if the organization decides to use Istio, for example, they deploy the CRDs onto the Kubernetes cluster. How? They go to the Istio documentation, find the CRDs, and deploy them using plain Kubernetes manifests, Helm charts, or an operator — anything is possible. So the DevOps engineer has deployed a new CRD; since we are talking about Istio, let's call it the VirtualService CRD. The VirtualService CRD is now on your cluster.

Now there is another actor, and this actor is the user — you can consider them a developer, another DevOps engineer, anyone. Because this user wants to use Istio's capabilities inside the cluster, they also go to the Istio docs and create a custom resource. Let's say the user has a namespace called abhishek; inside this abhishek namespace they create an Istio VirtualService custom resource — call it "vs." Like I told you, before it gets created the API server will intercept the request and validate it against the VirtualService CRD: if the request is correct it passes through; if not, it fails. Let's say the user followed the documentation and created a proper custom resource, so it is validated and created inside your Kubernetes cluster.

But up to here you have only deployed a custom resource, and it will just sit there. For example, if you deploy an Ingress resource without an Ingress controller, what happens? Nothing — like we discussed in the previous class, the Ingress resource is of no use by itself. Same thing here: when you deploy a Deployment, there is a deployment controller doing something for you, but this custom resource is being watched by no one so far, and if nobody is watching it, nothing is going to happen. Someone has to watch this custom resource. So action two of the DevOps engineer is to deploy the custom controller. Again, how? Go to the documentation and deploy it using Helm charts, plain manifests, or an operator — whatever process the DevOps engineer follows within the organization. The controller can be deployed cluster-wide or just for a specific namespace, depending on what the controller supports; since we are dealing with the abhishek namespace, say the DevOps engineer deploys the custom controller there. Now the custom resource is picked up by the controller, and the controller performs the required action.
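The whole flow above can be summarized as a handful of commands. These are illustrative placeholders — the repo URL, chart names, and file names are invented to show the shape of the workflow, not real artifact names:

```shell
# Step 1 (DevOps engineer): install the CRDs and the custom controller.
helm repo add example https://charts.example.com
helm install example-crds example/crds                 # registers the new API types
helm install example-controller example/controller     # deploys the watcher

# Step 2 (user): create a custom resource in their own namespace.
kubectl apply -n abhishek -f virtual-service.yaml

# The API server validates the CR against the CRD; the controller then
# notices the new object and performs the actual work.
kubectl get crd                                        # list registered custom API types
```

The exact commands differ per project, but the order — CRD, controller, then CR — is the part that stays the same.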
The required action here is whatever Istio provides — service mesh, mutual TLS, east-west traffic management, whatever configuration you asked for; let's not go into those details. The Istio controller you deployed reads the custom resource and performs that action.

Whenever you get confused about custom resources or custom resource definitions, the simplest thing to do is map them onto a native resource. Whether it is a native resource like a Deployment or a custom resource, the only difference is that for a custom resource you deploy all the required pieces yourself, whereas for a Deployment those pieces are available out of the box on the cluster. The steps are common for any such application — Istio, Argo CD, anything. Step one: deploy the custom resource definition to extend the capabilities of your Kubernetes cluster. Step two: deploy the custom controller. Step three: the users who want the feature deploy the custom resource — you might have 100 namespaces and only 20 of them want this feature, so those users deploy the custom resource in their namespaces. Compare with Deployment: by default the cluster has a resource definition for Deployment, you as a user create a Deployment which is validated against it, and instead of a custom controller there is a native Kubernetes controller. That is the whole concept of custom resource, custom resource definition, and custom controller.

Now some interesting points, just for your understanding: how does one write a custom controller? The most popular way is Golang. You can write one in Python, you can write one in Java as well, but the community's preferred medium for writing custom Kubernetes controllers is Golang, and one of the primary reasons is that Kubernetes itself is written in Golang. One of the popular Kubernetes client libraries is client-go. Today you also have a Python client, a Java client, and others, but initially, when Kubernetes was developed, client-go was the library that let you interact with the Kubernetes API. And whenever I say you want to extend the capabilities of Kubernetes, that means your controller has to talk to the Kubernetes API — just like kubectl talks to the Kubernetes API — and client-go is the library that lets your code do that. So initially it was only client-go; later Python, Java, and others came along, because Kubernetes exposes a standard API. But because the community started with Go, Kubernetes itself is written in Go, client-go is mature with a very good community, the whole CNCF ecosystem leans on Go, and Go has features like easy concurrency (which we'll cover whenever we learn Golang), most of these custom controllers are written in Golang.
So even when you write a new custom controller today, the preferred way is to use Golang. And how do you write one, at a very high level? I'm not going into the details, because many of our subscribers don't know Golang or are just beginning with Kubernetes, so if you want a detailed class on this, put it in the comment section. What you will do is use Golang as your programming language and use client-go to interact with the API server, and the entire process depends on setting up watchers. By default, Kubernetes has in-built watches for its own resources — effectively a Deployment watcher, a Service watcher — so whenever you perform an action like create, update, or delete, Kubernetes comes to know through those watches. But if you want to write a custom Kubernetes controller, you have to set up watchers for your new resource. When I started writing Kubernetes controllers back around 2015, the frameworks were not strong — you had to create your own watchers and everything from scratch — but now there are many frameworks, and one of the very popular ones is controller-runtime, a Golang package supported by the Kubernetes project itself, which sets these watchers up for you.

So what the people at Istio, for example, might have done is set up watches for the VirtualService resource. Whenever a user creates, updates, or deletes a VirtualService, the watch notifies the client-go machinery — there is a package called a reflector involved; I don't want to go too deep here. Once the controller learns that a new VirtualService was created, the event goes into a FIFO work queue, and the controller processes each object in that queue one by one. As your controller processes each object, it performs the required work on the cluster — in this case, it applies the virtual service configuration. That is the very high-level picture of writing a custom controller. If you are interested in more, go to the kubernetes/sample-controller project — I'll put the link in the description; it's very good documentation that walks you through writing a sample custom controller — and use controller-runtime as your framework. If you want to write operators, you can use the Operator SDK as well. What an operator is and how it differs from a controller is not today's topic, so let me not go into that. These are just some interesting extras I wanted to explain.
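The watcher → work queue → worker loop described above can be sketched with plain Go channels. This is only an illustration of the pattern — a real controller would use client-go or controller-runtime for the watch and queue, and the `event` and `reconcile` names here are invented for the sketch:

```go
package main

import "fmt"

// event represents a change to a custom resource, as a watch would deliver it.
type event struct {
	verb string // "create", "update", or "delete"
	name string // name of the custom resource object
}

// reconcile is where a real controller would compare the desired state (the
// CR's spec) with the actual cluster state and converge them. Here it just
// returns a record of the work it would do.
func reconcile(e event) string {
	return e.verb + "/" + e.name
}

func main() {
	// The watch feeds a FIFO work queue; a worker drains it in order.
	queue := make(chan event, 8)
	for _, e := range []event{
		{"create", "vs-1"},
		{"update", "vs-1"},
		{"delete", "vs-1"},
	} {
		queue <- e
	}
	close(queue)

	// Worker loop: process each queued object one by one.
	for e := range queue {
		fmt.Println(reconcile(e))
	}
}
```

In a real controller the queue also handles retries and deduplication, which is exactly what client-go's workqueue package and controller-runtime give you for free.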
Most DevOps engineers will not write custom controllers or CRDs. But if you are in a Kubernetes developer role, or your organization requires you to write a new custom controller, then you need to know all of this — in my role I deal with Kubernetes controllers day in, day out, so I know these things. Even so, writing just a custom resource definition is not difficult; you can write one quite easily.

Now let me show you one example, just for your understanding, of what a custom Kubernetes controller looks like and how to deploy one. I'll stop sharing this screen and share my other one. I hope the topic is clear to you; since we have been discussing Istio, let's take Istio itself as the example. Go to GitHub and search for istio/istio — this is the Istio repository. And if you want a list of popular custom controllers in Kubernetes, the best place is CNCF — the Cloud Native Computing Foundation. Inside it you'll find a lot of projects — for example, I work on Argo CD — and they have around 20 graduated projects, 37 incubating, and 93 or so more. Many of these, like Argo CD, Istio, Backstage, and Buildpacks, are custom Kubernetes controllers that are very popular in the community. You can think of CNCF as a Linux Foundation community that backs and supports all of these projects — if your project gets incubated with CNCF, it gets a lot more attention and a lot more support. CRI-O is one, CoreDNS is one, Crossplane — which we discussed in one of the classes — is one, and Prometheus, which I think most of you know, is another. These are all popular custom Kubernetes controllers in the community.

You can go to any of these projects; in my case, because I've been talking about Istio this whole video, let's open Istio. If you are a DevOps engineer and your organization has decided to use Istio, you can read through the code — it's open-source Golang code — but honestly, if you are a beginner it will be slightly difficult even starting from the pkg folder. So instead, use the example I put in the description: the kubernetes/sample-controller repository. That's the GitHub page where the Kubernetes maintainers explain how to write a sample controller. Like I told you, you'll use a few packages — there is a code-generator, there is controller-runtime — so make use of those and just follow the documentation: it explains how to write the Go code for the controller as well as how to write a custom resource definition, and there are a bunch of examples of how to write a custom resource. Follow that specific repository if you are interested. But like I keep telling you, for most DevOps job roles this part is not important; you only need to understand the concepts.
How do you deploy a custom resource and a controller, and how do they work in the back end? Whatever I explained in the theory part should be more than enough for you. But if you are in a challenging DevOps role where you also take care of writing Kubernetes controllers — where you are an open-source contributor to one of these projects, like I am — then you might need that deeper knowledge; if not, you can stop at the theory and just see how to deploy these custom resources. So this is Istio's GitHub repository, and you can go through the code there, but what you really need is the official documentation. Go to istio.io — that's the official documentation — and you will find the installation page. Basically there is a Helm chart, and the Helm chart takes care of deploying the custom resource definitions as well as your custom controller — both things get deployed. Inside the documentation, go to Setup, then the installation guides — if you are a DevOps engineer, these are the steps you follow. Take any of the methods; let's install with Helm, because Helm is quite popular. All you need to do is copy the commands: helm repo add for Istio, then helm repo update, and you will see the Istio custom controllers created in your namespace along with the CRDs. After that it's up to the users to go and deploy, for example, an Istio VirtualService.

Let me quickly show you. I'll just copy the command here — if you have a Kubernetes cluster handy you can follow along; my network looks slow today, but I'll try. So I add the Helm repo for Istio, and it says "Update Complete. Happy Helming!" — perfect. After this, the installation steps are to install the Helm releases. First, create a namespace for Istio — I'm creating the namespace; just follow the documentation. As a DevOps engineer you can follow the documented steps for installing the custom resource definitions and the custom Kubernetes controller, but beyond that you have to understand the concepts: how Istio works, what a VirtualService is in Istio — all of those things you have to know yourself. Your role is not just to deploy the custom resource definitions and the custom controller: if your teammates have any problem with Istio, you have to solve it. Not the Golang code of the controller — but if they say "my Istio VirtualService is not working," then you go to the Istio controller and look at its logs, check whether the VirtualService resource was created properly, check its status, describe the VirtualService resource. That kind of debugging is expected from you as a DevOps engineer. Now, after the Helm install, you will notice the CRDs are created — just run "kubectl get crd" and you will see all the Istio-related CRDs. In Istio's case there are a lot of custom resources; don't worry about that. This is how the custom resource definitions get created; after that, again, you follow the documentation.
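For reference, the Helm-based install shown here follows this general shape. These commands are paraphrased from the Istio Helm install guide — double-check the current docs for the exact repo URL and chart names before copying:

```shell
# Add the Istio Helm repo and refresh the index
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update

# Namespace for Istio's control plane
kubectl create namespace istio-system

# 'base' installs the CRDs; 'istiod' installs the control plane (the controller)
helm install istio-base istio/base -n istio-system
helm install istiod istio/istiod -n istio-system --wait

# Verify the new API types are registered
kubectl get crd | grep istio.io
```

Note how the chart split mirrors the theory: one install step registers the CRDs, and a separate one deploys the controller that watches them.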
And by running the istiod install command, the Istio controller itself gets created. There is no rocket science here — all you need to do is follow the documentation and create every piece of configuration: the custom resource definitions and your custom Kubernetes controller. This was Helm with Istio, but the same process applies for Argo CD, for Prometheus, for anything: you install their Helm charts, which deploy their custom resource definitions and also their custom controller, which is itself just a Deployment. If you have any questions, post them in the comment section. But like I'm telling you, as a DevOps engineer one of your primary responsibilities is this, plus the debugging after it: if your organization is using Istio, you have to read through the Istio documentation completely and understand Istio — just the deployment is not your whole job. You have to understand everything here: how the service mesh works, what configurations are required, destination rules if users have questions about them, what the Envoy proxy is in Istio. All of these things you have to know as a DevOps engineer, apart from the installation and configuration. The install might take some time, and there's no point in just watching the deployment finish — I mainly wanted to explain the concept to you. I hope you liked today's video. If you have any questions, put them in the comment section, and don't forget to subscribe to my channel. Thank you so much for watching the video today; I'll see you in the next one. Take care, everyone. Bye!