hello and welcome to this complete Kubernetes course. The course is a mix of animated theoretical explanations and hands-on demos for you to follow along, so let's quickly go through the topics I'll cover. The first part gives you an introduction to Kubernetes: we'll start with the basic concepts of what Kubernetes actually is and what problems it solves, and in the Kubernetes architecture part you'll learn how Kubernetes works by going through all the main components. After learning the main concepts we will install minikube for a local Kubernetes cluster, and go through the main commands for creating, debugging and deleting pods using kubectl, which is the Kubernetes command-line tool. Once you know the main kubectl commands, I'll explain Kubernetes YAML configuration files, which we will use to create and configure components. Then we'll go through a practical use case where we deploy a simple application setup in a local Kubernetes cluster, so you get your first hands-on experience with Kubernetes and feel more confident about the tool. In the second part we'll go into more advanced and important concepts, like organizing your components using namespaces, making your app available from outside using Kubernetes Ingress, and Helm, which is the package manager for Kubernetes. In addition we'll look at three components in more detail: first, how to persist data in Kubernetes using volumes; second, how to deploy stateful applications like databases using the StatefulSet component; and lastly, the different Kubernetes service types for different use cases. If you like the course, be sure to subscribe to my channel for more videos like this, and check out the video description for more related courses on Udemy etc. If you have any questions during or after the course, or you simply want to stay in touch, I would love to connect with you on social media, so be sure to follow me there as well.

So in this video I'm going to explain what Kubernetes is. We'll start with the official definition and what Kubernetes does, then look at a problem-solution case study: basically why Kubernetes even came around and what problems it solves. So let's jump right into the definition. Kubernetes is an open-source container orchestration framework that was originally developed by Google. At its foundation it manages containers, be it Docker containers or containers from some other technology, which basically means Kubernetes helps you manage applications that are made up of hundreds or maybe thousands of containers, and it helps you manage them in different environments, like physical machines, virtual machines, cloud environments or even hybrid deployment environments.

So what problems does Kubernetes solve, and what are the tasks of a container orchestration tool? To go through this chronologically, the rise of microservices caused increased usage of container technologies, because containers offer the perfect host for small, independent applications like microservices. The rise of containers and of microservice technology resulted in applications that are now comprised of hundreds or sometimes even thousands of containers, and managing those loads of containers across multiple environments using scripts and self-made tools can be really complex and sometimes even impossible. That specific scenario caused the need for container orchestration technologies.
So what do orchestration tools like Kubernetes actually do? They guarantee the following features. One is high availability; in simple words, high availability means the application has no downtime, so it's always accessible by the users. The second is scalability, which means the application has high performance: it loads fast and users get very high response rates from the application. And the third is disaster recovery, which basically means that if the infrastructure has problems, like data is lost, servers explode or something bad happens in the data center, the infrastructure has to have some mechanism to back up the data and restore it to the latest state, so that the application doesn't lose any data and the containerized application can run from the latest state after the recovery. All of these are functionalities that container orchestration technologies like Kubernetes offer.

So in this video I want to give you an overview of the most basic, fundamental components of Kubernetes, just enough to actually get you started using Kubernetes in practice, either as a DevOps engineer or a software developer. Kubernetes has tons of components, but most of the time you're going to be working with just a handful of them. I'm going to build a case of a simple JavaScript application with a simple database, and show you step by step how each Kubernetes component helps you deploy your application and what the role of each component is.

Let's start with the basic setup of a worker node, or in Kubernetes terms just a node, which is a simple server, a physical or virtual machine. The basic component, or the smallest unit, of Kubernetes is a pod. A pod is basically an abstraction over a container: if you're familiar with Docker containers or container images, what a pod does is create a running environment, or a layer, on top of the container. The reason is that Kubernetes wants to abstract away the container runtime or container technology, so that you can replace it if you want to, and so that you don't have to work directly with Docker or whatever container technology you use; you only interact with the Kubernetes layer. So we have an application pod, which is our own application, and it will use a database pod with its own container. An important concept here is that a pod is usually meant to run one application container inside of it. You can run multiple containers inside one pod, but usually that's only the case when you have one main application container and a helper container or some side service that has to run inside that pod. As you see, this is nothing special: you just have one server and two containers running on it, with an abstraction layer on top.

Now let's see how they communicate with each other in the Kubernetes world. Kubernetes offers a virtual network out of the box, which means that each pod gets its own IP address (not the container, the pod gets the IP address), and each pod can communicate with the others using that IP address, which is an internal IP address, obviously not a public one. So my application container can communicate with the database using that IP address. However, pods, and this is also an important concept, are ephemeral, which means they can die very easily. For example, I could lose the database pod because the application inside the container crashed, or because the node, the server I'm running it on, ran out of resources.
The pod will die, and a new one will be created in its place, and when that happens it gets assigned a new IP address, which is obviously inconvenient if you're communicating with the database over its IP address, because now you'd have to adjust it every time the pod restarts. Because of that, another Kubernetes component called a Service is used. A service is basically a static or permanent IP address that can be attached, so to say, to each pod: my app will have its own service and the database pod will have its own service. The good thing is that the lifecycles of the service and the pod are not connected, so even if the pod dies, the service and its IP address stay, and you don't have to change that endpoint anymore.

Now obviously you'd want your application to be accessible through a browser, and for that you have to create an external service, a service that opens communication to external sources. But you wouldn't want your database to be open to public requests, so for that you'd create an internal service; this is simply a type of service that you specify when creating one. However, if you notice, the URL of the external service is not very practical: what you basically get is the HTTP protocol with a node IP address (of the node, not the service) and the port number of the service, which is good for test purposes if you want to try something out quickly, but not for the end product. Usually you want your URL to use a secure protocol and a domain name, and for that there is another Kubernetes component called Ingress. So instead of going to the service, the request first goes to the Ingress, which then does the forwarding to the service.

So now we've seen some of the very basic Kubernetes components, and as you see this is a very simple setup: one server, a couple of containers and some services, nothing special where Kubernetes' advantages or the actually cool features really come forward, but we'll get there step by step, so let's continue. As we said, pods communicate with each other using a service, so my application will have a database endpoint, let's say called mongodb-service, that it uses to communicate with the database. But where do you usually configure this database URL or endpoint? Usually you'd do it in the application properties file or as some kind of external environment variable, but usually it ends up inside the built image of the application. So if the endpoint or the service name changed, for example to mongodb, you'd have to adjust that URL in the application, which usually means rebuilding the application with a new version, pushing it to the repository, pulling the new image into your pod and restarting the whole thing: a bit tedious for a small change like a database URL. For that purpose Kubernetes has a component called ConfigMap, which is basically your external configuration for your application. A ConfigMap would usually contain configuration data like URLs of the database or of other services you use, and in Kubernetes you just connect it to the pod, so the pod gets the data the ConfigMap contains. Now if you change the name or the endpoint of the service, you just adjust the ConfigMap, and that's it; you don't have to build a new image or go through that whole cycle.
Now, part of the external configuration can also be the database username and password, which may also change in the application deployment process, but putting a password or other credentials in a ConfigMap, in plain text, would be insecure, even though it's external configuration. For this purpose Kubernetes has another component called Secret. A Secret is just like a ConfigMap, but the difference is that it's used to store secret data, credentials for example, and it's stored not in plain text but in base64-encoded format. So a Secret would contain things like credentials; the database user you could also put in a ConfigMap, but what's important is that passwords, certificates, things you don't want other people to have access to, go in the Secret. And just like the ConfigMap, you connect it to your pod so the pod can actually see and read that data. You can use the data from a ConfigMap or Secret inside your application pod, for example as environment variables or even as a properties file.

So to review, we've now looked at most of the commonly used basic Kubernetes components: we've looked at the pod, we've seen how services are used and what the Ingress component is useful for, and we've seen external configuration using ConfigMaps and Secrets.
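To make the ConfigMap and Secret just described a bit more concrete, here is a minimal sketch of what they might look like. The names, keys and values are illustrative, not taken from the video, and the Secret values are simply base64-encoded strings (for example produced with `echo -n 'username' | base64`).

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap          # illustrative name
data:
  database_url: mongodb-service    # plain-text config, e.g. the internal service name of the database
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret             # illustrative name
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=   # base64 of "username"
  mongo-root-password: cGFzc3dvcmQ=   # base64 of "password"

# Inside a pod/deployment spec these can then be referenced, for example:
#   env:
#     - name: DB_URL
#       valueFrom:
#         configMapKeyRef: { name: mongodb-configmap, key: database_url }
```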
Now let's look at another very important concept: data storage, and how it works in Kubernetes. We have this database pod that our application uses, and it has some data, or generates some data. With the setup you see now, if the database container or pod gets restarted, the data would be gone, and that's problematic and inconvenient, because obviously you want your database or log data to be persisted reliably, long term. The way you can do that in Kubernetes is with another component called Volumes. How it works is that it basically attaches a physical storage on a hard drive to your pod, and that storage could either be on the local machine, meaning on the same server node where the pod is running, or on remote storage, meaning outside of the Kubernetes cluster: cloud storage, or your own on-premise storage, which is not part of the cluster and which you just reference externally. So now when the database pod or container gets restarted, all the data is there, persisted. It's important to understand the distinction between the Kubernetes cluster with all of its components on one side, and the storage on the other, regardless of whether it's local or remote. Think of the storage as an external hard drive plugged into the Kubernetes cluster, because Kubernetes explicitly does not manage any data persistence, which means that you, as the Kubernetes user or administrator, are responsible for backing up the data, replicating it, managing it and making sure it's kept on proper hardware, because Kubernetes doesn't take care of that.

So now let's say everything is running perfectly and a user can access our application through a browser. With this setup, what happens if my application pod dies, crashes, or I have to restart the pod because I built a new container image? Basically, I'd have downtime where a user can't reach my application, which is obviously a very bad thing if it happens in production. And this is exactly the advantage of distributed systems and containers: instead of relying on just one application pod and one database pod, we replicate everything on multiple servers. So we'd have another node where a replica, or clone, of our application runs, which would also be connected to the service. Remember, we said earlier that the service is like a persistent, static IP address with a DNS name, so you don't have to constantly adjust the endpoint when a pod dies; but the service is also a load balancer, which means it catches the request and forwards it to whichever pod is least busy. So it has both of those functionalities.

In order to create the second replica of the my-app pod, though, you wouldn't create a second pod; instead, you'd define a blueprint for the my-app pod and specify how many replicas of that pod you'd like to run. That component, or blueprint, is called a Deployment, which is another Kubernetes component, and in practice you would not be working with or creating pods, you would be creating deployments, because there you can specify how many replicas you want and you can scale the number of replicas up or down. With the pod we said it's a layer of abstraction on top of containers; a deployment is another abstraction on top of pods, which makes it more convenient to interact with pods, replicate them and do other configuration. So in practice you mostly work with deployments, not with pods. Now if one of the replicas of your application pod dies, the service forwards the requests to the other one, so your application is still accessible for the user.

You're probably wondering what about the database pod, because if the database pod died, your application also wouldn't be accessible, so we need a database replica as well. However, we can't replicate a database using a deployment, and the reason is that a database has state, which is its data. If we have clones or replicas of the database, they would all need to access the same shared data storage, and there you need some kind of mechanism that manages which pods are currently writing to that storage and which are reading from it, in order to avoid data inconsistencies. That mechanism, in addition to the replicating feature, is offered by another Kubernetes component called StatefulSet. This component is meant specifically for applications like databases, so MySQL, MongoDB, Elasticsearch or any other stateful application or database should be created using StatefulSets, not deployments; it's a very important distinction. A StatefulSet, just like a deployment, takes care of replicating the pods and scaling them up or down, but it also makes sure the database reads and writes are synchronized, so that no database inconsistencies occur. However, I must mention here that deploying database applications using StatefulSets in a Kubernetes cluster can be somewhat tedious; it's definitely more difficult than working with deployments, where you don't have all these challenges. That's why it's also common practice to host database applications outside of the Kubernetes cluster, and keep only the deployments, the stateless applications that replicate and scale without problems, inside the cluster, communicating with the external database.

So now that we have two replicas of the my-app pod and two replicas of the database, and they're both load balanced, our setup is more robust, which means that even if node 1, the whole node server, was rebooted or crashed and nothing could run on it, we'd still have a second node with application and database pods running on it, and the application would still be accessible by the user until the two replicas get recreated, so you can avoid downtime.
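Going back to the Deployment versus StatefulSet distinction for a moment: the video doesn't show a StatefulSet file, but as an illustrative sketch only (names and sizes are placeholders), a minimal StatefulSet definition looks very similar to a Deployment, except that it needs a serviceName and usually per-replica storage via volumeClaimTemplates.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb-service     # headless service that governs the stateful pods
  replicas: 2
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
  volumeClaimTemplates:            # each replica gets its own persistent volume claim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```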
So to summarize, we've looked at the most used Kubernetes components: we started with pods and the services used to communicate between them, and the Ingress component, which is used to route traffic into the cluster. We've also looked at external configuration using ConfigMaps and Secrets, and data persistence using Volumes, and finally we've looked at pod blueprints with replicating mechanisms like Deployments and StatefulSets, where StatefulSets are used specifically for stateful applications like databases. And yes, there are a lot more components that Kubernetes offers, but these are really the core, basic ones; just using these core components you can actually build pretty powerful Kubernetes clusters.

In this video we're going to talk about the basic architecture of Kubernetes. We're going to look at the two types of nodes that Kubernetes operates on, master nodes and worker nodes (sometimes called slave nodes), see what the difference between them is and which role each one has inside the cluster, and we're going to go through the basic concepts of how Kubernetes does what it does: how the cluster is self-managed, self-healing and automated, and how you, as an operator of the Kubernetes cluster, end up with much less manual effort. We'll start with a basic setup of one node with two application pods running on it.

One of the main components of the Kubernetes architecture are its worker servers, or nodes, and each node will have multiple application pods with containers running on it. The way Kubernetes does this is with three processes that must be installed on every node and that are used to schedule and manage those pods. Nodes are the cluster servers that actually do the work; that's why they're sometimes also called worker nodes. The first process that needs to run on every node is the container runtime; in my example I have Docker, but it could be some other technology as well. Because application pods have containers running inside, a container runtime needs to be installed on every node. But the process that actually schedules those pods and their containers underneath is the kubelet, which is a process of Kubernetes itself, unlike the container runtime, and it interfaces with both the container runtime and the machine, the node itself, because at the end of the day the kubelet is responsible for taking the configuration and actually running, or starting, a pod with a container inside, and assigning resources from that node to the container, like CPU, RAM and storage.

Usually a Kubernetes cluster is made up of multiple nodes, which also must have the container runtime and kubelet installed, and you can have hundreds of those worker nodes running other pods and containers and replicas of the existing pods, like the my-app and database pods in this example. The way communication between them works is through Services, which are a sort of load balancer that catches a request directed to a pod or application, like the database for example, and forwards it to the respective pod. The third process, the one responsible for forwarding requests from services to pods, is kube-proxy, which also must be installed on every node, and kube-proxy has intelligent forwarding logic inside that makes sure the communication works in a performant way, with low overhead.
For example, if an application, say a my-app replica, makes a request to the database, instead of the service just randomly forwarding the request to any replica, kube-proxy will forward it to the replica running on the same node as the pod that initiated the request, thus avoiding the network overhead of sending the request to another machine. So to summarize: two Kubernetes processes, kubelet and kube-proxy, must be installed on every Kubernetes worker node, along with an independent container runtime, in order for the cluster to function properly.

But now the question is, how do you interact with this cluster? Who decides on which node a new application or database pod should be scheduled? If a replica pod dies, what process monitors it and then reschedules or restarts it? And when we add another server, how does it join the cluster to become another node and get pods and other components created on it? The answer is that all these managing tasks are done by master nodes. Master servers, or master nodes, have completely different processes running inside, and there are four processes that run on every master node and control the cluster state and the worker nodes.

The first one is the API server. When you as a user want to deploy a new application in a Kubernetes cluster, you interact with the API server using some client: it could be a UI like the Kubernetes dashboard, a command-line tool like kubectl, or the Kubernetes API. The API server is like a cluster gateway, which gets the initial request for any update to the cluster, or even queries from the cluster, and it also acts as a gatekeeper for authentication, making sure only authenticated and authorized requests get through to the cluster. That means whenever you want to schedule new pods, deploy new applications, create a new service or any other component, you have to talk to the API server on the master node; the API server validates your request, and if everything is fine it forwards your request to other processes in order to schedule the pod or create the component you requested. Also, if you want to query the status of your deployment or the cluster health etc., you make a request to the API server and it gives you the response, which is good for security, because you have just one entry point into the cluster.

Another master process is the scheduler. As I mentioned, if you send the API server a request to schedule a new pod, the API server, after validating your request, hands it over to the scheduler in order to start that application pod on one of the worker nodes. And of course, instead of just randomly assigning it to any node, the scheduler has an intelligent way of deciding on which specific worker node the next pod or component will be scheduled: first it looks at your request and sees how much resources the application you want to schedule will need, how much CPU, how much RAM, then it goes through the worker nodes and sees the available resources on each one, and if it sees that one node is the least busy, or has the most resources available, it schedules the new pod on that node. An important point here is that the scheduler just decides on which node a new pod will be scheduled; the process that actually does the scheduling, that actually starts the pod with its container, is the kubelet: it gets the request from the scheduler and executes it on that node.
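A way to actually see these processes in a real cluster, assuming a minikube- or kubeadm-based setup where the control plane itself runs as pods, is to list the system pods; the exact pod names depend on the cluster.

```sh
kubectl get pods -n kube-system
# Typical entries include kube-apiserver-..., kube-scheduler-..., etcd-... and other
# control-plane components, plus one kube-proxy pod per node.
```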
The next component is the controller manager, which is another crucial component, because what happens when pods die on a node? There must be a way to detect that pods died and then reschedule them as soon as possible. So what the controller manager does is detect state changes, like the crashing of pods: when pods die, the controller manager detects that and tries to recover the cluster state as soon as possible, and for that it makes a request to the scheduler to reschedule those dead pods. The same cycle happens again: the scheduler decides, based on the resource calculation, which worker nodes should restart those pods, and makes requests to the corresponding kubelets on those worker nodes to actually restart them.

And finally, the last master process is etcd, which is a key-value store of the cluster state; you can think of it as the cluster brain, which means that every change in the cluster, for example when a new pod gets scheduled or when a pod dies, gets saved or updated in this key-value store. And the reason etcd is the cluster brain is that the whole mechanism with the scheduler, controller manager and so on works because of its data. How does the scheduler know what resources are available on each worker node? How does the controller manager know that the cluster state changed in some way, for example that pods died, or that the kubelet restarted new pods upon the scheduler's request? Or when you make a query to the API server about the cluster health, or for example your application's deployment state, where does the API server get all that state information from? All of it is stored in etcd. What is not stored in etcd is the actual application data: for example, if you have a database application running inside the cluster, its data is stored somewhere else, not in etcd. Etcd holds only cluster state information, which is used by the master processes to communicate with the worker processes and vice versa.

So you probably already see that the master processes are absolutely crucial for cluster operation, especially etcd, which contains data that must be reliably stored or replicated. In practice, a Kubernetes cluster is usually made up of multiple masters, where each master node runs its master processes, the API server is load balanced, and etcd forms a distributed storage across all the master nodes. Now that we've seen what processes run on worker nodes and master nodes, let's look at a realistic example of a cluster setup. In a very small cluster you'd probably have two master nodes and three worker nodes. Also note that the hardware resources of master and worker servers differ: the master processes are more important, but they actually have less of a workload, so they need fewer resources like CPU, RAM and storage, whereas the worker nodes do the actual job of running the pods with containers inside, so they need more resources. As your application's complexity and resource demand increase, you can add more master and node servers to your cluster, forming a more powerful and robust cluster to meet your application's resource requirements. In an existing Kubernetes cluster you can add new master or node servers pretty easily: if you want to add a master server, you just get a new bare server, install all the master processes on it and add it to the Kubernetes cluster; in the same way, if you need more worker nodes, you get bare servers, install all the worker node processes, container runtime, kubelet and kube-proxy, on them and add them to the cluster. That's it, and this way you can keep increasing the power and resources of your Kubernetes cluster as your replication complexity and resource demand grow.
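The video doesn't name a tool for joining servers to a cluster, but as an illustration, with kubeadm (one common cluster-bootstrapping tool) attaching a prepared server to an existing cluster looks roughly like this; the address, token and hash are placeholders, not real values.

```sh
# On the new worker node, after installing a container runtime, kubelet and kubeadm:
kubeadm join <api-server-ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Joining an additional control-plane (master) node additionally needs:
#   --control-plane --certificate-key <key>
```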
So in this video I'm going to show you what minikube and kubectl are and how to set them up. First of all, what is minikube? Usually in the Kubernetes world, when you're setting up a production cluster, it will look something like this: you'd have multiple masters, at least two in a production setting, and multiple worker nodes, where the master nodes and the worker nodes have their own separate responsibilities, and as you see on the diagram, you'd have actual separate virtual or physical machines that each represent a node. Now if you want to test something in your local environment, or try something out very quickly, for example deploying a new application or new components, obviously setting up a cluster like this would be pretty difficult, or maybe even impossible if you don't have enough resources like memory and CPU. And exactly for that use case there is an open-source tool called minikube. Minikube is basically a one-node cluster where the master processes and the worker processes both run on a single node, and this node has the Docker container runtime pre-installed, so you'll be able to run containers, or pods with containers, on it. The way it runs on your laptop is through VirtualBox or some other hypervisor: minikube creates a virtual machine on your laptop, and the node runs in that virtual machine. So to summarize, minikube is a one-node Kubernetes cluster that runs in a virtual machine on your laptop, which you can use for testing Kubernetes in your local setup.

Now that you have this cluster, or mini cluster, on your local machine, you need some way to interact with it: you want to create components, configure them and so on, and that's where kubectl comes into the picture. You need a way to create pods and other Kubernetes components on the node, and the way to do it is with kubectl, which is the command-line tool for a Kubernetes cluster. Let's see how it actually works. Remember we said that minikube runs both master and worker processes, so one of the master processes, the API server, is the main entry point into the Kubernetes cluster. If you want to do anything in Kubernetes, configure anything, create any component, you first have to talk to the API server, and the way to talk to the API server is through different clients: a UI like the dashboard, the Kubernetes API, or a command-line tool, which is kubectl. And kubectl is actually the most powerful of the three clients, because with kubectl you can basically do anything in Kubernetes that you want, and throughout these video tutorials we're going to be using kubectl mostly. Once kubectl submits commands to the API server to create components, delete components and so on, the worker processes on the minikube node actually make them happen: they execute the commands to create the pods, destroy the pods, create the services and so on.
So this is the minikube setup, and this is how kubectl is used to interact with the cluster. An important thing to note here is that kubectl isn't just for a minikube cluster: if you have a cloud cluster or a hybrid cluster, whatever it is, kubectl is the tool to use to interact with any type of Kubernetes cluster setup.

So now that we know what minikube and kubectl are, let's actually install them to see them in practice. I'm using a Mac, so the installation process will probably be easier, but I'm going to put the links to the installation guides in the description so you can follow them for your own operating system. One thing to note is that minikube needs virtualization, because as we mentioned it's going to run in a VirtualBox setup or some other hypervisor, so you'll need to install some type of hypervisor; it could be VirtualBox, I'm going to install hyperkit, but that's in the step-by-step instructions as well. I have macOS Mojave, and I'm going to use Homebrew to install everything. So first brew update, and then I install the hypervisor, hyperkit. I already had it installed, so for you, if you're doing it for the first time, it might take longer because it has to download all the dependencies. Now I install minikube, and here's the thing: minikube has kubectl as a dependency, so when I execute this it installs kubectl as well, and I don't need to install it separately. You can see it installing the dependencies for minikube, the "kubernetes-cli", which is kubectl. Again, because I already had it installed before, it still has a local copy of the dependencies, that's why it's pretty fast; it might take longer if you're doing it for the first time.

Now that everything is installed, let's check the commands. The kubectl command should be working, and I get the list of kubectl commands, so it's there, and minikube should be working as well. As you see, minikube comes with a pretty simple command-line tool: with one command it brings up the whole Kubernetes cluster in this one-node setup so you can do stuff with it, and you can just stop it or delete it again. So now that we have both installed and the commands are there, let's actually create a minikube Kubernetes cluster, and as you see there is a start command. This is how we're going to start a Kubernetes cluster: minikube start. And here is where the hypervisor we installed comes in, because since minikube needs to run in a virtual environment, we're going to tell minikube which hypervisor it should use to start the cluster. For that we specify an option, --vm-driver, and I set it to the hyperkit driver that I installed, so I'm telling minikube: please use the hyperkit hypervisor to start this virtual minikube cluster. When I execute this it downloads some stuff, so again it may take a bit longer the first time. And as I mentioned, minikube has the Docker runtime, the Docker daemon, pre-installed, so even if you don't have Docker on your machine it's still going to work: you'll be able to create containers inside it, because it already contains Docker, which is pretty convenient if you don't have Docker installed already.
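Collected in one place, the commands used in this part look roughly like this (macOS with Homebrew, and the hyperkit driver used in this demo; other hypervisors or drivers work too):

```sh
brew update
brew install hyperkit
brew install minikube        # installs kubectl as a dependency
kubectl                      # verify the kubectl CLI is available
minikube                     # verify the minikube CLI is available
minikube start --vm-driver=hyperkit
```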
So, done: "kubectl is now configured to use minikube", which means the minikube cluster is set up and kubectl, the command meant to interact with Kubernetes clusters, is connected to that minikube cluster. If I do kubectl get nodes, which just gets me the status of the nodes of the Kubernetes cluster, it tells me that the minikube node is ready; as you see it's the only node, and it has the master role because it obviously has to run the master processes. I can also get the status from minikube by executing minikube status: I see the host is running, the kubelet, the service that actually runs the pods using the container runtime, is running, so basically everything is running. By the way, if you want to see the Kubernetes architecture in more detail and understand how master and worker nodes actually work and what all these processes are, I have a separate video that covers the Kubernetes architecture, so you can check it out at this link. We can also check which version of Kubernetes is installed, and usually it's going to be the latest version: with kubectl version you see both the client version and the server version of Kubernetes, and here we see we're on 1.17, which is the Kubernetes version running in the minikube cluster. If you see both a client version and a server version in the output, it means minikube is correctly installed. From this point on we're going to be interacting with the minikube cluster using the kubectl command-line tool; minikube is basically just for starting up and deleting the cluster, but everything else, all the configuring, we do through kubectl. All the commands I executed here I'm going to put in a list in the comment section so you can copy them.

In this video I'm going to show you some basic kubectl commands and how to create and debug pods in minikube. So now we have a minikube cluster and kubectl installed, and once the cluster is set up, you're going to use kubectl to do basically anything in the cluster: create components, get the status and so on. First let's just get the status of the nodes: we see there is one node, which is a master, and everything is going to run on that node because it's minikube. With kubectl get pod I can check the pods, and I don't have any, that's why it says no resources; I can check the services with kubectl get services, and I just have one default service, and so on: with kubectl get I can list any Kubernetes component. Since we don't have any pods, let's create one. To create Kubernetes components there is the kubectl create command, and if I do help on kubectl create I can see the components I can create with it. But there is no pod on that list, because in the Kubernetes world the pod is the smallest unit of the cluster, yet in practice you're not creating pods or working with pods directly; there is an abstraction layer over pods called deployment, and that's what we're going to create, and it will create the pods underneath. This is the usage of kubectl create deployment: I need to give the deployment a name and then provide some options, and the required option is the image, because the pod needs to be created based on some image, some container image.
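Summarized, the commands from this part are:

```sh
kubectl get nodes        # status of the cluster nodes
minikube status          # host, kubelet, apiserver status
kubectl version          # client and server version
kubectl get pod          # list pods (none yet at this point)
kubectl get services     # list services (only the default one)
kubectl create deployment NAME --image=IMAGE [options]   # basic usage for creating a deployment
```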
So let's go ahead and create an nginx deployment: kubectl create deployment, let's call it nginx-depl, with --image=nginx; it will just go ahead and download the latest nginx image from Docker Hub, that's how it works. When I execute this, you see "deployment nginx-depl created". Now if I do kubectl get deployment, I see I have one deployment created, with a status that says it's not ready yet, and if I do kubectl get pod, I see that I now have a pod, which has the prefix of the deployment and some random hash, and it says ContainerCreating, so it's not ready yet; if I check again, it's running. The way it works here is that when I create a deployment, the deployment has all the information, or the blueprint, for creating the pod. This is the most minimalistic, most basic configuration for a deployment: we just give the name and the image, that's it, the rest is defaults. Between the deployment and the pod there is another layer, which is automatically managed by the Kubernetes deployment, called a ReplicaSet. If I do kubectl get replicaset, written together, I see an nginx-depl ReplicaSet with a hash, and it just gives me its state; and if you notice, the pod name is made up of the deployment prefix, the ReplicaSet's ID, and then its own ID. The ReplicaSet basically manages the replicas of a pod; in practice you will never have to create, delete or update a ReplicaSet in any way, you work with deployments directly, which is more convenient because in a deployment you can configure the pod blueprint completely: you can say how many replicas of the pod you want and do the rest of the configuration there. With this command we just created one pod, or one replica, but if we wanted two replicas of the nginx pod, we could just provide that as an additional option. So this is how the layers work: first you have the deployment, the deployment manages a ReplicaSet, the ReplicaSet manages all the replicas of that pod, and the pod is again an abstraction of a container, and everything below the deployment should be managed automatically by Kubernetes; you shouldn't have to worry about any of it. For example, the image the pod uses I'd have to edit in the deployment directly, not in the pod, so let's go ahead and do that right away: I do kubectl edit deployment and provide the name, nginx-depl, and we get an auto-generated configuration file of the deployment, because on the command line we just gave two options and everything else is default and auto-generated by Kubernetes. You don't have to understand this now; I'm going to make a separate video where I break down the configuration file and its syntax. For now let's just scroll to the image, which is somewhere down below, and let's say I want to pin the version to 1.16, and save that change.
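The commands from this step, collected together (the deployment name nginx-depl is the one used in this demo):

```sh
kubectl create deployment nginx-depl --image=nginx
kubectl get deployment
kubectl get pod
kubectl get replicaset
kubectl edit deployment nginx-depl   # opens the auto-generated config; e.g. change the image to nginx:1.16
```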
As you see, the deployment was edited, and now when I do kubectl get pod I see that the old pod is terminating and another one started 25 seconds ago; if I do it again, the old pod is gone and the new one got created with the new image. And if I get the ReplicaSets, I see the old one has no pods in it and a new one has been created as well. So we just edited the deployment configuration, and everything below it got automatically updated, and that's the magic of Kubernetes, that's how it works.

Another very practical command is kubectl logs, which basically shows you what the application running inside the pod actually logged. If I do kubectl logs, and I need the pod name for this, I get nothing, because nginx didn't log anything. So let's create another deployment, for MongoDB: let's call it mongo-depl, with the mongo image. Now I have the mongo deployment creating, so let's check the logs. The ContainerCreating status here means the pod was created, but the container inside the pod isn't running yet, and when I try to get the logs, it obviously tells me there is no container running, so it can't show me any logs. Let's get the status again. At this point, if I see that the container isn't starting, I can get some additional information with kubectl describe pod and the pod name, which shows me what state changes happened inside the pod: it pulled the image, created the container and started the container. With kubectl get pod it should be running already, so now let's check the logs, and here we see the log output; it took a little while, but this is what the MongoDB application container actually logged inside the pod, and obviously if the container has problems, it helps with debugging to see what the application is actually printing.

Let's clear that and get the pods again. Another very useful command when debugging, when something is not working or you just want to check what's going on inside a pod, is kubectl exec. Basically, what it does is get you the terminal of that MongoDB application container: I do kubectl exec with -it, which stands for interactive terminal, then the pod name, then two dashes and the shell to run, and with this command I get the terminal of the MongoDB application container. As you see, I'm now inside the container as the root user, in a completely different environment, and as I said this is useful for debugging or when you want to test or try something: you can enter the container, get the terminal and execute some commands in there. We can exit that again.

And of course, with kubectl I can delete the pods: if I do get deployment I see that I have two of them, and if I do kubectl get pod and get replicaset I also have two of each. If I want to get rid of all the pods and ReplicaSets underneath, I have to delete the deployment, so kubectl delete deployment and the name of the deployment I want to delete; let's delete the mongo one. Now if I do kubectl get pod, the pod is terminating, and if I do get replicaset, the mongo ReplicaSet is gone as well; and the same if I delete the nginx deployment and check the ReplicaSets: everything is gone. So all the CRUD operations, create, delete, update and so on, happen at the deployment level, and everything underneath just follows automatically. In a similar way we could create other Kubernetes resources, like services and so on.
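The debugging and cleanup commands from this part, in one place; the shell passed to exec is an assumption, since it depends on what the image contains:

```sh
kubectl logs POD_NAME                        # application output from inside the pod
kubectl describe pod POD_NAME                # state changes / events, useful if the container won't start
kubectl exec -it POD_NAME -- bin/bash        # interactive terminal inside the container (shell path depends on the image)
kubectl delete deployment DEPLOYMENT_NAME    # deletes the deployment and, with it, its replicaset and pods
```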
However, as you noticed, when we're creating Kubernetes components like a deployment using kubectl create deployment, you have to provide all the options on the command line: you have to give the name, specify the image, and then you have option one, option two and so on, and there can be a lot of things you want to configure in a deployment or in a pod, so obviously it would be impractical to write all of that out on the command line. Because of that, in practice you usually work with Kubernetes configuration files: what component you're creating, what the name of the component is, what image it's based on, and any other options are all gathered in a configuration file, and you just tell kubectl to execute that configuration file. The way you do that is with the kubectl apply command: apply takes the configuration file as a parameter and does whatever you've written there. It takes an option, -f, which stands for file, followed by the name of the file, so something like kubectl apply -f config-file.yaml; YAML is the format you'll usually use for configuration files, and this is the command that executes whatever is in that configuration file. So let's call the file nginx-deployment.yaml and go ahead and create a very simple, super basic nginx deployment file. In this file I'm specifying what I want to create, a deployment, and the name of the deployment; you can ignore these labels for now; then how many replicas of the pod I want to create; and this block right here, the template with its own specification, is the blueprint for the pods, so there is a specification for the deployment and a specification for the pod, and here we're just saying that we want one container inside the pod, with the nginx image, and we're going to bind that to port 80.
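A minimal sketch of that nginx-deployment.yaml as described above; the exact label values are assumptions based on the description, not copied from the video.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:                 # blueprint for the pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```

It would then be applied with `kubectl apply -f nginx-deployment.yaml`.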
So this is going to be our configuration file, and once we have it we can apply it, and the deployment is created. Now if I get the pods I see that the nginx-deployment pod was created and it's running, and the deployment was created 52 seconds ago. If I want to change something in that deployment, I can just change my local configuration: for example, if I want two replicas instead of one, I adjust the file and apply it again, and I get "deployment nginx-deployment configured". As you see, the difference here is that Kubernetes can detect whether the nginx deployment exists yet: if it doesn't, it creates one, but if it already exists and I apply the configuration file again, it knows it should update it instead of creating a new one. So if I do get deployment I see this is still the old deployment, and if I do kubectl get pod I see the old pod is still there and a new one got created, because I increased the replica count. Which means that with kubectl apply you can both create and update a component, and obviously you can use kubectl apply with services, volumes or any other Kubernetes component, just like we did with the deployment. In the next video I'm going to break down the syntax of the configuration file, which is actually pretty logical and simple to understand, and explain all the different attributes and what they mean, so you can write your own configuration files for different components.

So to summarize, we've looked at a couple of kubectl commands in this video: we saw how to create a component like a deployment, how to edit it and delete it, and how to get the status of pods, deployments, ReplicaSets etc.; we also printed to the console whatever the application inside the pod writes to it; we saw how to get a terminal of a running container using kubectl exec; and we saw how to use a Kubernetes configuration file to create and update components using the kubectl apply command. Last but not least, we saw the kubectl describe command, for when a container isn't starting in a pod and you want some additional troubleshooting information about the pod.

In this video I'm going to show you the syntax and the contents of a Kubernetes configuration file, which is the main tool for creating and configuring components in a Kubernetes cluster. If you've seen large configuration files it might seem overwhelming, but in reality it's pretty simple, intuitive and very logically structured, so let's go through it step by step. Here I have examples of a deployment and a service configuration file side by side. The first thing is that every configuration file in Kubernetes has three parts. The first part is where the metadata of the component you're creating resides, and one piece of metadata is obviously the name of the component itself. The second part is the specification: each component's configuration file has a specification, where you basically put every kind of configuration you want to apply to that component. The first two lines here, as you see, just declare what you want to create: here we're creating a deployment and here a service, and the apiVersion is something you basically have to look up, because for each component there's a different API version. Now, inside the specification part, the attributes will obviously be specific to the kind of component you're creating, so a deployment has its own attributes that only apply to deployments, and the service has its own. But I said there are three parts of a configuration file, and we only see metadata and specification, so where's the third part?
The third part will be a status, but it's going to be automatically generated and added by Kubernetes. The way it works is that Kubernetes always compares the desired state and the actual state, or status, of a component, and if the status and the desired state don't match, Kubernetes knows there's something to fix, and it's going to try to fix it; this is the basis of the self-healing feature that Kubernetes provides. For example, here you specify that you want two replicas of the nginx deployment; when you apply this, when you actually create the deployment from this configuration file (that's what apply means), Kubernetes will add the status of your deployment and update that state continuously. So if the status at some point says only one replica is running, Kubernetes will compare that status with the specification, know there's a problem, and create another replica as soon as possible. Another interesting question is where Kubernetes actually gets that status data that it adds here and continuously updates: that information comes from etcd. Remember, the cluster brain, one of the master processes, stores the cluster data; etcd holds, at any time, the current status of any Kubernetes component, and that's where the status information comes from.

As you see, the format of the configuration files is YAML, hence the extension, and generally it's pretty straightforward to understand, it's a very simple format, but YAML is very strict about indentation: if something is wrongly indented, your file will be invalid. What I do, especially for a configuration file that's 200 lines long, is use some online YAML validator to see where I need to fix it, but other than that it's pretty simple. Another question is where you actually store those configuration files. A usual practice is to store them with your code: since the deployment and service are going to be applied to your application, it's good practice to keep these configuration files in your application's code, so usually they're part of the whole infrastructure-as-code concept; or you can have a separate git repository just for the configuration files.

In the previous video I showed you that deployments manage the pods below them: whenever you edit something in a deployment, it cascades down to all the pods it manages, and whenever you want to create pods, you actually create a deployment and it takes care of the rest. So how does this happen, where is this whole thing defined in the configuration? Here in the specification part of the deployment you see a template, and if I expand it, you see the template also has its own metadata and specification, so it's basically a configuration file inside of a configuration file. The reason is that this configuration applies to the pod: a pod should have its own configuration inside the deployment's configuration file, and that's how all deployments are defined. This is going to be the blueprint for the pod: which image it should be based on, which port it should open, what the name of the container is, and so on. The way the connection is established is using labels and selectors: as you see, the metadata part contains the labels, and the specification part contains the selectors.
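For reference, the service side of this connection might look like the following minimal sketch, matching the deployment sketch shown earlier; the names and port numbers are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx          # must match the pod labels set in the deployment's template
  ports:
    - protocol: TCP
      port: 80          # port the service itself is reachable on
      targetPort: 80    # must match the containerPort of the pod
```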
It's pretty simple: in the metadata you give a component, like a deployment or a pod, a key-value pair, and it could be any key-value pair you can think of; in this case we have app: nginx, and that label just sticks to the component. So we give the pods created from this blueprint the label app: nginx, and we tell the deployment to match all the labels with app: nginx to create that connection; this way the deployment knows which pods belong to it. Now, the deployment has its own label, app: nginx, and these two labels are used by the service selector: in the specification of the service we define a selector, which basically makes the connection between the service and the deployment, or its pods, because the service must know which pods are registered with it, which pods belong to it, and that connection is made through the selector of the label; we're going to see that in the demo.

Another thing that must be configured in the service and the pod is the ports. If I expand this, I see that the service has its ports configuration, and the container inside the pod obviously needs to listen on some port. How this is configured is: the service has a port where the service itself is accessible, so if another service sends a request to the nginx service here, it needs to send it on port 80; but the service also needs to know to which pod to forward the request, and at which port that pod is listening, and that is the targetPort, so it should match the containerPort. And with that we have our basic deployment and service configurations done; note that most of the attributes you see in both parts are required, so this is actually the minimum configuration for a deployment and a service.

Once we have those files, let's apply them, create the components with them. I head over to the console and create both the deployment and the service with kubectl apply: the deployment is created, then the nginx service. Now if I get the pods I see two replicas running, because that's how I defined it, and we have our service as well, nginx-service; the other one is the default kubernetes service, it's always there, and ours is listening on port 80 as we specified. Now how can we validate that the service has the right pods to forward requests to? We can do it with kubectl describe service and the service name, and there you see all the status information, like the things we defined in the configuration, the app selector and so on, the target port we defined, and the Endpoints field, which must be the IP addresses and ports of the pods that the service forwards requests to. How do we know those are the IP addresses of the right pods? With kubectl get pod you don't get that information, so the way to find out is kubectl get pod with -o, which stands for output, and wide, because we want more information: now we see more columns, the name, status, ready and so on, but also the IP address, and here is the IP address listed in the endpoints, and here is the other one, so we know the service has the right endpoints.

Now let's look at the third part of the configuration file, the status that Kubernetes automatically generates, and the way to do that is to get the deployment, nginx-deployment, in YAML format.
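The commands from this part, in one place; the file names are the ones used in this demo, adjust them to your own files:

```sh
kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml
kubectl get pod
kubectl get service
kubectl describe service nginx-service     # check Selector, TargetPort and Endpoints
kubectl get pod -o wide                    # shows each pod's IP, to compare with the Endpoints
kubectl get deployment nginx-deployment -o yaml > nginx-deployment-result.yaml
```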
That updated configuration actually resides in etcd, because etcd stores the state of the whole cluster, including the state of every component. If I run this I get the yaml output in my console, but I want it in a file, so I'm going to save it into nginx-deployment-result.yaml and open it in my editor next to the original one. As you see, a lot of stuff has been added, but let's just look at the status part first. All of this is automatically added and constantly updated by kubernetes: it says how many replicas are running, what the state of those replicas is, and some other information, so this part can also be helpful when debugging. That's the status, but if you noticed, other things have been added in the metadata and specification parts as well. For example the creation timestamp — when the component was created — is automatically added by kubernetes because it is metadata, as well as a unique ID and so on; you don't have to care about those. In the specification part kubernetes just adds some defaults for the component, and again you don't have to understand most of these attributes. One thing to note, though: if you want to copy a deployment you already have, maybe using automated scripts, you will have to remove most of this generated stuff — you have to clean up that deployment configuration first, and then you can create another deployment from that blueprint configuration. So that's it for this part, and from now on we're going to be working with the configuration files. For example, if I want to delete the deployment and the service, I can do that with the configuration files as well using kubectl delete, and like this the deployment is gone, and I can do the same for the service. So using kubectl apply and kubectl delete you basically work with the configuration files.

In this video we're going to deploy two applications, MongoDB and Mongo Express, and I chose these two because they demonstrate really well a typical simple setup of a web application and its database, so you can apply this to any similar setup you have. So let's see how we're going to do this. First we will create a MongoDB pod, and in order to talk to that pod we're going to need a service. We're going to create an internal service, which basically means that no external requests are allowed to the pod — only components inside the same cluster can talk to it — and that's what we want. Then we're going to create a Mongo Express deployment, and for that we're going to need two things: a database URL of MongoDB, so that Mongo Express can connect to it, and credentials, so a username and password for the database, so that it can authenticate. The way we pass this information to the Mongo Express deployment is through its deployment configuration file, via environment variables, because that's how the application is configured. So we're going to create a config map that contains the database URL, a secret that contains the credentials, and we're going to reference both inside of that deployment file. Once we have that set up, we're going to need Mongo Express to be accessible through a browser, and in order to do that we'll create an external service that allows external requests to talk to the pod; the URL will be the HTTP protocol, the IP address of the node, and the service port. With this setup the request flow will look like this: the request comes from the browser and goes to the external service of Mongo Express, which will then forward it to the Mongo Express pod.
The pod will then connect to the internal service of MongoDB — that's basically the database URL here — and the service will forward the request to the MongoDB pod, where it will be authenticated using the credentials. So now let's go and create this whole setup using kubernetes configuration files; let's dive right into it. First of all, I have a minikube cluster running. If I do kubectl get all, which basically gets me all the components inside the cluster, I only have the default kubernetes service, so my cluster is empty and I'm starting from scratch.

The first thing, as I said, is to create a MongoDB deployment. I usually create it in an editor, so I'm going to go to Visual Studio Code and paste a prepared deployment file for MongoDB, and this is how it looks: I have the deployment kind and some metadata — I'm just going to call it mongodb-deployment — plus labels and selectors. In the previous video I already explained the syntax of kubernetes yaml configuration files, so if you want to know what all these attributes mean, check out that video. Here in the template I have the definition, or blueprint, for the pods this deployment is going to create, and I'm just going to go with one replica. The container is going to be called mongodb, and this is the image I'm going to take, so let's actually go and check the image configuration for mongodb. I see the image here, let's open it — basically what I'm looking for is how to use that container, meaning which port it opens and what external configuration it takes. The default port of the mongodb container is 27017, so I'm going to use that, and we're going to use two environment variables, the root username and the root password, so that on container startup I can define the admin username and password. Let's go ahead and configure all of that inside the configuration file. Here, below the image, we just leave the name of the image so it pulls the latest one, which is what we want. Then I specify which port I want to expose — ports is the attribute name, then containerPort — and that's the standard port, so I leave it at 27017. Below that I specify the two environment variables: one is called MONGO_INITDB_ROOT_USERNAME, with a value that we're going to leave blank for now, and the other is MONGO_INITDB_ROOT_PASSWORD, whose value we'll also leave blank. Once we have the values here, we'll have a complete deployment for MongoDB — that's basically all we need. Now note that this configuration file is going to be checked into a repository, so usually you wouldn't write the admin username and password in plain text inside it. What we're going to do instead is create a secret, from which we will reference the values, meaning the secret will live in kubernetes and nobody will have access to it in a git repository. So let's save this incomplete deployment file first — we'll just save it as a yaml file so that we get the syntax highlighting.
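Just so you can picture it, the incomplete deployment we've saved at this point could look roughly like this — the env values are deliberately still empty, and the file name and the app: mongodb labels are placeholders I'm assuming:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo              # no tag, so the latest image gets pulled
        ports:
        - containerPort: 27017    # default mongodb port
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: ""               # to be replaced with a secret reference
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: ""               # to be replaced with a secret reference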
Now, before we apply this configuration, we're going to create the secret where the root username and password will live. So let's create a new file and paste in the configuration of a secret, which is actually pretty simple: we have the kind Secret, then the metadata, which again is simply the name — we're going to call it mongodb-secret. The type Opaque is the default type, the most basic key-value secret type; other types include, for example, TLS certificates, so you can create a secret specifically with the TLS certificate type, and there are a couple more, but mostly you'll use the default one. And these are the actual contents: you have the data, and in it key-value pairs whose names you come up with yourself — we're going to call them mongo-root-username and mongo-root-password. Here's the thing, though: the values in these key-value pairs are not plain text; when creating a secret, the values must be base64 encoded. The simplest way to do that is in the terminal: echo -n — that -n option is important, don't leave it out, otherwise it's not going to work — followed by the plain-text value I want. I'm just going to go with "username", of course you can have something more secretive, and I pipe that to base64. The value I get, I copy into the secret configuration as the value, and I do the same with the password — again just a simple "password", obviously you'd want something more secure — copy that in as well, and save the file as secret.yaml.

Now, so far we have only written configuration files, we haven't created anything in the cluster yet — this is just preparation. And we have to create the secret before the deployment if we're going to reference the secret inside it, so the order of creation matters: if I create a deployment that references a secret that doesn't exist yet, I'll get an error and it won't start. Since the secret is our first component, let's go ahead and create it from the configuration file. Back in my console, let's clear all this, and I'm going to go into the folder where I keep all these configuration files — I called it kubernetes-configuration — and here I have both of my files. So I do kubectl apply with the secret file, and the secret is created; then kubectl get secret, and I see my secret — this other one is something created by default with a different type, and this here is ours. Now that we have the secret, we can reference it inside our deployment configuration file, so let's go back. This is how you reference specific key-value data of a secret: instead of value we say valueFrom, then secretKeyRef, where name is the secret name — this one here — and key is the key in the data whose value I want, so I reference this part of the data by its key. You don't have to learn all the syntax and attribute names by heart; the important thing is that you know roughly how to reference it, and the exact syntax you can always look up in Google or in previous configuration files. We do the same with the password: valueFrom again, and I'll just copy the rest — remember, yaml is very strict with indentation — same secret, but a different key, so I use the password key here. And that's it: we now have the root username and password referenced from the secret and no actual values inside the configuration file, which is good for security, because you don't want your credentials in your code repository.
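As a rough sketch, the secret and the way it's referenced in the deployment could look something like this — the base64 strings are just the encodings of the example values "username" and "password" from above, and the key names are the ones we chose:

apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=     # echo -n 'username' | base64
  mongo-root-password: cGFzc3dvcmQ=     # echo -n 'password' | base64

and then, inside the container section of the deployment, the env part becomes:

        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret      # the secret's metadata name
              key: mongo-root-username  # the key inside its data
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password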
Okay, so our deployment file is ready, so let's apply it — and the deployment is created, meaning if I do get all I should see the pod starting up, plus the deployment and the replica set. Let's check how the pod is doing: it's in ContainerCreating, so let's watch it. It might take some time, and if it takes long and you want to see whether there's a problem, you can also do kubectl describe pod with the pod name — here at least we know nothing's wrong, we can see it's just pulling the image, which is why it takes a while. Let's check again with kubectl get pod, and as you see it's running, so we have the MongoDB deployment and one replica of its pod running.

Now the second step is to create an internal service, so that other components, or other pods, can talk to this MongoDB. Let's go ahead and create the service configuration. Back in the yaml, we can either create a separate yaml configuration file for it, like we did for the secret, or we can include it in the same one — in yaml you can actually put multiple documents in one file, and three dashes are the syntax for document separation, meaning a new document starts there. So I'm going to put both deployment and service in one configuration file, because they usually belong together. Here I paste the service configuration — and by the way, I'm going to put all these configuration files in a git repository and link the repository in the description of this video. So this is the service for MongoDB, let's go through some of the attributes. It's the Service kind; the name, we'll call it mongodb-service; then the selector, and this is an important one, because we want this service to connect to the pod, and the way to do that is using the selector and the label — using the labels that the deployment and pod have, the service can find the pods it's going to attach to. Then comes the important part where we expose the service port: this one is the service port, and this one is the container port. Since we exposed containerPort 27017 up here, these two have to match, so targetPort is the container or pod port, and this is the service port — obviously those two can be different, but I'm going with the same port. And that's basically it, that's our service.
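For reference, the service document appended after the three dashes could look roughly like this — the app: mongodb selector matches the label I assumed for the pods above:

---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb          # attaches the service to pods with this label
  ports:
    - protocol: TCP
      port: 27017         # port of the service itself
      targetPort: 27017   # must match the containerPort of the pod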
So I'm going to save this file, go back to my console, and apply the same file that I applied before to create the deployment. Let's see what happens: it sees both the deployment and the service configuration, but it knows I haven't changed the deployment — that's what unchanged means here — and the service gets created. If I were to edit both, I could reapply the file and both deployment and service would be changed, so I think using local configuration files is a handy way to edit your components. Now let's check that our service was created: kubectl get service, and there it is, listening on port 27017. I showed it in one of the previous videos, but we can also validate that the service is attached to the correct pod, and to do that I run kubectl describe service with the service name. Here I have the endpoint, which is the IP address of a pod and the port where the application inside the pod is listening. Let's check that this is the right pod — I mean, we only have one, but still — so if I do get pod and ask for additional output with -o wide, one of the columns shows the pod's IP address, and that's the one right here, 172.17.0.6, and this is the port where the application inside the pod is listening. So everything is set up correctly: the MongoDB deployment and service have been created. By the way, if you want to see all the components for one application, you can also display them using kubectl get all and filter them by name, in this case mongodb, and here you see the service, deployment, replica set and the pod — when you do get all, the component type comes first in each name. Okay, that was just a side note.

Now for the next step: we're going to create the Mongo Express deployment and service, and also an external configuration where we'll put the database URL for MongoDB. Let's go ahead and do it. I'll clear this up and create a new file for the Mongo Express deployment and service. This is the deployment draft for Mongo Express — same things here, mongo-express is the name, and here we have the pod definition where the image name is mongo-express. Let's go and check that image as well: this is mongo-express, that's the name of the image, and let's look at the same data as before — the port the mongo express application inside the container starts at is 8081, and these are some of its environment variables. We obviously need three things for Mongo Express: we need to tell it which database it should connect to, so the MongoDB database address, which will be the internal service, and we need the credentials so that MongoDB can authenticate the connection. The environment variables for that are the admin username, the admin password, and the MongoDB server endpoint — those are the three environment variables we need. Let's go ahead and use them. First we open the port again with containerPort — and the reason the attribute is plural, ports, is that inside a pod you can actually open multiple ports — so that's going to be 8081. Now we add the environment variables for the connectivity. The first one is the username, and it's going to be the same username and password we defined earlier, so I'm just going to copy them, because it's really the same: valueFrom, reading it from the secret that's already there, so I paste that here. The second environment variable is the admin password, and I copy that from here as well. The third one is the database server, and since this is also external configuration, we could either put a value here and write the MongoDB server address directly, or, as I showed you in the diagram at the beginning, we can put it in a config map, which is centralized external configuration, stored in one place, that other components can use too. For example, if I had two applications using the MongoDB database, I could reference that external configuration in both, and if I have to change it at some point I only change it in one place and nothing else needs to be touched. So because of that we're going to leave this deployment configuration incomplete and create the config map that will contain the MongoDB server address. Let's save this incomplete deployment — let's call it mongo-express.yaml — and we'll come back to it later.
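Just to recap, the incomplete mongo express deployment at this point could look roughly like this — the ME_CONFIG_* variable names are the ones documented for the mongo-express image, while the label and file names are my own assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081             # default port of mongo express
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
        - name: ME_CONFIG_MONGODB_SERVER  # still missing its value, which will come from the config map
          value: ""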
So now we need that config map. I'm going to create a new file and paste in the config map configuration, which is also pretty simple — just like the secret, you have the kind, which is ConfigMap, the name, and the same construct you saw there: data, which is key-value pairs. It doesn't have a type attribute, because there's only one kind of config map, and here again you have key-value pairs: a database URL key, and the server name, which is actually just the name of the service — it's as simple as that. So what did we call our service? We called it mongodb-service, so I copy that service name, and that's going to be the database server URL. Let's call the file configmap for consistency and save it. Just like with the secret, the order of execution or creation matters: the config map already has to exist in the cluster so that I can reference it, so when we're done I have to create the config map first and then the deployment. The way I reference the config map inside the deployment is very similar to the secret, so I'll just copy the whole block from the secret reference and put it here — the only difference is that instead of secretKeyRef I say configMapKeyRef, all camelCase, and obviously the name is the config map's name, which is what we called it, so let me copy that, and again the key is the key in the key-value pair, so let's copy that as well. So now we have our Mongo Express deployment: this part is standard stuff, this is where the pod blueprint or container configuration lives, we've exposed port 8081, this is the image with the latest tag, and these are the three environment variables that Mongo Express needs to connect and authenticate with MongoDB. The deployment is done, so let's go ahead and create the config map first and then the Mongo Express deployment: kubectl apply for the config map, and then kubectl apply for mongo express. Let's check the pod — ContainerCreating, looks good — check again and it's running. I also want to see the logs, so I get the logs of the mongo express pod, and here you see that mongo express started and the database connected.
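For reference, the config map and the way it's plugged into the deployment could look roughly like this — mongodb-configmap and database_url are just the names I'm assuming:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service   # the value is simply the name of the internal service

and in the mongo express deployment, the last environment variable becomes:

        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom:
            configMapKeyRef:
              name: mongodb-configmap   # the config map's metadata name
              key: database_url         # the key inside its data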
So now the final step is to access Mongo Express from a browser, and in order to do that we need an external service for Mongo Express, so let's go ahead and create it. I'll clear this output, go back to Visual Studio Code, and, as we did last time, create the mongo express service in the same file as the deployment, because in practice you basically never have a deployment without a service, so it makes sense to keep them together. This is the Mongo Express external service, and right now the configuration looks exactly the same as the mongodb service configuration — even the ports are the same, the service port is exposed at 8081 and the targetPort is where the container port is listening. So how do I make this an external service? By doing two things. First, in the specification section, below the selector, I put a type, and the type of this external service is LoadBalancer — which I think is a bad name for an external service, because the internal service also acts as a load balancer: if I had two MongoDB pods, the internal service would also load balance the requests coming to those pods. So the LoadBalancer type name wasn't chosen very well and can be confusing, but what this type basically does is accept external requests by assigning the service an external IP address. The second thing we do to make the service external is to provide a third port here, called nodePort, and that is the port where the external IP address will be open — so this is the port I'll have to put into the browser to access the service. The nodePort has a predefined range, between 30000 and 32767, so I can't give it the same port as the others; it has to be inside that range, and I'll just go with 30000, the minimum. And that's it — this configuration will create an external service, so let's go ahead and apply it, and I'll show you exactly how these ports differ from each other.

I apply the mongo express file and the service is created, and if I do get service I see that the mongodb service we created previously has the type ClusterIP, while the mongo express service we just created is LoadBalancer, the type we explicitly defined. In the internal service we didn't specify any type, because ClusterIP, the internal service type, is the default — you don't have to define it when creating an internal service. The difference is that ClusterIP gives the service an internal IP address, which is this one right here, while LoadBalancer also gives the service an internal IP address but, in addition, an external IP address where the external requests come in. Here it says pending, because we're in minikube and it works a little differently than in a regular kubernetes setup, where you would see an actual public IP address. And that's the other difference: with only an internal IP address you have one port for that address, while with both internal and external IP addresses you have ports for both of them, which is why we had to define the third port for the external IP address. As I said, pending means the service doesn't have an external IP address yet, and in minikube the way to get one is the command minikube service with the name of the service — this command basically assigns my external service a public IP address. I execute it, a browser window opens, and I see my Mongo Express page. If I go back to the command line, you can see that this command assigned the mongo express service a URL with an external IP address and the port that we defined as the nodePort, so I can basically copy that address, which is the same one as here, and I get the Mongo Express page. Now, with this setup, here is how it works: when I make a change here — for example I create a new database, let's call it test-db, whatever — and submit it, what just happened in the background is that the request landed at the external service of Mongo Express, which forwarded it to the Mongo Express pod; the pod connected to the internal service of MongoDB, the MongoDB service forwarded the request to the MongoDB pod, and then everything came all the way back, and we see the change here. So that's how you deploy a simple application setup in a kubernetes cluster.
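As a rough sketch, the external service from this last step could look like this — the service name mongo-express-service is an assumption on my part:

---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer      # accepts external requests by assigning an external IP
  ports:
    - protocol: TCP
      port: 8081          # service port
      targetPort: 8081    # container port of the mongo express pod
      nodePort: 30000     # must be in the 30000-32767 range

And then, in minikube, minikube service mongo-express-service gives you a reachable URL for it.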
In this video we're going to go through the usages of namespaces and the best practices of when and how to use them. First of all, what is a namespace? In a kubernetes cluster you can organize resources in namespaces, so you can have multiple namespaces in a cluster, and you can think of a namespace as a virtual cluster inside of a kubernetes cluster. Now, when you create a cluster, kubernetes gives you some namespaces out of the box, so in the command line, if I type kubectl get namespaces, I see the list of those out-of-the-box namespaces that kubernetes offers. Let's go through them one by one. The kubernetes-dashboard namespace ships automatically with minikube, so it's specific to the minikube installation; you won't have it in a standard cluster. The first standard one is kube-system. The kube-system namespace is not meant for your use, so you shouldn't create or modify anything in it; the components deployed there are the system processes — the master managing processes, kubectl and so on. The next one is kube-public, and what kube-public contains is basically publicly accessible data: it has a config map with cluster information that is accessible even without authentication, so if I type kubectl cluster-info here, this output comes from that information. The third one is kube-node-lease, which is a fairly recent addition to kubernetes; its purpose is to hold information about the heartbeats of nodes, so each node gets its own lease object that contains information about that node's availability. And the fourth namespace is the default namespace, which is the one you'll be using to create resources at the beginning, as long as you haven't created any new namespaces. But of course you can add and create new namespaces, and one way to do it is the kubectl create namespace command with the name of the namespace — so I can create my-namespace, and if I do kubectl get namespaces I now see it in the list. Another way to create namespaces is with a namespace configuration file, which I think is the better way, because then you also have a history in your configuration-file repository of what resources you created in the cluster.
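A minimal sketch of the configuration-file way, assuming a namespace called my-namespace — you would apply it with kubectl apply -f like any other component:

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace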
Okay, so now we've seen what namespaces are, that you can create new ones, and that kubernetes offers some by default, but the question is: what is the need for namespaces, when should you create them, and how should you use them? The first use case for creating your own namespaces is the following. Imagine you only have the default namespace, provided by kubernetes, and you create all your resources in it. If you have a complex application with multiple deployments, which create replicas of many pods, plus resources like services, config maps and so on, very soon your default namespace will be filled with all kinds of components, and it will be really difficult to keep an overview of what's in there, especially if multiple users are creating stuff inside it. A better way is to group resources into namespaces: for example, a database namespace where you deploy your database and all its required resources, a monitoring namespace where you deploy Prometheus and everything it needs, an elastic stack namespace where the elasticsearch, kibana and related resources go, and a namespace for the nginx-ingress resources — just one way of logically grouping your resources inside the cluster. Now, according to the official kubernetes documentation, you shouldn't use namespaces for smaller projects with up to ten users. I personally think it's always a good idea to group your resources in namespaces, because even with a small project and ten users you'll probably still need some additional resources for your application, like a logging system and a monitoring system, and even with that minimal setup it can already be too much to just throw everything into the default namespace.

Another use case where you will need namespaces is if you have multiple teams. Imagine this scenario: two teams use the same cluster, and one team deploys an application called my-app-deployment — that's the name of the deployment they create — with its own configuration. If the other team had a deployment that accidentally had the same name but a different configuration, and they applied it, they would overwrite the first team's deployment, and if they were using Jenkins or some other automated way to deploy it, they wouldn't even notice that they had overwritten or disrupted the other team's deployment. To avoid these kinds of conflicts, again, you can use namespaces, so that each team works in its own namespace without disrupting the other. Another use case: say you have one cluster and you want to host both a staging and a development environment in it, the reason being that, for example, if you use something like an nginx ingress controller or the elastic stack for logging, you can deploy it once in that cluster and use it for both environments, so you don't have to deploy these common resources twice in two different clusters — staging can use them, and so can development. A related use case is blue-green deployment for your application, which means that in the same cluster you want two different versions of production: the one that is active now, and the one that is going to be the next production version. The application versions in those blue and green production namespaces are different, but, just as with staging and development, those namespaces may need the same shared resources, like the nginx ingress controller or the elastic stack, and this way they can both use them without you having to set up a separate cluster. One more use case is limiting resources and access on the namespace level when you're working with multiple teams. Again we have two teams working on the same cluster, each with their own namespace; what you can do is give each team access only to their namespace, so they can create, update and delete resources in their own namespace but can't touch anything in the others. This way you restrict, or at least minimize, the risk of one team accidentally interfering with another team's work, and each has its own secure, isolated environment. An additional thing you can do on the namespace level is limit the resources each namespace consumes, because if you have a cluster with limited resources, you want to give each team a fair share for their application — if one team consumes too much, the others end up with less and their applications may not get scheduled because the cluster runs out of resources. So what you can do is define, per namespace, resource quotas that limit how much CPU, RAM and storage one namespace can use.
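Just as a sketch of that last point, a resource quota for one team's namespace could look something like this — the numbers and names are made up for illustration:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a          # the namespace this quota applies to
spec:
  hard:
    requests.cpu: "4"        # total CPU the namespace may request
    requests.memory: 8Gi     # total memory the namespace may request
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.storage: 100Gi  # total storage requested by persistent volume claims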
So I hope walking through these scenarios helped you analyze in which cases and in which way you should use namespaces in your specific project. There are several characteristics you should consider before deciding how to group things and how to use namespaces. The first one is that you can't access most resources from another namespace. For example, if you have a config map in the project-a namespace that references the database service, you can't use that config map in the project-b namespace; instead you have to create the same config map in project-b, also referencing the database service — so each namespace must define its own config map, even if it's exactly the same reference. The same applies to secrets: if you have credentials for a shared service, you'll have to create that secret in each namespace that needs it. However, a resource you can share across namespaces is a service, and that's what we saw on the previous slide: the config map in the project-b namespace references a service that will eventually be used in a pod, and the way it works is that in the config map definition the database URL is the service name — mysql-service — with the namespace appended at the end. Using that URL you can access services from other namespaces, which is very practical, and this is how you can use shared resources like elasticsearch or nginx from other namespaces. One more characteristic: we saw that most components and resources can be created within a namespace, but there are also some components in kubernetes that are not namespaced, so to say — they live globally in the cluster, and you can't isolate them or put them into a certain namespace. Examples of such resources are volumes, or persistent volumes, and nodes: when you create a volume it's accessible throughout the whole cluster, because it isn't in a namespace. You can list the components that are not bound to a namespace using the command kubectl api-resources --namespaced=false, and in the same way you can list all the resources that are bound to a namespace using --namespaced=true.
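As a small sketch of that cross-namespace service reference, assuming a mysql-service living in a namespace called database:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
  namespace: project-b
data:
  db_url: mysql-service.database   # service name plus the namespace it lives in

# and the two commands for listing non-namespaced vs. namespaced resources:
#   kubectl api-resources --namespaced=false
#   kubectl api-resources --namespaced=true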
So now that you've learned what namespaces are, why and in which cases it makes sense to use them, and some characteristics you should consider, let's actually see how to create components in a namespace. In the last example we created components using configuration files, and nowhere did we define a namespace — what happens is that, by default, if you don't provide a namespace for a component, it gets created in the default namespace. So if I apply this config map component — let's do that right now, kubectl apply -f with the config map file — and then do kubectl get configmap, my config map was created in the default namespace. And notice that even in the kubectl get configmap command I didn't use a namespace, because kubectl commands take the default namespace as, well, the default: kubectl get configmap is the same as kubectl get configmap -n default, it's just a shortcut. So one way to create this config map in a specific namespace is the kubectl apply command with an added --namespace flag and the namespace name — that will create the config map in my-namespace. Another way is inside the configuration file itself: I can adjust the config map configuration file to include the information about the destination namespace, by adding a namespace attribute in the metadata. If I apply this configuration file again with kubectl apply, and I now want to get the component I created in that specific namespace, I have to add the namespace flag to the kubectl get command, because, as I said, by default it only checks the default namespace. I recommend using the namespace attribute in the configuration file instead of providing it on the kubectl command, for two reasons: first, it's better documented — just by looking at the configuration file you know where the component is getting created, which can be important information — and second, if you're using automated deployment where you just apply the configuration files, it's the more convenient way.

Now, if we take a scenario where one team gets their own namespace and has to work entirely within it, it can be pretty annoying to add the namespace flag to every kubectl command, so to make it more convenient there is a way to change the default, or active, namespace to whatever namespace you choose. Kubernetes, or kubectl, doesn't have an out-of-the-box solution for that, but there's a tool called kubens, which you have to install — on a Mac I'll run brew install kubectx, which installs the kubens tool along with it. Once I have kubens installed I can just execute the kubens command, and it gives me a list of all the namespaces and highlights the active one, which is default right now. If I want to change the active namespace I run kubens with the namespace name, and that switches it — if I run kubens again I see that the active one is now my-namespace, so I can execute kubectl commands without providing the namespace. Obviously, if you switch between namespaces a lot, this might not be so convenient. For your own operating system and environment the installation process will be different, so I'm going to link the kubectx installation guide in the description below.
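For reference, the commands from this part look roughly like this — my-namespace and the file names are just the examples used here:

# apply into a specific namespace from the command line
kubectl apply -f configmap.yaml --namespace=my-namespace

# or pin the namespace inside the file itself, under metadata:
#   metadata:
#     name: mysql-configmap
#     namespace: my-namespace

# kubens ships with the kubectx package (installed here via Homebrew on macOS)
brew install kubectx
kubens                  # list namespaces, highlight the active one
kubens my-namespace     # switch the active namespace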
So in this video we're going to talk about what Ingress is, how you should use it, and what the different use cases for Ingress are. First of all, let's imagine a simple kubernetes cluster where we have a pod of my-app and its corresponding my-app service. The first thing you need for a UI application is for it to be accessible through a browser, so that external requests can reach your application. One easy way to do that is through an external service, where you basically access the application using the HTTP protocol, the IP address of the node, and the port. That's fine for test cases, or if you want to try something out very quickly, but it's not what the final product should look like: the final product should have a domain name for the application and a secure connection using https, and the way to do that is a kubernetes component called Ingress. So you'd have a my-app Ingress, and instead of an external service you'd have an internal service, meaning you would not open your application through the node's IP address and port. Now, when a request comes from the browser, it first reaches the Ingress, the Ingress redirects it to the internal service, and it eventually ends up at the pod.

Let's take a look at what the external service configuration looks like, so you have a practical comparison: you have a service of type LoadBalancer, which means we're opening it to the public by assigning the service an external IP address, and this is the port number the user can access the application at — so basically the external IP address plus the port number specified here. With Ingress, of course, it looks different. Instead of a service you have the kind Ingress, and in the specification, where the whole configuration happens, you have so-called rules, or routing rules, which basically define that all requests to that host must be forwarded to an internal service. So this is the host the user will enter in the browser, and in the Ingress you define the mapping: when a request to that host gets issued, it is redirected internally to a service. The path here means the URL path, everything after the domain name — whatever comes after the slash, you can define rules for it here, and we'll see some different examples of path configuration later. As you can see, this configuration uses the HTTP protocol; later in this video I'll show you how to configure an https connection using the Ingress component, but right now nothing in the specification is configured for a secure connection, it's just HTTP. One thing to note is that this http attribute does not correspond to the protocol in the browser address: it's the protocol the incoming request gets forwarded with to the internal service, so it's actually the second step — don't confuse the two. Now let's see how the internal service behind that Ingress looks. The backend is the target the incoming request will be redirected to: the serviceName should correspond to the internal service's name, and the servicePort should be the internal service's port. And as you see, the only difference between the external and the internal service is that in the internal service I don't have the third port — the nodePort starting from 30000.
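A minimal sketch of that pair — myapp.com, the service name and the ports are example values, and note that the exact Ingress fields differ between kubernetes API versions; this uses the older networking.k8s.io/v1beta1 style with serviceName and servicePort, which matches what's described here:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.com            # what the user types into the browser
    http:                      # protocol used for forwarding to the internal service
      paths:
      - path: /                # everything after the domain name
        backend:
          serviceName: myapp-internal-service
          servicePort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-internal-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 8080               # no nodePort and no LoadBalancer type: a plain internal (ClusterIP) service
      targetPort: 8080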
We also don't have the LoadBalancer type here; the type is the default internal service type, which is ClusterIP. Now, the host should be a valid domain address — you can't just write anything here — and you should map that domain name to the IP address of the node that represents the entry point to your kubernetes cluster. For example, if you decide that one of the nodes inside the kubernetes cluster is going to be the entry point, you should map the host to the IP address of that node; or, and we'll see that later, if you configure a server outside of the kubernetes cluster to be the entry point, then you should map the hostname to the IP address of that server.

So now that we've seen what the kubernetes Ingress component looks like, let's see how to actually configure Ingress in the cluster. Remember the diagram from the beginning: you have a pod, a service, and the corresponding Ingress. If you create that Ingress component alone, it won't be enough for the Ingress routing rules to work — what you need in addition is an implementation for Ingress, and that implementation is called an Ingress controller. So step one is to install an Ingress controller, which is basically another pod, or another set of pods, that runs on a node in your kubernetes cluster and does the evaluation and processing of Ingress rules. The yaml file I showed you with the Ingress component corresponds to this part right here, and the controller is what has to be additionally installed in the kubernetes cluster. So what exactly is an Ingress controller? Its function is to evaluate all the rules you have defined in your cluster and, that way, to manage all the redirections — it becomes the entry point in the cluster for all requests to the domains or subdomains you've configured rules for. You may have fifty rules, or fifty Ingress components, created in your cluster; the controller evaluates all of them and decides which forwarding rule applies to a specific request. In order to install this implementation of Ingress in your cluster, you have to decide which one of the many different third-party implementations you want to use — I'll put a link to the whole list in the description, where you'll see the different kinds of Ingress controllers you can choose from. There is one from the kubernetes project itself, the kubernetes nginx ingress controller, but there are others as well. Once you install an Ingress controller in your cluster, you're good to go: create Ingress rules, and the whole configuration will work.

Now that I've shown you how Ingress is used in a kubernetes cluster, there's one more thing I think is important to understand in terms of setting up the whole cluster to be able to receive external requests. First of all, you have to consider the environment where your kubernetes cluster is running. If you're using a cloud service provider like Amazon Web Services, Google Cloud or Linode — and there are a couple more that have out-of-the-box kubernetes solutions or their own virtualized load balancers — your cluster configuration would look something like this: you'd have a cloud load balancer specifically implemented by that cloud provider, external requests coming from the browser first hit the load balancer, and the load balancer then redirects the request to the Ingress controller. Now, this is not the only way to do it.
Even in a cloud environment you can do it in a couple of different ways, but this is one of the most common strategies, and the advantage of using a cloud provider for it is that you don't have to implement a load balancer yourself — with minimal effort, on most cloud providers, you'll have the load balancer up and running, ready to receive requests and forward them to your kubernetes cluster. So, a very easy setup. If, on the other hand, you're deploying your kubernetes cluster on a bare metal environment, you have to do that part yourself: basically you have to configure some kind of entry point to your kubernetes cluster, and there's a whole list of different ways to do that, which I'll also put in the description. Generally speaking, either inside the cluster or outside of it as a separate server, you have to provide an entry point, and one of those options is an external proxy server — a software or hardware solution that takes the role of the load balancer and entry point to your cluster. What this means is that you have a separate server, you give it a public IP address and open its ports so requests can be accepted, and this proxy server then acts as the entry point to your cluster. It's the only component accessible externally, so none of the servers in your kubernetes cluster have a publicly accessible IP address, which is obviously a very good security practice. All requests enter the proxy server, which redirects them to the Ingress controller; the Ingress controller decides which Ingress rule applies to that specific request, and then the whole internal request forwarding happens. As I said, there are different ways to configure and set this up depending on your environment and the approach you choose, but I think it's a very important concept for understanding how the whole cluster setup works.

In my case, since I'm using minikube to demonstrate all of this on my laptop, the setup is pretty easy, and even though it might not apply exactly to your cluster setup, you'll still see in practice how all these things work. The first thing is to install an Ingress controller in minikube, and the way to do that is by executing minikube addons enable ingress. What this does is automatically configure, or automatically start, the kubernetes nginx implementation of the Ingress controller — one of the many third-party implementations, which you can also safely use in production environments, not just in minikube — and that's what minikube offers you out of the box. So with one simple command the Ingress controller is configured in your cluster, and if you do kubectl get pod in the kube-system namespace, you'll see the nginx-ingress-controller pod running in your cluster.
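So, roughly, the two commands from this step are:

# enable the nginx ingress controller add-on in minikube
minikube addons enable ingress

# check that the controller pod is running (in this minikube setup it lives in kube-system)
kubectl get pod -n kube-system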
Once I have the Ingress controller installed, I can create an Ingress rule that the controller can evaluate. Let's head over to the command line, where I'm going to create an Ingress rule for the kubernetes dashboard component. In my minikube cluster I have the kubernetes dashboard, which is currently not accessible externally, and if I list everything in the kubernetes-dashboard namespace I can see that I already have an internal service and a running pod for it, so I can now create an Ingress rule to access the dashboard from a browser using some hostname. Let's go ahead and do that. I'm going to create an Ingress for the kubernetes dashboard: this is just metadata, the name will be dashboard-ingress, and the namespace will be the same namespace as the service and pod. In the specification we define the rules: the first rule is the host name — I'm just going to define dashboard.com — and then the HTTP forwarding to the internal service. For the path we'll leave it at all paths, and this is the backend: the serviceName will be what we saw in the namespace listing, and the servicePort is where the service listens, which is 80 right here. And that's it — that's the Ingress configuration for forwarding every request directed at dashboard.com to the internal kubernetes dashboard service, and we know it's internal because its type is ClusterIP, so no external IP address. Now, obviously I just made up the hostname dashboard.com — it's not registered anywhere, and I also didn't configure anywhere which IP address this hostname should resolve to, and that's something you will always have to configure. So first of all, let's create that Ingress rule: kubectl apply -f dashboard-ingress.yaml, and we see the Ingress was created. If I do get ingress in that namespace I should see my Ingress, and as you can see the address is still empty, because it takes a little time to assign an address to the Ingress, so we'll have to wait to get the IP address that will map to this host. I'm just going to watch this — and there, I see the address was assigned. So what I do now is take that address, and in my hosts file, at the end, I define the mapping, so that this IP address is mapped to dashboard.com. Again, this works locally: if I type dashboard.com in the browser, this is the IP address it will resolve to, which basically means the request will come into my minikube cluster, be handed over to the Ingress controller, and the Ingress controller will evaluate the rule I've defined here and forward the request to the service. That's all the configuration we need, so now I go and enter dashboard.com, and I see my kubernetes dashboard.
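For reference, the Ingress rule and the hosts file entry from this demo look roughly like this — the IP address is just an example of what minikube might assign, and the serviceName/servicePort style again assumes the older v1beta1 API:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard     # same namespace as the dashboard service and pod
spec:
  rules:
  - host: dashboard.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 80

# and the /etc/hosts entry mapping the assigned address to the made-up hostname:
#   192.168.64.5   dashboard.com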
Ingress also has something called a default backend: if I do kubectl describe ingress with the Ingress name and the namespace, I get this output, and there's an attribute called default backend that maps to default-http-backend on port 80. What this means is that whenever a request comes into the kubernetes cluster that is not mapped to any backend — there is no rule mapping that request to a service — this default backend is used to handle it. Obviously, if you haven't created or defined a service with that name in your cluster, kubernetes will try to forward the request to it, won't find it, and you'll just get some default error response — for example, if I enter some path I haven't configured, I just get "page not found". A good use for this is to define custom error messages for when a request comes in that the application can't handle, so that users still see something meaningful, or a custom page that redirects them to your home page or something like that. All you have to do is create an internal service with that exact name — default-http-backend — and that port number, and also create a pod, or application, that sends the custom error response.

So far I've shown you what Ingress is and how you can use it, plus a demo of creating an Ingress rule in minikube, but we've only used a very basic Ingress yaml configuration: a simple forward to one internal service with one path. You can do much more with Ingress configuration than basic forwarding, so in the next section we'll go through more use cases showing how you can define more fine-grained routing for applications inside a kubernetes cluster. The first one is defining multiple paths for the same host. Consider the following: Google has one domain but offers many services — if you have a Google account you can use analytics, shopping, a calendar, Gmail and so on — and all of these are separate applications accessible through the same domain. Say you have an application that does something similar: you offer two separate applications that are part of the same ecosystem, but you still want them on separate URLs. What you can do is, in the rules, define the host, myapp.com, and in the paths section define multiple paths: if the user wants to access your analytics application they enter myapp.com/analytics, and that forwards the request to the internal analytics service and its pod; if they want the shopping application, the URL is myapp.com/shopping. This way, with one Ingress for the same host, you can forward to multiple applications using multiple paths. Another use case: instead of using URL paths to make different applications accessible, some companies use subdomains, so instead of myapp.com/analytics they create a subdomain, analytics.myapp.com. If your application is configured that way, your Ingress configuration will look like this: instead of one host with multiple paths inside, you now have multiple hosts, where each host represents a subdomain, and inside each there is just one path, which again redirects the request to the analytics service. Pretty straightforward — the same setup with the analytics service and a pod behind it, but the request now uses the subdomain instead of the path.
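Sketched out, the rules sections for those two variants could look something like this — again in the v1beta1 style, with made-up service names and ports:

# variant 1: one host, multiple paths
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /analytics
        backend:
          serviceName: analytics-service
          servicePort: 3000
      - path: /shopping
        backend:
          serviceName: shopping-service
          servicePort: 8080

# variant 2: one subdomain per application
spec:
  rules:
  - host: analytics.myapp.com
    http:
      paths:
      - backend:
          serviceName: analytics-service
          servicePort: 3000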
And one final topic I mentioned that we'll cover here is configuring a TLS certificate. Until now we've only seen Ingress configuration for HTTP requests, but it's super easy to configure https forwarding in Ingress. The only thing you need to do is define an attribute called tls, above the rules section, with the host — the same host as down here — and a secretName, which is a reference to a secret you have to create in the cluster, holding the TLS certificate. The secret configuration would look like this: the name is the reference used up there, and the data — the actual contents — contain the TLS certificate and the TLS key. If you've seen my other videos where I create components like secrets, you probably noticed the additional type attribute: in kubernetes there is a specific secret type called TLS, and you have to use that type when you create a TLS secret. There are three small notes to make here: one, the keys of this data have to be named exactly like that; two, the values are the actual file contents of the certificate and the key — base64 encoded — and not the file path or location, so you have to put the whole content in there; and three, you have to create the secret in the same namespace as the Ingress component for the Ingress to be able to use it — you can't reference a secret from another namespace. And these few lines are all you need to configure the mapping of an https request to that host to the internal service.
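A minimal sketch of those two pieces, with placeholder values standing in for the certificate and key contents:

# in the Ingress, above the rules section
spec:
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-secret-tls
  rules:
  - host: myapp.com
    ...

# and the referenced secret, which must live in the same namespace as the Ingress
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret-tls
  namespace: default
data:
  tls.crt: base64-encoded-certificate-contents
  tls.key: base64-encoded-key-contents
type: kubernetes.io/tls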
In this video I'm going to explain all the main concepts of Helm so that you're able to use it in your own projects. Helm changes a lot from version to version, so understanding the basic common principles and, more importantly, its use cases, when and why we use Helm, will make it easier for you to use it in practice no matter which version you choose. The topics I'm going to go through are Helm and Helm charts, what they are, how to use them and in which scenarios they're used, and also what Tiller is and what part it plays in the Helm architecture.

So what is Helm? Helm has a couple of main features it's used for. The first one is as a package manager for kubernetes, so you can think of it as apt, yum or Homebrew for kubernetes: a convenient way of packaging collections of kubernetes YAML files and distributing them in public and private registries. Now these definitions may sound a bit abstract, so let's break them down with specific examples. Let's say you have deployed your application in a kubernetes cluster and you additionally want to deploy Elasticsearch in your cluster, which your application will use to collect its logs. To deploy the Elastic stack in your kubernetes cluster you would need a couple of kubernetes components: a stateful set, which is for stateful applications like databases, a config map with external configuration, a secret where credentials and other secret data are stored, a kubernetes user with its respective permissions, and also a couple of services. If you were to create all of these files manually, searching for each one of them separately on the internet, that would be a tedious job, and until you have all those YAML files collected, tried out and tested, it might take some time. And since an Elastic stack deployment is pretty much standard across clusters, other people would probably have to go through the same. So it made perfect sense that someone created these YAML files once, packaged them up and made them available somewhere, so that other people who use the same kind of deployment could use them in their kubernetes clusters. That bundle of YAML files is called a Helm chart.

Using Helm you can create your own Helm charts, or bundles of those YAML files, and push them to some Helm repository to make them available for others, or you can consume, meaning download and use, existing Helm charts that other people pushed and made available in different repositories. Commonly used deployments like database applications (Elasticsearch, MongoDB, MySQL) or monitoring applications like Prometheus, which all have this kind of complex setup, all have charts available in some Helm repository, so using a simple helm install command with the chart name you can reuse the configuration that someone else has already made without additional effort, and sometimes that someone is even the company that created the application. This functionality of sharing charts, which became pretty widely used, was actually one of the reasons Helm became so popular compared to its alternative tools. So if you have a cluster and you need some kind of deployment that you think should be available out there, you can look it up, either on the command line using helm search with a keyword, or on Helm's public repository Helm Hub, on the Helm charts pages, or in other available repositories; I'll put all the relevant links for this video in the description so you can check them out. Apart from those public registries for Helm charts there are also private registries, because when companies started creating charts they also started distributing them internally in the organization, so it made perfect sense to create registries to share those charts within the organization and not publicly, and there are a couple of tools out there that are used as private Helm chart repositories as well.

Another functionality of Helm is that it's a templating engine. What does that actually mean? Imagine you have an application that is made up of multiple microservices and you're deploying all of them in your kubernetes cluster, and the deployment and service of each of those microservices are pretty much the same, with the only difference that the application name and version, or the Docker image name and version tag, are different. Without Helm you would write separate YAML configuration files for each of those microservices, so you would have multiple deployment and service files where each one has its own application name and version defined. But since the only difference between those YAML files is just a couple of lines or a couple of values, with Helm you can define a common blueprint for all the microservices, and the values that are dynamic, the values that are going to change, are replaced by placeholders; that is a template file. The template file is standard YAML, but in some places, instead of values, you have the {{ .Values.xyz }} syntax, which means you're taking a value from external configuration. That external configuration, as the .Values syntax suggests, comes from an additional YAML file called values.yaml, where you define all the values you're going to use in the template file. .Values is an object that is created based on the values supplied via the values.yaml file and also through the command line using the --set flag; whichever way you define those additional values, they're all combined into the .Values object that you can then use in the template files to get the values out.
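Here's a minimal sketch of what such a template and its values.yaml could look like; the value names (name, replicaCount, image.repository, image.tag, containerPort) are made-up placeholders for this illustration, not fixed Helm conventions:

# templates/deployment.yaml — the blueprint with placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
      - name: {{ .Values.name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.containerPort }}

# values.yaml — the external configuration the template pulls from
name: my-microservice
replicaCount: 2
image:
  repository: myrepo/my-microservice
  tag: "1.0.0"
containerPort: 3000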
So now, instead of having separate YAML files for each microservice, you just have one, and you can simply replace those values dynamically. This is especially practical when you're using continuous integration and continuous delivery for your application, because in your build pipeline you can take those template YAML files and replace the values on the fly before deploying them.

Another use case where you can use Helm's package manager and templating engine features is when you deploy the same set of applications across different kubernetes clusters. Consider a use case where you have a microservice application that you want to deploy on development, staging and production clusters. Instead of deploying the individual YAML files separately into each cluster, you can package them up into your own application chart that has all the YAML files that particular deployment needs, and then use it to redeploy the same application in different kubernetes cluster environments with one command, which can make the whole deployment process easier.

So now that you know what Helm charts are used for, let's look at an example Helm chart structure to get a better understanding. Typically a chart is made up of the following directory structure: the top level is the name of the chart, and inside the directory you have the following. Chart.yaml is a file that contains all the meta information about the chart, like the name and version and maybe a list of dependencies. values.yaml, which I mentioned before, is the place where all the values for the template files are configured, and these are the default values that you can override later. The charts directory holds chart dependencies, meaning that if this chart depends on other charts, those dependencies are stored there. And the templates folder is where the template files are stored, so when you execute the helm install command to actually deploy those YAML files into kubernetes, the template files from there are filled with the values from values.yaml, producing valid kubernetes manifests that can then be deployed into kubernetes. Optionally you can have some other files in this folder, like a readme or a license file.

To get a better understanding of how values are injected into Helm templates, consider that in values.yaml, which is the default value configuration, you have the following three values: image name, port and version. As I mentioned, the default values defined there can be overridden in a couple of different ways. One way is that when executing helm install you can provide an alternative values YAML file using the --values flag. So if values.yaml has those three values, image name, port and version, you can define your own file called my-values.yaml, override one of those values or even add some new attributes, and the two files will be merged, resulting in a .Values object that has the image name and port from values.yaml and the version you overrode with your own values file. Alternatively you can also provide individual values using the --set flag, where you define the values directly on the command line, but of course it's more organized and better manageable to have files where you store all those values instead of just providing them on the command line.
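As a sketch of that merging behavior, using the three values from the example (the concrete names and numbers are just illustrative):

# values.yaml — the chart's defaults
imageName: myapp
port: 8080
version: 1.0.0

# my-values.yaml — passed with the --values flag on helm install
version: 2.0.0

# resulting .Values object after merging
imageName: myapp
port: 8080
version: 2.0.0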
Another feature of Helm is release management, which is provided based on its setup, but it's important to note here the difference between Helm versions 2 and 3. In version 2 the Helm installation comes in two parts: you have the Helm client and a server, and the server part is called Tiller. So whenever you deploy a Helm chart using helm install with the chart name, the Helm client sends the YAML files to Tiller, which runs, or has to run, in the kubernetes cluster, and Tiller then executes those requests and creates the components from those YAML files inside the kubernetes cluster. Exactly this architecture offers an additional valuable feature of Helm, which is release management. The way the Helm client-server setup works is that whenever you create or change a deployment, Tiller stores a copy of each configuration the client sends, for future reference, thereby creating a history of chart executions. So when you execute helm upgrade with the chart name, the changes are applied to the existing deployment instead of removing it and creating a new one, and in case the upgrade goes wrong, for example some YAML files or some configuration was wrong, you can roll back that upgrade using helm rollback with the chart name. All of this is possible because of the chart execution history that Tiller keeps whenever you send requests from the Helm client to Tiller. However, this setup has a big caveat: Tiller has too much power inside the kubernetes cluster; it can create, update and delete components, and it has too many permissions, which makes it a big security issue. This was one of the reasons why in Helm 3 the Tiller part was removed, so it's just a simple Helm binary now. It's worth mentioning because a lot of people have heard of Tiller, and when you deploy Helm version 3 you shouldn't be confused that Tiller isn't there anymore.

In this video I will show you how you can persist data in kubernetes using volumes. We will cover three components of kubernetes storage, persistent volume, persistent volume claim and storage class, and see what each component does, how it's created and how it's used for data persistence. Consider a case where you have a MySQL database pod, which your application uses. Data gets added and updated in the database, maybe you create a new database with a new user etc. But by default, when you restart the pod, all those changes will be gone, because kubernetes doesn't give you data persistence out of the box; that's something you have to explicitly configure for each application that needs to save data between pod restarts. So basically you need storage that doesn't depend on the pod lifecycle: it will still be there when the pod dies and a new one gets created, so the new pod can pick up where the previous one left off and read the existing data from that storage to get up-to-date data. However, you don't know on which node the new pod restarts, so your storage must also be available on all nodes, not just one specific one, so that when the new pod tries to read the existing data, the up-to-date data is there on any node in the cluster. And you also need highly available storage that will survive even if the whole cluster crashes. These are the criteria, or requirements, that your storage, for example your database storage, needs to meet to be reliable. Another use case for persistent storage, which is not for a database, is a directory: maybe you have an application that writes and reads files from a pre-configured directory; this could be session files for the application, or configuration files etc.
You can configure any of these types of storage using a kubernetes component called persistent volume. Think of a persistent volume as a cluster resource, just like RAM or CPU, that is used to store data. A persistent volume, just like any other component, gets created using a kubernetes YAML file, where you specify the kind, which is PersistentVolume, and in the spec section you define different parameters, like how much storage should be created for the volume. But since a persistent volume is just an abstract component, it must take the storage from actual physical storage, like a local hard drive on the cluster nodes, an external NFS server outside the cluster, or cloud storage like AWS block storage or Google Cloud storage. So the question is: where does this storage backend come from, local or remote or in the cloud, who configures it, and who makes it available to the cluster? That's the tricky part of data persistence in kubernetes, because kubernetes doesn't care about your actual storage; it gives you the persistent volume component as an interface to the actual storage that you, as a maintainer or administrator, have to take care of. You have to decide what type of storage your cluster services or applications need, and create and manage it yourself, managing meaning doing backups, making sure it doesn't get corrupted, etc. So think of storage in kubernetes as an external plugin to your cluster. Whether it's local storage on the actual nodes where the cluster is running or remote storage doesn't matter: they're all plugins to the cluster, and you can have multiple storages configured for your cluster, where one application uses local disk storage, another one uses an NFS server and another one uses some cloud storage, or one application may use multiple of those storage types. By creating persistent volumes you can use these actual physical storages. In the persistent volume spec section you define which storage backend you want to use to create that storage abstraction, or storage resource, for your applications. This is an example where we use an NFS storage backend: we define how much storage we need, some additional parameters, like whether the storage should be read-write or read-only, and the storage backend with its parameters. This is another example where we use Google Cloud as a storage backend, again with the storage backend specified, plus the capacity and access modes. Obviously, depending on the storage type, the storage backend, some of the attributes in the spec will be different, because they're specific to that storage type. And this is another example of local storage, on the node itself, which has an additional nodeAffinity attribute. Now you don't have to remember and know all these attributes at once, because you may not need all of them, and I will also make separate videos covering some of the most used volumes and explaining them individually with examples and demos, where I'll explain in more detail which attributes are used for those specific volumes and what they actually mean. In the official kubernetes documentation you can see the complete list of more than 25 storage backends that kubernetes supports. Note here that persistent volumes are not namespaced, meaning they're accessible to the whole cluster; unlike the other components we saw, like pods and services, they're not in any namespace, they're just available to the whole cluster, to all namespaces.
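For example, a persistent volume backed by an NFS share might be sketched like this (the server address, export path, size and access mode are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 5Gi                      # how much storage this volume provides
  accessModes:
    - ReadWriteMany                   # NFS can be mounted read-write by many nodes
  nfs:                                # the actual storage backend and its parameters
    server: nfs-server.example.com    # assumed NFS server address
    path: /exports/data               # assumed export path on that server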
Now it's important to differentiate between two categories of volumes: local and remote. Each volume type in these two categories has its own use case, otherwise it wouldn't exist, and we'll see some of those use cases later in this video. However, local volume types violate the second and third requirements of data persistence for databases that I mentioned at the beginning: not being tied to one specific node but being available on each node equally, because you don't know where the new pod will start, and surviving cluster crash scenarios. Because of this, for database persistence you should almost always use remote storage.

So who creates these persistent volumes, and when? As I said, persistent volumes are resources, like CPU or RAM, so they have to already be there in the cluster when the pod that depends on them, or uses them, is created. A side note here: there are two main roles in kubernetes. There's the administrator, who sets up and maintains the cluster and makes sure it has enough resources; these are usually system administrators or DevOps engineers in a company. And the second role is the kubernetes user, who deploys applications in the cluster, either directly or through a CI pipeline; these are the developer or DevOps teams who create the applications and deploy them. So in this case, the kubernetes administrator would be the one to configure the actual storage, meaning make sure the NFS server storage is there and configured, or maybe create and configure cloud storage that will be available for the cluster, and second, create the persistent volume components from those storage backends, based on information from the developer team about what types of storage their applications need. The developers then know the storage is there and can be used by their applications, but for that, the developers have to explicitly configure the application YAML files to use those persistent volume components; in other words, the application has to claim that volume storage, and you do that using another kubernetes component called persistent volume claim. Persistent volume claims, or PVCs, are also created with a YAML configuration. Here's an example claim; again, don't worry about understanding each and every attribute defined here. At a high level, the way it works is that the PVC claims a volume with a certain storage size, or capacity, which is defined in the persistent volume claim, and some additional characteristics, like whether the access type should be read-only or read-write, the type etc., and whatever persistent volume matches these criteria, or in other words satisfies the claim, will be used for the application. But that's not all: you now have to use that claim in your pod's configuration, like this. In the pod specification you have the volumes attribute, which references the persistent volume claim by its name, so the pod, and all the containers inside the pod, will have access to that persistent volume storage. To go through those levels of abstraction step by step: pods access storage by using the claim as a volume, so they request the volume through the claim; the claim then tries to find a persistent volume in the cluster that satisfies it; and the volume has the actual storage backend that it creates the storage resource from. In this way the pod is able to use that actual storage backend. Note here that claims must exist in the same namespace as the pod using the claim, while, as I mentioned before, persistent volumes are not namespaced.
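A minimal sketch of that chain, a claim plus a pod that references it by name (the claim name, sizes, image and mount path are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi          # a matching persistent volume must satisfy this
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp
    image: nginx             # illustrative image
    volumeMounts:
    - name: myapp-data
      mountPath: /var/www/html
  volumes:
  - name: myapp-data
    persistentVolumeClaim:
      claimName: pvc-name    # references the claim above by name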
So once the pod finds a matching persistent volume through the persistent volume claim, the volume is mounted into the pod, at the pod level, and then that volume can be mounted into the container inside the pod, which is one level further in. If you have multiple containers in the pod, you can decide to mount the volume in all of the containers or just some of them. Now the container, and the application inside the container, can read and write to that storage, and when the pod dies and a new one gets created, it will have access to the same storage and see all the changes the previous pod or containers made. Again, I'll show the attributes used here, like volumes and volumeMounts, and explain them more specifically in a later demo video.

Now you may be wondering: why so many abstractions for using a volume, where the admin role has to create the persistent volume and the user role creates a claim on that persistent volume and uses it in a pod? Can't I just use one component and configure everything there? Well, this actually has a benefit, because as a user, meaning a developer who just wants to deploy their application in the cluster, you don't care where the actual storage is. You know you want your database to have persistence, and whether the data lives on GlusterFS or AWS EBS or local storage doesn't matter to you, as long as the data is safely stored. Or if you need directory storage for files, you don't care where the directory actually lives, as long as it has enough space and works properly. And you sure don't want to have to set up those actual storages yourself; you just want 50 gigabytes of storage for your Elasticsearch, or 10 gigabytes for your application, that's it. So you make a claim for storage using a PVC and assume the cluster already has storage resources there, and this makes deploying applications easier for developers, because they don't have to take care of anything beyond deploying the application.

Now there are two volume types that I think need to be mentioned separately, because they're a bit different from the rest, and these are config map and secret. If you've watched my other video on kubernetes components, you're already familiar with both. Both of them are local volumes, but unlike the rest, these two aren't created via PV and PVC; rather they are their own components and are managed by kubernetes itself. Consider a case where you need a configuration file for your Prometheus pod, or maybe for a message broker service like Mosquitto, or consider when you need a certificate file mounted inside your application. In both cases you need a file available to your pod, and the way this works is that you create a config map or secret component and mount it into your pod and into your container the same way you would mount a persistent volume claim; instead of the claim you just have a config map or secret there, and I'll show a demo of this in the video where I cover local volume types.

To quickly summarize what we've covered so far: at its core, a volume is just a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory is made available, what storage medium actually backs it, and the contents of that directory are defined by the specific volume type you use. So to use a volume, the pod specifies what volumes to provide in the volumes attribute of its specification, and inside the container section you then decide where to mount that storage using the volumeMounts attribute.
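Here's a sketch of such a pod spec mounting several volume types at once, along the lines of the Elasticsearch example that follows; the claim, config map and secret names, the image tag and the mount paths are all assumed placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: elastic-pod
spec:
  containers:
  - name: elastic-container
    image: elasticsearch:7.17.0          # illustrative image tag
    volumeMounts:
    - name: es-persistent-storage        # PVC-backed data directory
      mountPath: /usr/share/elasticsearch/data
    - name: es-config                    # configuration file(s) from a config map
      mountPath: /etc/es-config
    - name: es-certs                     # client certificate from a secret
      mountPath: /etc/es-certs
      readOnly: true
  volumes:
  - name: es-persistent-storage
    persistentVolumeClaim:
      claimName: es-pvc                  # claim that, in the background, binds a persistent volume
  - name: es-config
    configMap:
      name: es-configmap
  - name: es-certs
    secret:
      secretName: es-secret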
The mountPath is a path inside the container where the application can access whatever storage we mounted into the container, and as I said, if you have multiple containers you can decide which of them should get access to that storage. An interesting note: a pod can actually use multiple volumes of different types simultaneously. Let's say you have an Elasticsearch application, or pod, running in your cluster that needs a configuration file mounted through a config map, needs a certificate, let's say a client certificate, mounted as a secret, and needs database storage, say backed by AWS Elastic Block Storage. In this case you configure all three inside your pod or deployment: in the pod specification, at the volumes level, you list all the volumes that you want to mount into the pod, so a persistent volume claim that in the background claims a persistent volume from AWS block storage, the config map and the secret, and then in volumeMounts you list all those storage mounts using their names, the persistent storage, the config map and the secret, and each of them is mounted to a certain path inside the container.

Now, we saw that to persist data in kubernetes, admins need to configure storage for the cluster and create persistent volumes, and developers can then claim them using PVCs. But consider a cluster with hundreds of applications, where things get deployed daily and storage is needed for those applications. Developers need to ask admins to create the persistent volumes they need before deploying their applications, and admins then may have to manually request storage from a cloud or storage provider and create hundreds of persistent volumes for all the applications that need storage, by hand, and that can be tedious, time-consuming and can get messy very quickly. To make this process more efficient, there is a third component of kubernetes persistence called storage class. A storage class creates, or provisions, persistent volumes dynamically whenever a PVC claims one, and this way creating or provisioning volumes in a cluster can be automated. A storage class also gets created with a YAML configuration file; this is an example file with the kind StorageClass. The storage class creates persistent volumes dynamically in the background: remember that we defined the storage backend in the persistent volume component, and now we define it in the storage class component instead. We do that using the provisioner attribute, which is the main part of the storage class configuration, because it tells kubernetes which provisioner should be used for a specific storage platform or cloud provider to create the persistent volume component from it. Each storage backend has its own provisioner: the ones kubernetes offers internally are prefixed with kubernetes.io, like the one here, and these are the internal provisioners; for other storage types there are external provisioners that you have to explicitly find and use in your storage class. In addition to the provisioner attribute we configure the parameters of the storage we want to request for the persistent volume, like the ones here. So a storage class is basically another abstraction level that abstracts the underlying storage provider, as well as the parameters, or characteristics, of that storage, like the disk type etc.
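As a sketch, a storage class using the internal AWS EBS provisioner together with a claim that references it by name (the class name, parameters and size are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-name
provisioner: kubernetes.io/aws-ebs    # internal provisioner for AWS EBS
parameters:                           # backend-specific characteristics of the storage
  type: io1
  iopsPerGB: "10"
  fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: storage-class-name   # the claim requests storage from this class
  resources:
    requests:
      storage: 100Gi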
So how does it work, or how do you use a storage class in the pod configuration? The same as a persistent volume, it is requested, or claimed, by a PVC: in the PVC configuration we add an additional attribute called storageClassName that references the storage class to be used to create a persistent volume that satisfies the claims of this PVC. So now, when a pod claims storage through a PVC, the PVC requests that storage from the storage class, which then provisions, or creates, a persistent volume that meets the needs of the claim, using the provisioner from the actual storage backend. This should give you a high-level overview of how data is persisted in kubernetes.

In this video we're going to talk about what a stateful set is in kubernetes and what purpose it has. So what is a stateful set? It's the kubernetes component that is used specifically for stateful applications, so to understand it you first need to understand what a stateful application is. Examples of stateful applications are all databases, like MySQL, Elasticsearch, MongoDB etc., or any application that stores data to keep track of its state. In other words, these are applications that track state by saving that information in some storage. Stateless applications, on the other hand, do not keep records of previous interactions; each request or interaction is handled as a completely new, isolated interaction, based entirely on the information that comes with it. And sometimes stateless applications connect to stateful applications to forward those requests. So imagine a simple setup of a Node.js application connected to a MongoDB database. When a request comes in to the Node.js application, it doesn't depend on any previous data to handle that incoming request; it can handle it based on the payload in the request itself. Now a typical request like that will additionally need to update some data in the database, or query data, and that's where MongoDB comes in: when Node.js forwards that request to MongoDB, MongoDB will update the data based on its previous state, or query the data from its storage. So for each request it needs to handle data, and it always depends on the most up-to-date data, or state, being available, while Node.js is just a pass-through for data updates and queries and simply processes code. Because of this difference between stateful and stateless applications, they're deployed in different ways, using different kubernetes components. Stateless applications are deployed using the deployment component, where a deployment is an abstraction of pods that allows you to replicate the application, meaning run two, five, ten identical pods of the same stateless application in the cluster. So while stateless applications are deployed using deployments, stateful applications in kubernetes are deployed using the stateful set component. And just like a deployment, a stateful set makes it possible to replicate the stateful app's pods, or run multiple replicas of it; in other words, they both manage pods that are based on an identical container specification, and you can also configure storage with both of them in the same way. So if both manage the replication of pods and also the configuration of data persistence in the same way, the question a lot of people ask, and are often confused about, is: what is the difference between those two components, why do we use different ones for each type of application? In the next section we're going to talk about the differences.
Replicating a stateful application is more difficult and has a couple of requirements that stateless applications don't have. Let's look at this first with the example of a MySQL database. Say you have one MySQL database pod that handles requests from a Java application, which is deployed using a deployment component, and let's say you scale the Java application to three pods so it can handle more client requests in parallel. Now you also want to scale the MySQL app so it can handle more Java requests. Scaling your Java application here is pretty straightforward: the Java application's replica pods are identical and interchangeable, so you can scale it using a deployment quite easily. The deployment will create the pods in any random order, they get random hashes at the end of the pod name, they get one service that load balances to any one of the replica pods for any request, and when you delete them they get deleted in a random order, or at the same time; and when you scale them down from three to two replicas, for example, one random replica pod gets chosen to be deleted. So no complications there. On the other hand, MySQL pod replicas cannot be created and deleted at the same time in any order, and they can't be randomly addressed, and the reason is that the replica pods are not identical: each has its own additional identity on top of the common blueprint of the pod they get created from. Giving each pod its own required, individual identity is what a stateful set does differently from a deployment: it maintains a sticky identity for each of its pods. As I said, these pods are created from the same specification, but they're not interchangeable; each has a persistent identifier that it maintains across any rescheduling, meaning that when a pod dies and gets replaced by a new one, it keeps that identity. So the question you may be asking now is: why do these pods need their own identities, why can't they be interchangeable just like with a deployment? This is a concept you need to understand about scaling database applications in general. When you start with a single MySQL pod, it is used for both reading and writing data, but when you add a second one, it cannot act the same way, because if you allow two independent instances of MySQL to change the same data, you will end up with data inconsistency. So instead there is a mechanism that decides that only one pod is allowed to write, or change, the data, while shared reading of the same data by multiple MySQL pod instances at the same time is completely fine. The pod that is allowed to update the data is called the master; the others are called slaves. So this is the first thing that differentiates these pods from each other: not all pods are identical, there is a master pod and there are slave pods. And there is also a difference between those pods in terms of storage, which is the next point. These pods do not have access to the same physical storage: even though they use the same data, they're not using the same physical storage of the data, they each have their own replica of the storage that each of them can access for itself. This means that each pod replica must at any time have the same data as the other ones, and in order to achieve that they have to continuously synchronize their data. Since the master is the only one allowed to change data, and the slaves each take care of their own data storage, the slaves obviously must know about each such change so they can update their own storage and be up to date for the next query request.
There is a mechanism in such a clustered database setup that allows for continuous data synchronization: the master changes data, and all slaves update their own data storage to keep in sync and make sure each pod has the same state. Now let's say you have one master and two slave pods of MySQL; what happens when a new pod replica joins the existing setup? That new pod also needs to create its own storage and then take care of synchronizing it. What happens is that it first clones the data from the previous pod, not just any pod in the setup but always the previous one, and once it has the up-to-date data cloned, it starts continuous synchronization as well, listening for any updates by the master pod. This also means, and I want to point this out since it's a pretty interesting point, that you can actually have temporary storage for a stateful application and not persist the data at all, since the data gets replicated between the pods. So theoretically it is possible to just rely on data replication between the pods, but this would also mean that all the data is lost when all the pods die: for example, if the stateful set gets deleted, or the cluster crashes, or all the nodes where these pod replicas are running crash and every pod dies at the same time, the data would be gone. Therefore it's still best practice to use data persistence for stateful applications if losing the data is unacceptable, which is the case for most database applications. With persistent storage, the data will survive even if all the pods of the stateful set die, or even if you delete the complete stateful set component and all the pods get wiped out; the persistent storage and the data will still remain, because the persistent volume's lifecycle isn't tied to the lifecycle of other components like deployment or stateful set. The way to do this is by configuring persistent volumes for your stateful set, so that each pod has its own data storage, meaning its own persistent volume that is backed by its own physical storage, which holds the synchronized, replicated database data but also the state of the pod. Each pod has its own state, with information about whether it's a master pod or a slave and other individual characteristics, and all of this gets stored in the pod's own storage. That means that when a pod dies and gets replaced, the persistent pod identifier makes sure that the storage volume gets reattached to the replacement pod, and this matters because that storage holds the state of the pod in addition to the replicated data; it could clone the data again without a problem, but it shouldn't lose its state, or identity state so to say. For this reattachment to work, it's important to use remote storage, because if the pod gets rescheduled from one node to another, the previous storage must be available on the other node as well, and you can't do that with local volume storage, because it's usually tied to a specific node. The last difference between deployment and stateful set is something I mentioned before: the pod identifier, meaning that every pod has its own identifier. Unlike deployments, where pods get random hashes at the end, stateful set pods get fixed, ordered names, made up of the stateful set name and an ordinal.
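A minimal sketch of such a stateful set, assuming the usual volumeClaimTemplates mechanism for giving every replica its own persistent volume claim; the names, image, storage class and size are placeholders, and a real MySQL cluster would additionally need credentials and replication configuration, which, as discussed, kubernetes doesn't set up for you:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql-headless        # governing headless service (assumed name), gives pods their DNS names
  replicas: 3                        # creates mysql-0, mysql-1, mysql-2 in order
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0             # illustrative image; needs env/config to actually run
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:              # each replica gets its own PVC (data-mysql-0, data-mysql-1, ...)
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard     # assumed storage class backed by remote storage
      resources:
        requests:
          storage: 10Gi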
The ordinal starts from zero, and each additional pod gets the next one, so if we create a stateful set called mysql with three replicas, you'll have pods named mysql-0, mysql-1 and mysql-2. The first one is the master and then come the slaves, in the order of startup. An important note here is that the stateful set will not create the next pod replica if the previous one isn't already up and running; if the first pod's creation failed, for example, or if it's pending, the next one won't get created at all, it will just wait. The same order is held on deletion, but reversed: for example, if you delete the stateful set, or if you scale it down from three replicas to one, the deletion starts from the last pod, so mysql-2 gets deleted first, it waits until that pod is successfully deleted, then it deletes mysql-1, and then mysql-0. Again, all of these mechanisms are in place to protect the data and the state that the stateful application depends on. In addition to these fixed, predictable names, each pod in a stateful set gets its own DNS endpoint from a service. There is a service name for the stateful application, just like for a deployment, that addresses any replica pod, and in addition to that there is an individual DNS name for each pod, which deployment pods do not have. The individual DNS names are made up of the pod name and the governing service name, which is basically a service name that you define inside the stateful set. These two characteristics, having a predictable, fixed name as well as a fixed individual DNS name, mean that when a pod restarts, the IP address will change, but the name and endpoint stay the same; that's why it's said that pods get sticky identities: the identity sticks to the pod even between restarts, and this sticky identity makes sure that each replica pod can retain its state and its role even when it dies and gets recreated. Finally, I want to mention an important point: as you see, replicating stateful apps like databases, with their persistent storage, requires a complex mechanism, and kubernetes helps and supports you in setting this whole thing up, but you still need to do a lot by yourself, where kubernetes doesn't help you or doesn't provide out-of-the-box solutions. For example, you need to configure the cloning and data synchronization inside the stateful set yourself, make the remote storage available, and take care of managing and backing it up; all of this you have to do on your own. The reason is that stateful applications are not a perfect candidate for containerized environments; in fact, Docker, kubernetes and containerization in general are a perfect fit for stateless applications that don't have any state or data dependency and only process code, so scaling and replicating them in containers is super easy.

In this video I will give you a complete overview of kubernetes services. First I'll explain shortly what the service component is in kubernetes and when we need it, and then we'll go through the different service types: ClusterIP, headless, NodePort and LoadBalancer services. I'll explain the differences between them and when to use which one, so by the end of the video you'll have a great understanding of kubernetes services and will be able to use them in practice. So let's get started. What is a service in kubernetes, and why do we need it? In a kubernetes cluster, each pod gets its own internal IP address, but pods in kubernetes are ephemeral, meaning they come and go very frequently.
When a pod restarts, or when an old one dies and a new one gets started in its place, it gets a new IP address, so it doesn't make sense to use pod IP addresses directly, because then you would have to adjust them every time a pod gets recreated. With a service, however, you have a stable, or static, IP address that stays even when the pod dies. So basically, in front of each pod, or set of pod replicas, we put a service, which represents a persistent, stable IP address to access those pods. A service also provides load balancing, because when you have pod replicas, for example three replicas of your microservice application or three replicas of a MySQL application, the service gets each request targeted at that microservice or MySQL application and forwards it to one of those pods. So clients can call a single stable IP address instead of calling each pod individually, which makes services a good abstraction for loose coupling, for communication within the cluster, between components or pods inside the cluster, but also for communication from external sources, like browser requests coming into the cluster, or when talking to an external database, for example.

There are several types of services in kubernetes. The first and most common one, which you will probably use most of the time, is the ClusterIP type. This is the default type of a service, meaning that when you create a service and don't specify a type, it automatically takes ClusterIP as its type. So let's see how ClusterIP works and where it's used in a kubernetes setup. Imagine we have a microservice application deployed in the cluster: we have a pod with the microservice container running inside it, and beside that microservice container we have a sidecar container that collects the microservice's logs and sends them to some destination database. These two containers are running in the pod, and let's say your microservice container is running on port 3000 and your logging container, let's say, is running on port 9000.
This means that those two ports are now open and accessible inside the pod, and the pod also gets an IP address from a range that is assigned to its node. The way that works is that if you have, for example, three worker nodes in your kubernetes cluster, each worker node gets a range of IP addresses, which are internal to the cluster, for its pods. For example, the first worker node gets IP addresses from the 10.2.1.x range, the second worker node gets the next range, and the third worker node the one after that. So let's say this pod starts on node 2: it gets an IP address from that node's range. If you want to see the IP addresses of the pods in your cluster, you can check them using the kubectl get pod -o wide command, which gives you some extra information about the pods, including their IP addresses, and there you'll see the IP address each pod got assigned; as I mentioned, these come from the IP address range of the worker node the pod runs on. So now we can access those containers inside the pod at that IP address, on those ports. If we set the replica count to 2, we get another pod, identical to the first one, which opens the same ports and gets a different IP address, say from worker node 1's range if it starts there. Now let's say this microservice is accessible through a browser, so we have Ingress configured, and the requests coming in from the browser to the microservice are handled by Ingress. How does this incoming request get forwarded from Ingress all the way to the pod? That happens through a service, a ClusterIP or so-called internal service. A service in kubernetes is a component, just like a pod, but it's not a process; it's just an abstraction layer that basically represents an IP address. So the service gets an IP address it is accessible at, and it is also accessible at a certain port; let's say we define that port to be 3200. Ingress will then talk to the service, or hand over the request to the service, at that IP address and port; that's how the service is accessible within the cluster. The way it works is that we define Ingress rules that forward requests, based on the request address, to certain services, and we identify the service by its name; DNS resolution then maps that service name to the IP address the service actually got assigned, and that's how Ingress knows how to talk to the service. Once the request gets handed over to the service at that address, the service knows to forward it to one of the pods that are registered as the service's endpoints. Now here are two questions: first, how does the service know which pods it is managing, or which pods to forward the request to, and second, how does the service know which port to forward the request to on that specific pod? The first one is defined by selectors: a service identifies its member pods, or endpoint pods, using the selector attribute. In the service specification, in the YAML file we create the service from, we specify the selector attribute, which holds key-value pairs. These key-value pairs are labels that pods must have to match the selector, and in the pod configuration file we assign the pods those labels in the metadata section. The labels are arbitrary names, so we could say app: my-app, for example, and add some other labels; this is something we define ourselves, we can use any name we want; they're just key-value pairs that identify a set of pods.
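Putting that together, here's a sketch of a deployment whose pod template carries those labels and a service whose selector matches them (the names, image and ports are the illustrative values from this example):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      type: microservice
  template:
    metadata:
      labels:
        app: my-app            # labels the service selector will match
        type: microservice
    spec:
      containers:
      - name: microservice
        image: my-registry/my-app:1.0   # assumed image
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: microservice-service
spec:
  selector:                    # matches pods that carry all of these labels
    app: my-app
    type: microservice
  ports:
  - protocol: TCP
    port: 3200                 # the port the service is reachable at
    targetPort: 3000           # the port the container listens on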
In the service YAML file we then define a selector to match any pod that has all of those labels. This means that if we have a deployment component that creates three replicas of pods with the labels app: my-app and type: microservice, for example, and in the service's selector attribute we define those two labels, then the service will match all three pod replicas and register all three pods as its endpoints. And as I said, a pod has to match all of the selector's labels, not just one. So that is how the service knows which pods belong to it, meaning where to forward the request to. The second question was: if a pod has multiple ports open, with two different applications listening inside the pod, how does the service know which port to forward the request to? That is defined in the targetPort attribute. Let's say the targetPort in our example is 3000. What this means is that when we create the service, it will find all the pods that match the selector, so those pods become endpoints of the service, and when the service gets a request, it picks one of those pod replicas randomly, because it's a load balancer, and sends the request it received to that specific pod on the port defined by the targetPort attribute, in this case 3000. Also note that when you create a service, kubernetes creates an Endpoints object with the same name as the service itself, and kubernetes uses this Endpoints object to keep track of which pods are members, or endpoints, of the service. And since this is dynamic, because the endpoints get updated whenever you create a new pod replica or a pod dies, this object basically tracks that. Note also that the service port itself is arbitrary, you can define it yourself, whereas the targetPort is not arbitrary: it has to match the port the application container inside the pod is listening on. Now let's say our microservice application got its request from the browser, through Ingress and the internal ClusterIP service, and now it needs to communicate with a database to handle that request. In our example, let's assume the microservice application uses a MongoDB database, so we have two replicas of MongoDB in the cluster, which also have their own service endpoint: the MongoDB service is also of type ClusterIP and has its own IP address. So now the microservice application inside the pod can also talk to the MongoDB database using the service endpoint: the request goes from one of the pods to the MongoDB service, at its IP address and the port the service has open, and the service again selects one of the MongoDB pod replicas and forwards the request to it on the targetPort defined there, which is the port the MongoDB application inside the pod is listening on. Now let's assume that inside that MongoDB pod there is another container running that collects the monitoring metrics for Prometheus, a MongoDB exporter, and that container, let's say, is running on port 9216; that's where that application is accessible. In the cluster we have a Prometheus application that scrapes the metrics endpoint of this MongoDB exporter container. That means the service has to handle two different endpoint requests, which also means the service has two of its own ports open for handling these two different requests: one for the clients that want to talk to the MongoDB database and one for clients like Prometheus that want to talk to the MongoDB exporter application. This is an example of a multi-port service, and note that when you have multiple ports defined in a service, you have to name those ports; if it's just one port you can leave it, so to say, anonymous, the name attribute is optional, but with multiple ports defined you have to name each one of them.
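A sketch of such a multi-port service for this MongoDB example (the selector label and the standard MongoDB port 27017 are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - name: mongodb              # ports must be named when there is more than one
    protocol: TCP
    port: 27017
    targetPort: 27017          # MongoDB application inside the pod
  - name: mongodb-exporter
    protocol: TCP
    port: 9216
    targetPort: 9216           # exporter sidecar scraped by Prometheus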
So these were examples of the ClusterIP service type. Now let's look at another service type, called headless. What is a headless service? As we saw, each request to the service is forwarded to one of the pod replicas that are registered as service endpoints. But imagine a client wants to communicate with one of the pods directly and selectively, or the endpoint pods need to communicate with each other directly, without going through the service. Obviously, in this case it wouldn't make sense to talk to the service endpoint, which randomly selects one of the pods, because we want communication with a specific pod. Now what would such a use case be? A use case where this is necessary is when deploying stateful applications in kubernetes, stateful applications like databases: MySQL, MongoDB, Elasticsearch and so on. In such applications the pod replicas aren't identical; rather, each one has its individual state and characteristics. For example, if we're deploying a MySQL application, you would have a master instance and worker instances of MySQL, and the master is the only pod allowed to write to the database, while the worker pods must connect to the master to synchronize their data after the master pod has made changes to the database, so that they get the up-to-date data as well. And when a new worker pod starts, it must connect directly to the most recent worker pod to clone the data from it and also get up to date with the data state. So that's the most common use case where you need direct communication with individual pods. For a client to connect to the pods individually, it needs to figure out the IP address of each individual pod. One option to achieve this would be to make an API call to the kubernetes API server, which would return the list of pods and their IP addresses, but that would tie your application too closely to the kubernetes-specific API, and it would also be inefficient, because you'd have to fetch the whole list of pods and their IP addresses every time you want to connect to one of them. As an alternative, kubernetes allows clients to discover pod IP addresses through DNS lookups. Usually, when a client performs a DNS lookup for a service, the DNS server returns a single IP address, which belongs to the service: the service's cluster IP address, which we saw previously. However, if you tell kubernetes that you don't need a cluster IP address for the service, by setting the clusterIP field to None when creating the service, then the DNS server will return the pod IP addresses instead of the service's IP address. Now the client can do a simple DNS lookup to get the IP addresses of the pods that are members of that service, and then use those IP addresses to connect to the specific pod it wants to talk to, or to all of the pods. So the way we define a headless service in the service configuration file is simply by setting clusterIP to None.
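A sketch of that headless variant for the same MongoDB example (names and ports are again illustrative):

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service-headless
spec:
  clusterIP: None              # this is what makes the service headless
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017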
When we create the service from this configuration file, kubernetes will not assign it a cluster IP address, and we can see that in the output when listing the services: I have the ClusterIP service that I created for the microservice, and a headless service. Note that when we deploy stateful applications in the cluster, like MongoDB for example, we have the normal service, the ClusterIP service, that handles the communication to MongoDB and maybe to other containers inside the pod, and in addition to that service we have a headless service; we always have these two services alongside each other. The ClusterIP service does the usual load balancing, and for use cases where a client needs to communicate with one of those pods directly, like talking to the master directly to perform write commands, or for the pods to talk to each other for data synchronization, the headless service is used.

When we define a service configuration we can specify a type for the service, and the type attribute can have three different values: ClusterIP, which is the default, which is why we don't have to specify it, NodePort, and LoadBalancer. The NodePort type creates a service that is accessible on a static port on each worker node in the cluster. To compare that to our previous example: the ClusterIP service is only accessible within the cluster itself, so no external traffic can directly address the ClusterIP service. The NodePort service, however, makes external traffic accessible on a static, or fixed, port on each worker node, so in this case, instead of going through Ingress, the browser request comes directly to the worker node at the port the service specification defines. The port that the NodePort service type exposes is defined in the nodePort attribute, and note that the nodePort value has a predefined range between 30000 and 32767; you can use one of the values from that range as a nodePort value, and anything outside that range won't be accepted. This means the NodePort service is accessible for external traffic, like browser requests, at the IP address of the worker node and the nodePort defined here. However, just like with ClusterIP, we also have the port of the service, because when you create a NodePort service, a ClusterIP service, to which the NodePort service routes, is automatically created, and as you can see when I list the services, the NodePort service also has a cluster IP address, and for each IP address it also shows the ports where the service is accessible. Also note that the service spans all the worker nodes, so if you have three pod replicas on three different nodes, the service can handle a request coming in on any of the worker nodes and then forward it to one of those pod replicas. Now, this type of service exposure is not very efficient and also not secure, because you're basically opening ports to talk directly to the services on each worker node, so external clients have direct access to the worker nodes. If we gave all our services this NodePort type, we would have a bunch of ports open on the worker nodes that clients from outside could talk to directly, so it's not a very efficient or secure way to handle external access.
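For completeness, a sketch of a NodePort service definition (the label, ports and the 30008 value are illustrative; the nodePort just has to fall into the allowed range):

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 3200           # port of the automatically created ClusterIP service
    targetPort: 3000     # port the container listens on
    nodePort: 30008      # static port opened on every worker node, must be 30000-32767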
As a better alternative there is the LoadBalancer service type, where the service becomes accessible externally through a cloud provider's load balancer functionality. Each cloud provider has its own native load balancer implementation, which is created and used whenever we create a LoadBalancer service type; Google Cloud Platform, AWS, Azure, Linode, OpenStack and so on all offer this functionality. Whenever we create a LoadBalancer service, NodePort and ClusterIP services are created automatically by kubernetes, and the external load balancer of the cloud platform routes the traffic to them. This is an example of how we define a LoadBalancer service configuration: instead of the NodePort type we have LoadBalancer, and in the same way we have the port of the service, which belongs to the ClusterIP, and we have the nodePort, which is the port that opens on the worker node, but it's not directly accessible externally, only through the load balancer itself. So the entry point becomes the load balancer first, and it then directs the traffic to the node port on the worker node and on to the ClusterIP, the internal service; that's how the flow works with the LoadBalancer service. In other words, the LoadBalancer service type is an extension of the NodePort type, which itself is an extension of the ClusterIP type. And again, if I create a LoadBalancer service type and list all the services, you can see the differences in the output as well, where for each service type you see the IP addresses, the type and the ports that the service has opened. I should mention here that in a real kubernetes setup you would probably not use NodePort for external connections; you might use it to test some service very quickly, but not for production use cases. So for example, if you have an application that is accessible through a browser, you would either configure Ingress for such requests, so you'd have internal ClusterIP services that Ingress routes to, or you would have a LoadBalancer service that uses the cloud platform's native load balancer implementation.

Congratulations, you made it to the end! I hope you learned a lot and got some valuable knowledge from this course. If you want to learn about modern DevOps tools, be sure to check out my tutorials on that topic and subscribe to my channel for more content. Also, if you want to stay connected, you can follow me on social media or join the private Facebook group; I would love to see you there. So thank you for watching, and see you in the next video.