Transcript for:
CKA Certification Exam Preparation Summary

A warm welcome to everyone. I extend my heartfelt congratulations to those who have passed the CKA exam, and my gratitude to the students who have diligently completed my playlist. This comprehensive video covers all the essential aspects of the CKA certification exam in a single format. To make complex concepts like network policies, sidecars and RBAC policies easier to understand, we have embedded graphical depictions of the related questions. It is a lengthy video, but I am confident that by watching it thoroughly you will gain the knowledge and skills necessary to ace the CKA exam. So without further ado, let's embark on this journey to CKA mastery.

Hi, I'm Nickel Dev, the instructor of this CKA certification series. In this series you will learn everything you need to know to pass the Certified Kubernetes Administrator exam. I have designed 30 questions and answers for cracking the CKA exam, and by the end of this series you will be able to pass the certification. The Certified Kubernetes Administrator certification is a globally recognized program offered by the Cloud Native Computing Foundation. It is a performance-based exam that tests your knowledge of Kubernetes administration. This is the official website; as you can see, the cost of the exam is $395, but offers appear from time to time, so visit the website frequently to catch the best one. Who is it for? It is clearly mentioned here: Kubernetes administrators, cloud administrators and other IT professionals who manage Kubernetes instances. I am also assuming you have already completed some Kubernetes fundamentals courses or have some hands-on experience, because in this series I will not teach fundamentals; it is not designed for learning Kubernetes basics. If you want to learn Kubernetes first, I highly recommend the Certified Kubernetes Administrator course by Mumshad Mannambeth on Udemy, and you can also refer to the official Kubernetes documentation.

The CKA exam covers a wide range of topics, including storage, troubleshooting, cluster architecture, installation and configuration, networking, security, managing resources and application deployment. You can see here the weightage of each domain. The exam is challenging, but it is not impossible to pass. Learn the basics of Kubernetes first; there are many resources available to help you do this. Once you have a basic understanding, start practicing with the command line. You can use the Killercoda website for learning purposes; it is free for one hour. Go to Killercoda and choose your environment. If you want to play in a plain Kubernetes environment, choose Playgrounds and you will get a cluster with one master and one worker node for one hour. If you want to practice CKA scenarios, choose CKA. I will show you both. First, in Playgrounds, select the Kubernetes version and you get a one-master, one-worker cluster for one hour that you can use for learning. If you take the Udemy course by Mumshad Mannambeth, it also comes with free practice tests. Once you are comfortable with the basics, choose CKA; here you get scenario-based environments. Let's pick one and click Start. You can see the question on the left side and the terminal on the right. The question: there is a deployment in the management namespace, and we need to write the logs of all containers associated with this deployment to /root/logs.log.
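As a quick sketch of where this walkthrough is headed, these are roughly the commands involved; the deployment name collect-data comes from the scenario, and the single redirect assumes the deployment has only one pod:

kubectl -n management get deployments
kubectl -n management logs deploy/collect-data --all-containers > /root/logs.log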
Let's solve it. First we need to know the deployments in the management namespace; we can see one deployment there, named collect-data. Next we need to find which containers run under the deployment; we can use the edit command for that. Here we can see the deployment configuration as a YAML file, and there are two containers in this deployment: one is nginx and the other is httpd. Exit with Ctrl+C. Now fetch the logs of the nginx container from the collect-data deployment; according to the question we have to write these logs to /root/logs.log. Done. Next we do the same for the httpd container. Check the result: yes, it is right. Like this you can attempt all the questions available in this portal once you have finished a basic Kubernetes course, and if you get stuck you can use the Tips and Solution tabs for help. Let's log out.

Next, about the exam. The exam is conducted online and consists of a set of practical tasks that must be completed within a specified time frame, typically two hours. It is a performance-based exam, meaning candidates are assessed on their ability to perform real-world tasks using the command line in a live Kubernetes cluster. You will have around 17 to 20 questions, and you need a 66% mark to pass. Once you enroll for the exam you also get an exam simulator with two practice exams of 25 questions. I recently completed the CKA certification; in my experience, spending two months studying Kubernetes basics and one month on certification preparation is enough to clear it. In this series I will teach you 30 questions and answers.

Without further ado, let's kick-start. The first question is to deploy a pod called nginx-pod with the image nginx on the control plane. Please note it should be scheduled on the control plane instead of the worker node, and the weightage of this task is 3%. It is a very straightforward question. Go to killercoda.com; if you don't have any Kubernetes cluster for learning purposes, don't worry, we can use Killercoda for one hour. Click on Playgrounds, click on Kubernetes version 1.26, log in with one of the authentication methods and click Start. Our environment is up and running. First of all, set an alias for kubectl so we can type just k instead of kubectl; this will save you time in the examination. Yes, it is working. Now we have a cluster with one master and one worker node; let's increase the font size for better visibility.

We can deploy a pod in Kubernetes in two ways. The first is the imperative way, using commands to create resources in the cluster. The second is the declarative approach, where you describe the desired state of your cluster or resources in a YAML file; these manifests specify the configuration and properties of the desired resources such as pods, deployments, services and so on. From an exam point of view we need to know both. This is the imperative command for running a pod in a cluster: kubectl run with the pod name, here nginx-pod, and --image=nginx, which means an nginx container will run inside the pod. Adding --dry-run=client -o yaml means that instead of creating the pod in the cluster, we write its manifest out to a file. Let's inspect the YAML file; this is the manifest we are about to apply to the cluster. From the exam point of view, instead of applying imperative commands directly, consider saving the generated configuration files with the respective question numbers using --dry-run=client -o yaml; it will help you in the exam. As you can see, the pod name is nginx-pod and the container name is nginx.
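A minimal sketch of that flow; the file name 1.yaml is just my own convention for keeping one file per question:

alias k=kubectl                                                   # save typing during the exam
kubectl run nginx-pod --image=nginx --dry-run=client -o yaml > 1.yaml
cat 1.yaml                                                        # review the generated manifest before applying it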
OK, let's create the pod from this YAML file. To create a pod from a manifest we use kubectl apply -f with the file name. The pod is created and running, but you can see it was scheduled onto the worker node: by default Kubernetes schedules pods onto worker nodes unless you specify a node name, and according to the question it should be scheduled on the control plane. Delete the pod and I will show you how to schedule it on the control plane. The pod is deleted; copy the control plane node name and edit the YAML file. To schedule a pod on a specific node we define nodeName under the spec section. Apply the YAML again, and now the pod is scheduled on the master node. The first question is completed.

Now the second question: expose an existing pod called nginx-pod as a service; the service name should be nginx-svc and the pod port is 80. The task weightage is 4%. We created a pod for the first question, so we can treat that as the existing pod. To expose a pod we use kubectl expose pod with the pod name nginx-pod and --name for the service name, nginx-svc. It is important to note that the --port flag in this context refers to the container port within the pod, not a port on the host machine. Check the service: the pod is exposed through a ClusterIP. Confirm whether the service really exposes the pod by accessing it with curl; yes, it is working.

OK, the third question: expose the existing pod nginx-pod as a NodePort service; the service should be accessible through the node port, and the port number should be 30200. The weightage of this question is 6%, so it is a slightly tricky one. You can see nginx-pod is running and we have to expose it through a NodePort. The command is kubectl expose pod with the service name nginx-nodeport-svc and --type=NodePort; again, the --port flag refers to the container port within the pod, not the host port. The service is created. Check it: the service is exposed on a random node port, but according to the question it should be port 30200. To fix that we edit the service with kubectl edit service and the service name. You can see the service is currently on port 31249; replace that with 30200, save and exit, and the change is applied automatically. Now the service is exposed through port 30200. To access it we first need a node IP: copy the node01 IP address and curl it on port 30200. As expected, the service is reachable through the NodePort.
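Putting questions 2 and 3 together, the command sequence looks roughly like this; the service names follow the question wording, and kubectl edit is used to pin the node port (kubectl patch would work as well):

kubectl expose pod nginx-pod --name=nginx-svc --port=80                              # ClusterIP service
kubectl expose pod nginx-pod --name=nginx-nodeport-svc --port=80 --type=NodePort
kubectl edit svc nginx-nodeport-svc                                                  # set spec.ports[0].nodePort to 30200
curl <node01-ip>:30200                                                               # verify via the node IP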
Hi all, welcome to the second part of the Certified Kubernetes Administrator series. This series is designed for people preparing for the CKA certification and covers a selected set of 30 questions and answers. In the previous part we solved three questions; in this video we cover the next three. Without further ado, let's kick-start.

The fourth question: there is an existing deployment, frontend, in the production namespace. You have to scale it down to two replicas and change the image to nginx version 1.25. The weightage of this question is 4%. Find the deployment in the production namespace; we can see a deployment consisting of three pods. We can solve this in two ways: edit the existing deployment, or use imperative commands. First I will show the editing approach. The command is kubectl edit deploy with the deployment name and -n for the namespace. Here we can see the current replicas; edit that, and in the same way change the image name. Save and exit, and the change rolls out automatically. As you can see, the deployment is scaled down to two replicas. We can use kubectl describe for detailed information about a Kubernetes resource; here we can see the scaling events, and at the top the current replicas and images of this deployment. That was the first way. Next, the imperative way; I have reset the terminal. Check the deployment in production once again. First scale the replicas down to two pods with kubectl scale deploy, the deployment name, --replicas=2 and -n with the namespace; the deployment is scaled down to two pods immediately. Next, change the image with kubectl set image deploy, the deployment name, and container-name=image-name. The image is updated; describe the deployment once more to cross-check. Yes, it is updated.

The fifth question: autoscale the existing deployment frontend in the production namespace at 80% pod CPU usage, with a minimum of 3 replicas and a maximum of 5. To solve this we implement an HPA, a HorizontalPodAutoscaler, for this deployment at 80% pod CPU usage with those minimum and maximum replicas. Check the deployments in the production namespace again; the deployment is there. We can solve this with a single imperative command: kubectl -n namespace autoscale deploy, the deployment name, --min=3, --max=5 and --cpu-percent=80. The HorizontalPodAutoscaler is deployed; check and verify. It will scale the pods up to five when pod CPU usage reaches 80%. You can see the pods are scaled up to three, which matches the minimum replica count we set in the HPA.

The sixth question: expose the existing deployment named frontend in the production namespace through a NodePort, and the service name should be frontend-svc. In the previous part we exposed a pod as a NodePort service; this question asks to expose a deployment, which is the only difference, and the weightage is 4%. Check the deployment in the production namespace; we have to expose it. The command is kubectl -n namespace expose deploy, the deployment name, --name=frontend-svc, --port=80 and --type=NodePort. Check the service: it is exposed through a NodePort. Check whether the service actually responds; yes, it is working as expected.
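To recap questions 4 to 6 as commands, roughly (the container name in the set image call is an assumption; check it first with kubectl describe):

kubectl -n production scale deploy frontend --replicas=2
kubectl -n production set image deploy/frontend nginx=nginx:1.25              # container name assumed to be nginx
kubectl -n production autoscale deploy frontend --min=3 --max=5 --cpu-percent=80
kubectl -n production expose deploy frontend --name=frontend-svc --port=80 --type=NodePort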
Sometimes they will ask you to expose the service through a specific node port; in that case you edit the service with kubectl edit, as shown earlier. I will show you that as well: yes, the port has changed. In the next video I will show you how to use the Kubernetes documentation for creating the fundamental Kubernetes resources.

Welcome to CKA series part three. This is a quick video on the Kubernetes documentation. In the CKA exam you can take the help of the official documentation for solving problems. This is the official website, kubernetes.io. On the left side there is a search bar; if you want to know anything about Kubernetes, search there. For example, to find how to create a pod, type "pods" in the search bar and click the first link. You can see a YAML file for creating an nginx pod; take it as a reference, copy it, and create the pod in the cluster with kubectl apply. Yes, the pod is creating. Similarly, to create a deployment, search with the "deployment" keyword and scroll down; there is a sample nginx deployment file that you can use as a reference for any deployment. Let's make a deployment with this YAML, changing the replicas to two, and apply it; the deployment is ready. Sometimes, though, the first link is not the right one. For example, search for "persistent volume": many links are available, and if you choose the first one its YAML will not solve the question, so search for PV again and this time choose the second link. Yes, this is the right one for creating a persistent volume. If you face any question about creating or managing persistent volumes in the exam, you only need to edit this YAML as per the question. Let's create a PV on the cluster; the PV is created and is available for claim. In the same way, a PVC example is available in this document, and a pod that uses a PVC is also on the same page.

One more thing: search for "cheat sheet". The cheat sheet includes commands and examples for tasks such as pod-related commands, service-related commands, ConfigMaps, Secrets, troubleshooting tips and so on. Be familiar with this document, because it is the only accessible website during your exam; if you know it well, you can face every question without fear. If you want to set an alias in your terminal you can use the command shown there; likewise there are commands for listing contexts, creating a busybox pod, and lots of other useful tasks. If you practice for the CKA alongside the documentation, you do not need to memorize all of this. In the upcoming videos we will show how to take help from the documentation for specific questions.

Welcome to CKA certification part four. In this part we face some troubleshooting questions. Question number seven: there is a pod, task-pv-pod, in the default namespace; check the status of the pod and troubleshoot it. You may recreate the pod if you want. You will face these types of questions in the examination as part of troubleshooting. You can see task-pv-pod is in Pending status; to find the reason we describe the pod. We can see the reason: a persistent volume claim referenced by the pod was not found, which means the pod uses a PVC for volume mounting that either is not configured correctly or does not exist.
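A small sketch of the triage commands for a Pending pod like this one (the pod name comes from the question):

kubectl get pod task-pv-pod
kubectl describe pod task-pv-pod      # the Events section shows why scheduling failed
kubectl get pv,pvc                    # check whether the claim the pod references actually exists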
Let's check the persistent volumes first: we can find a PV. Now check the persistent volume claims; there is a claim as well. Then what is the issue? Describe the pod once again. Here is the problem: there is an extra "m" at the end of the claim name in the pod configuration. Fix it by editing the pod configuration, removing the extra "m" from the PVC name, then save and exit. Please note that a pod is not redeployed automatically when you edit its configuration; instead, the edited YAML file is saved to the /tmp folder, so copy that path for recreating the pod. Delete the pod (the question explicitly allows us to recreate it) and recreate it using the new YAML file from /tmp. The pod status has changed, and after a short wait the pod is running.

Welcome to CKA certification series part five. In this video we cover some pod scheduling troubleshooting. The next question: deploy a pod with the following specification. The pod name should be web-pod, the image should be httpd, and it should be scheduled on node01. We have no permission to change anything on the worker node or the master node. This question's weightage is 6%, which means it is not an easy one. Create the pod. The pod is created, but its status is Pending, and we have to find out why. There are many reasons a pod might be Pending: insufficient resources, node unavailability, image pull failures, scheduling constraints and so on. To troubleshoot a Pending pod and get more information about its status, use kubectl describe pod. Here we can see an error related to taints and tolerations, which means the nodes have taints. In Kubernetes, a taint is a way to mark a node so that only pods with a matching toleration can be scheduled onto it. Taints are used to control the scheduling of pods on nodes and can achieve a variety of goals, such as keeping pods off certain nodes or evicting pods from nodes. Taints and tolerations are a powerful way to control scheduling: by using them you ensure that pods are only scheduled onto nodes that meet your specific requirements. With this command we can see the taints of node01. We can either remove the taint or add a toleration to the pod configuration in order to schedule the pod on node01, but the question clearly says never to change anything on the master or worker nodes, so we have to use a toleration in the pod configuration. In this taint we can see a key, a value and an effect; the effect is NoSchedule.
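The toleration we are about to add looks roughly like this fragment of the pod spec; the key and value here are placeholders, so take the real ones from kubectl describe node node01:

spec:
  tolerations:
  - key: "example-key"          # placeholder: use the taint key shown on node01
    operator: "Equal"
    value: "example-value"      # placeholder: use the taint value shown on node01
    effect: "NoSchedule"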
To deploy a pod on this node we need to add a toleration against this taint in our pod configuration. Go to the documentation and search for "taint"; the page shows how to taint a node and which settings to use for a toleration. Delete the pod, then add the toleration under the spec section. We have to take the key from the taint first, so save and exit, copy the key and the value, and paste them into the YAML against the toleration. The rest stays the same (the effect is NoSchedule), so we do not need to touch anything else. Recreate the pod; yes, the pod is creating, and within a few seconds it is scheduled on the node. It is working.

Welcome to CKA certification series part six. In this part we cover a basic question on persistent volumes and persistent volume claims. Without any delay, the question: create a new persistent volume named web-pv. It should have a capacity of 2Gi, the access mode should be ReadWriteOnce, the host path should be /v/data, and no storage class name should be defined. Next, create a new persistent volume claim in the namespace production named web-pvc. It should request 2Gi of storage, the access mode should be ReadWriteOnce, and it should not define a storage class name. The PVC should bind to the PV correctly. Finally, create a new deployment web-deploy in the namespace production which mounts that volume at /tmp/web-data; the pods of that deployment should use the image nginx:1.14.2. The question has three parts: first create a persistent volume, then a persistent volume claim, and finally mount the volume in an nginx deployment.

First, go to the documentation, search for PV, and choose the second link, because that is the correct one for this question. Take its YAML as a reference. According to the question we change the persistent volume name to web-pv, remove the storage class since none is mentioned, change the capacity to 2Gi, and set the host path to /v/data. Apply the YAML; the persistent volume is created, so the first part is done. Check the available namespaces: we need to create the production namespace. Next, the persistent volume claim: take the PVC YAML as a reference and modify it per the question. The claim name should be web-pvc, the namespace production, no storage class, and the storage request 2Gi. Apply it; web-pvc is created, and the second part is complete. Now go to the documentation for deployments, click the first link, and use that YAML as a reference: change the deployment name to web-deploy and add the production namespace. In this deployment we need to mount the volume as per the question, so go to the documentation again; there is a sample YAML for mounting a volume. Copy the volumes part and paste it under the spec, change the claim name to web-pvc, then copy the volumeMounts part and paste it under the container section, changing the mount path to /tmp/web-data. Save, exit and deploy. Oh, an error was found on line 20; we missed mentioning the volume in the spec section. Deploy again; the deployment is successful. Wait a little for all pods to run; yes, the pods are running. Check whether the PVC is bound: yes, it is bound. We solved the question. This was a straightforward one, but in the exam we can expect more complicated scenarios related to PV and PVC; we will cover those later in this series.
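For reference, the three manifests from this question come out roughly like this; the pod labels and replica count are my assumptions, while the names, sizes and paths follow the question:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /v/data            # host path as given in the question
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-pvc
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
  namespace: production
spec:
  replicas: 1                # replica count not specified in the question
  selector:
    matchLabels:
      app: web-deploy        # label assumed
  template:
    metadata:
      labels:
        app: web-deploy
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        volumeMounts:
        - name: web-data
          mountPath: /tmp/web-data
      volumes:
      - name: web-data
        persistentVolumeClaim:
          claimName: web-pvc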
Welcome to CKA certification series part seven. In this video we troubleshoot a scheduling problem. Question number nine: create a Kubernetes pod named my-busybox with the busybox:1.31.1 image. The pod should run a sleep command for 4800 seconds, and verify that the pod is running on node01. That seems like a straightforward question. This time we can use the imperative command: kubectl run with the pod name, --image=busybox:1.31.1, and --command -- sleep 4800. The pod is created and running, but the question asks for it to be scheduled on node01, so check where it landed. You can see the pod is scheduled on the control plane instead of the worker node. There are a few reasons a pod might end up on the master node: the pod may have a toleration, only the master node may be available in the cluster, or the pod may have a node selector that matches the master node. We will check them one by one. Check node availability first: we have two nodes, but if you look closely, node01's status is Ready while scheduling is disabled, which means the node is in maintenance mode. Go to the Kubernetes documentation and search for "cordon". The cordon command marks a node as unschedulable: no new pods will be scheduled onto it, but existing pods are not affected. Once you have cordoned a node, you can use the uncordon command to make it schedulable again. Run the uncordon command; node01 is now out of maintenance mode and can accept pods. You can see our pod is still running on the master node. Please note the pod will not be rescheduled to node01 automatically; we have to do it manually. Delete the existing pod and recreate it; this time the pod is scheduled on node01.

Welcome to CKA certification series part eight. In this part we cover a network policy scenario. The tenth question: you have a Kubernetes cluster that runs a three-tier web application, with a frontend tier running on port 80, an application tier running on port 8080, and a backend tier on port 3306. The security team has mandated that the backend tier should only be accessible from the application tier, which means we need to implement a network policy on the backend pod. Please look at this diagram: normally any container can communicate with any other in a Kubernetes environment unless network policies are applied, so both the frontend pod and the application pod can reach the backend pod without restriction. To allow ingress only from the application pod, we apply an ingress network policy on the backend. This policy will allow ingress connections only from the application pods; everything else, not only the frontend pod but all other pods, will be blocked. I will show you this. Before we continue, I have a small request: I have noticed that many of you are enjoying the videos but not everyone is subscribed to the channel. If you are enjoying the content and want to support us, please consider hitting the subscribe button; it keeps you updated with the latest videos and helps us continue creating valuable content. Now, let's get back to the video.
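Here is roughly where question 10 is headed, the policy we will build; the tier labels are the ones used in this scenario, and the policy name is my own:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-app      # name assumed
spec:
  podSelector:
    matchLabels:
      tier: backend            # applies to the backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: application    # only the application tier may connect
    ports:
    - protocol: TCP
      port: 3306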
We have three pods in the cluster: application, backend and frontend. Now let's look in more detail. First we check whether the application pod and the frontend pod currently have connectivity to the backend pod, using telnet from the frontend pod. You can see telnet is not available in this pod, so we install it first: update the container's package index, then install telnet. Done; try the telnet to the backend pod from the frontend pod again, and the connection goes through. Do the same from the application pod: update the package index, install telnet, and we can see connectivity to the backend pod from the application pod as well.

To solve this question we take the help of the documentation: search for "network policy". Network policy is a big topic; this is the sample YAML for implementing a policy, and if you scroll down you can see terms like ingress and egress. Ingress is inbound traffic and egress is outbound. There is a sample for denying all ingress, another for allowing all ingress, and many more. Coming back to our question, we want to apply an ingress policy on the backend pod. Before that, note one thing: each pod has its own labels, and network policies work based on these labels. Copy the sample YAML; we do not need egress, so copy only the relevant part into a file. Give your network policy a meaningful name, and set the matchLabels to tier: backend, because this policy applies to the backend pods. We do not need egress, the ipBlock, or the namespaceSelector, so remove them; under ingress we only need the podSelector. Edit its matchLabels to tier: application and change the port to 3306. Now we apply this ingress network policy on the backend pods, which means that after it is applied, only ingress to port 3306 from the application pods is allowed and everything else is blocked. Apply the YAML; the policy is applied. Check connectivity now: from the application pod the connection goes through, and from the frontend pod the connection is blocked by our policy. It is working. Describe the policy for more detail: it applies to pods labeled tier=backend and allows ingress to port 3306 from pods labeled tier=application. We have completed this scenario, and next time we will cover even more complicated network policy scenarios.

Welcome to CKA certification series part nine. In this video we cover an intermediate-level network policy scenario. The 11th question: you have a Kubernetes cluster running pods in multiple namespaces. The security team has mandated that the db pods in the project-a namespace be accessible only from the service pods running in the project-b namespace. This is a slightly difficult question, so I will explain it with a diagram. As per the question you may have more than two namespaces in the cluster; in this diagram there are three, project-a, project-b and project-c. By default, all pods in all namespaces can communicate with each other. As per the question, we have to allow ingress to the db pods of project-a only from the service pods of project-b, and block the other pods of project-b as well as all other namespaces.
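The policy we end up with looks roughly like this; it assumes the source namespace has been labeled first (for example kubectl label namespace project-b name=project-b) and that the pod labels are app=db and app=service. In the cluster you would read the real labels with kubectl get pods --show-labels:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-service       # name assumed
  namespace: project-a
spec:
  podSelector:
    matchLabels:
      app: db                  # label of the db pod, assumed
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: project-b      # namespace label we added
      podSelector:             # no leading dash, so this is ANDed with the namespaceSelector
        matchLabels:
          app: service         # label of the service pod, assumed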
Let's go to the cluster and check the namespaces first; we can see three namespaces. Next, find the pods in each namespace: project-a has one db pod, project-b has two pods, service and web, and project-c has one application pod. This is the IP address of the db pod; first check connectivity using the ping command. The service pod can communicate with the db pod, the web pod also has connectivity to the db pod, and so does the application pod. According to the question, only the connection from the service pod of project-b may be allowed, and all the other pods and namespaces need to be blocked. We need the documentation to solve this: go to the network policies page and copy the sample YAML. Before editing, we need to add some labels to the namespaces, because to select a namespace in a network policy we have to label it first. We add one label to the project-a namespace and do the same for project-b. Next, find the labels of the pods in project-a and project-b; note all of these labels, because the network policy works based on the labels on the pods.

Now edit the YAML file. First set the namespace: it should be project-a, because we apply this policy in the project-a namespace. The matchLabels under podSelector should be the label of the db pod. We do not need egress and we do not need the ipBlock. The namespaceSelector matchLabels should be the label of project-b, because we only want ingress from project-b, and the podSelector should be the label of the service pod, because we only want the connection from the service pod to the db pod. These two conditions can combine in two ways, AND and OR. As written in the sample, the selectors are in an OR condition, which means the policy allows the connection if either condition is true. We remove the dash in front of the podSelector to make it an AND, so the policy only allows the connection when both conditions are true. Save, exit and apply the policy. Check connectivity now: from the service pod the connection is there; from the web pod, connectivity is blocked by the network policy; from the application pod it is blocked as well. We have successfully completed this question.

I will show you one more thing: what happens if we do not remove the dash from the YAML file. Delete the existing policy; now I am going to show how the policy behaves with the dash added. It becomes an OR condition: if either condition is true, ingress is allowed. Apply it again. The application pod is blocked, the service pod is allowed, and the web pod is also allowed, because the policy checks whether the traffic comes from the namespace project-b or from a service-labeled pod, and either one being true is enough. In this case the web pod is in namespace project-b, so the policy allows the connection.
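Side by side, the two forms of the ingress rule differ only by that dash (label values assumed, as above):

# ANDed (one peer): traffic must come from namespace project-b AND from a pod labeled app=service
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        name: project-b
    podSelector:
      matchLabels:
        app: service

# ORed (two peers): any pod in namespace project-b, OR any app=service pod in the policy's own namespace
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        name: project-b
  - podSelector:
      matchLabels:
        app: service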
Welcome to CKA certification series part 10. In this video we cover a sidecar scenario. The 12th question: there is a pod named multipod running in the cluster that is logging to a volume. You need to insert a sidecar container into the pod that will also read the logs from that volume using the tail command. The sidecar specifications: the image should be busybox version 1.28, the container name should be sidecar, and it should mount the volume at /var/busybox/log. I will explain the question with this diagram: we have a container mounted to storage, so the container's logs are stored on that volume. We need to add another container to the pod and mount the same volume into it, so the sidecar container can access the same log files as the main container. I will show you how to achieve this.

You can see a pod running in our cluster; curl it and you can see the pod is listening. First we dump the pod configuration to a file. I am taking a backup of this YAML; in the exam you should take a backup as a precaution whenever you deal with existing running pods. Open the YAML file: this is the runtime configuration of the pod. You cannot add a container to a running pod with an in-place update, so the plan is to take the existing pod settings, add the new sidecar configuration to the YAML, destroy the existing pod and recreate it from the new file. There is a lot of configuration in this YAML, but we only need the pod name in the metadata, the container image, the container name, and the volume and volumeMount parts; everything else can be removed. That much information is enough to recreate the pod. Save and exit; this is now the actual YAML of the existing pod, and we are going to add a sidecar to it. Copy the existing container section, change the container name to sidecar, change the image to busybox:1.28, drop the port, and change the volumeMount path as per the question. We also need to run the command that reads the logs inside the container; we can copy it from the question. As you can see, the main container and the sidecar share the same volume, so the log files written to it are accessible from the sidecar as well. Delete the existing pod first, then apply the new YAML. Yes, it is running, and you can see two containers in the new pod. Curl it to confirm the pod is serving; it is listening. Check the logs: we are getting the main container's logs through the sidecar container. We can also see some old log lines, because the old pod mounted a host volume, so those logs persisted. Cross-check once more; yes, we can see the new log lines.
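The piece that gets added to the pod spec looks roughly like this; the log file name and the volume name are assumptions, and the volume name must match whatever the existing pod already defines under volumes:

# appended to spec.containers of the recreated pod
- name: sidecar
  image: busybox:1.28
  command: ["sh", "-c", "tail -n+1 -f /var/busybox/log/app.log"]   # exact file name comes from the question
  volumeMounts:
  - name: logs                    # must match the existing volume's name in the pod spec
    mountPath: /var/busybox/log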
Welcome to CKA certification series part 11. In this video we cover a CronJob question. The 13th question: create a CronJob that runs every 2 minutes with the busybox image. The job name should be my-job, and it should print the current date and time to the console. After running the job, save the logs of any one of its pods to /root/logs.txt. Go to the documentation and search for CronJobs; take the sample YAML as a reference. Copy it to a file and change the name to my-job. Five stars in the schedule means the job runs every minute; for a 2-minute interval we add a slash two to the minute field. The container name should be my-job, and the command should be date; we can remove the echo command. Apply the YAML; the CronJob is created and should fire every 2 minutes. Check it: a pod appears in creating status and, after the job completes, its status changes to Completed, so the CronJob has triggered once; after 2 minutes it will trigger again. Let's wait for that; I am fast-forwarding the video to save time. After 2 minutes the CronJob created one more pod, and so far it has triggered three times; this creation-and-completion cycle continues as long as the CronJob exists. You can see three pods created at 2-minute intervals. Check the logs of any one container: you can see the output of the date command, which means it executed the command we passed through the CronJob and then moved to the Completed state. Create the logs.txt file under the root directory and, as per the question, write the log into it. Done; we have added the logs to /root/logs.txt.

Welcome to CKA certification series part 12. In this video we cover two questions, one simple and one intermediate. Without further ado, let's kick-start. The 14th question: find the schedulable nodes in the cluster and save the name and count of those nodes into a file. First, find the available nodes: two nodes are available and both are in Ready state. To know which are schedulable we check the taints. Yes, the control plane has a NoSchedule taint, so the answer is node01; node01 is the schedulable node in this cluster. As per the question we save this information to /root/nodes.txt, so fill in the answers: the node name is node01 and the schedulable node count is one. Save and exit; that is the end of question 14.

The 15th question: deploy a pod on node01 as per the following specification. The pod name should be web-pod, the container name should be web, and the image should be nginx. This simple-looking question has a weightage of 6%, which means it is an intermediate question. Create the web pod, changing the container name to web, and apply the YAML. The pod is in Pending status. Still pending, so check the available nodes: node01 is in NotReady status, which means node01 has some issue. It may be related to a service such as the kubelet or the container runtime daemon. SSH into node01; now we are on the node machine. Check the status of the kubelet service: it is not running. In the examination, if you face this question, first try to restart the service; only if that does not work do you need to check the configuration. Try to start the service and check the status again; it has not started, so it is definitely a wrong configuration setting. This is the path of the kubelet configuration; in the examination they may put wrong paths in this file, so verify whether the paths are correct. Taking the first path, I can see it does not exist: the kubelet binary is actually located in the bin folder, not in the local folder. Edit the configuration file, remove "local" from the path, save and exit, and start the service again. We get a warning telling us to reload the daemon first, so run the daemon-reload command, then start the kubelet and check its status. Yes, now it is running, and the node can schedule pods again. Exit from node01; we are back on the control plane.
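The node-level part of that fix, in rough command form (the exact config file containing the wrong path differs between setups, so this is only a sketch of the flow):

ssh node01
systemctl status kubelet            # service is down or failing
journalctl -u kubelet | tail        # read the real error, for example a wrong binary path
# correct the path in the kubelet service configuration, then:
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet            # should be active (running) now
exit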
Let's delete our pod. Our nodes are in Ready state now. Actually we did not need to delete the pod; it would have been scheduled on node01 as soon as the node became available, but since I deleted it, let's redeploy. The pod is running and is now scheduled on node01.

Welcome to CKA certification series part 13. In this video we cover a cluster troubleshooting scenario. The 16th question: join the node01 worker node to the cluster, and then deploy a pod on node01; the pod name should be web and the image should be nginx. This question's weightage is 6%. Without further ado, go to the cluster and check the available nodes: we have only the master node, and according to the question we have to join node01 to it. Go to the documentation and search for "token". To add a node to a cluster we run a kubeadm join command from the node machine, and the token for it comes from the master node. On this page you can see the token create command; running it on the master node prints the full join command. Copy it and log in to the master node. In the examination you will be on a machine outside the cluster, so you have to SSH to the master node to run this. Now I am on the master node; running the command gives us a join token and command. To join the node to the cluster we just run that command on the node, so SSH to node01 and run it. We got an error; check whether the node has joined. It has not, so what is the problem? The kubelet is not running, so restart it; now it is running. Join again, then exit from node01. Yes, our node is added to the cluster. Create the pod; our pod is running.

Welcome to CKA certification series part 14. In this video we cover one network-related question. The 17th question: there was a security incident in which an intruder was able to access the whole cluster from a single hacked web pod. To prevent this, create a network policy in the default namespace that allows the web pods to connect only to service pods on port 8080; after implementation, connections from web pods to application pods on port 80 should also be blocked. I will explain this with a diagram: we have a cluster where, by default, all pods can communicate with each other, but our web pod has been hacked, so we need to block its connections to the application and db pods and only allow the connection to the service pod on port 8080. To do that we apply an egress policy to the web pod, based on labels and the port. Go to the cluster and list the pods: we have web, service, db and application pods. I will show the existing connectivity using the telnet command, checking from the web pod to all the others. You can see connectivity to port 8080 of the service pod, connectivity to the db pod, and in fact the web pod can reach every pod in the cluster. We have to restrict this with a network policy: we will create and apply an egress policy on the web pod allowing only the service pod on port 8080. To do that, we need to find the labels of the pods.
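The egress policy we are about to build comes out roughly like this; the app label values are assumptions (read the real ones with kubectl get pods --show-labels), and as we will see in a moment the label alone is not enough, which is why the port matters. Note also that a strict egress policy like this can block DNS lookups unless you add a rule for them:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-service-egress    # name assumed
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web                      # label of the web pod, assumed
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: service              # label of the service pod, assumed
    ports:
    - protocol: TCP
      port: 8080                    # the port restriction keeps the application pod blocked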
Looking at the labels closely, you can find one thing: the service pods and the application pods have the same labels. So if you make a policy toward the service pods based only on labels, it will allow connections to both; we have to consider the label and the port together. Go to the network policy documentation and copy the sample into a YAML file. The matchLabels under podSelector should be the web pod's label, because we are applying the policy to the web pod. We do not need ingress, so delete the ingress configuration. The port should be 8080, and we add a podSelector under egress, copied from the documentation, whose matchLabels should be the service pod's label. We do not need the ipBlock. Give the network policy a name, save, exit and apply it. The policy is applied; check connectivity now. From the web pod to the service pod the connection is there, to the db pod it is blocked by our policy, and to the application pod it is blocked as well. We have successfully completed this network policy problem. Network policies are a little difficult to understand, so practice them well before the examination; I will add a few more network policy scenarios to this series.

Welcome to CKA certification series part 15. In this video we cover a question on roles and role bindings. Without further delay, let's kick-start. The 18th question: create a new service account, gitops, in the namespace project-1. Create a role and a role binding, named gitops-role and gitops-rolebinding respectively; these should allow the new service account only to create Secrets and ConfigMaps in that namespace. I will explain the solution with a diagram: we have a namespace project-1; we create a service account called gitops, then a role that only allows creating Secrets and ConfigMaps in project-1, and finally a role binding that binds the service account to the role, so that using the service account we can create only Secrets and ConfigMaps in the project-1 namespace. List the available namespaces; project-1 is there. Create the service account first; now it exists in project-1. Next, create the role: take the help output to find the verb and resource flags and use the sample as a reference. The verb should be create and the resources should be secrets and configmaps. The role is created; describe it in more detail. Perfect, with this role we can create ConfigMaps and Secrets. Now we have the service account and the role; next we create the role binding to connect them. Again take the help text as a reference: the role should be gitops-role, the namespace project-1, and the service account name gitops. The role binding is created. Now check whether the service account can create pods in project-1: the answer is no, because the gitops service account does not have permission to create pods there. Try ConfigMaps: yes. Try Secrets: yes. Try Deployments: no. So our role and role binding are correct; the gitops service account has permission only to create Secrets and ConfigMaps in the project-1 namespace.
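In imperative form, question 18 boils down to roughly these commands (names as given in the question; the auth can-i checks at the end mirror the verification above):

kubectl -n project-1 create serviceaccount gitops
kubectl -n project-1 create role gitops-role --verb=create --resource=secrets,configmaps
kubectl -n project-1 create rolebinding gitops-rolebinding --role=gitops-role --serviceaccount=project-1:gitops
kubectl -n project-1 auth can-i create secrets --as=system:serviceaccount:project-1:gitops     # yes
kubectl -n project-1 auth can-i create pods    --as=system:serviceaccount:project-1:gitops     # no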
Welcome to CKA certification series part 16. In this video we cover a question related to the Ingress controller. Without further delay, let's kick-start. This time we take the question from Killercoda: choose the Ingress scenario, and the question appears on the left side. The first part: there are existing deployments in the world namespace, and we have to expose them through port 80 as ClusterIP services. Check the deployments in the world namespace: there are two, asia and europe, and as per the question we expose both. Done; both deployments are exposed through ClusterIP services. You get help and tips on the left panel if you are stuck; let's check. So far so good. Now the next stage of the question: an nginx Ingress controller is installed, and we need to create an Ingress resource for the domain name world.universe.mine. It should resolve through a hosts entry, and after creating the Ingress resource we should be able to reach our europe service through the Ingress by calling world.universe.mine:30008/europe, and the asia service via /asia. Check the Ingress controller: there is an Ingress controller service listening on NodePort 30008. Check the hosts entry: the entry for world.universe.mine is already there and points to the local IP address. Go to the documentation and search for Ingress; take the sample YAML as a reference. First we need to find the Ingress class, so save and exit: the Ingress class name is nginx, and we mention that ingressClassName in the manifest. Next we specify the domain name, so search the documentation for "host", copy that line into the YAML and paste our domain name there. The path should be /europe and the service should be europe: once this YAML is applied, any call to world.universe.mine:30008/europe is routed through the Ingress controller and reaches our europe deployment on port 80. We can reuse the same block for another path: the path should be /asia and the service name asia. According to the question, the Ingress resource should route to the asia service whenever we call world.universe.mine:30008/asia.
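The finished Ingress resource looks roughly like this (pathType and the resource name are my choices; the services, namespace, class and host come from the scenario):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: world            # name assumed
  namespace: world
spec:
  ingressClassName: nginx
  rules:
  - host: world.universe.mine
    http:
      paths:
      - path: /europe
        pathType: Prefix
        backend:
          service:
            name: europe
            port:
              number: 80
      - path: /asia
        pathType: Prefix
        backend:
          service:
            name: asia
            port:
              number: 80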
Similarly, a call for /europe should be routed to the europe service. In my first attempt I forgot to mention the namespace; Ingress resources are namespaced, so we add our namespace and apply. The Ingress is deployed. Curl it: our europe service is answering, and asia is answering too, so the task is completed. If you want to know more about this question you can read the explanation section. If you want to see the ports of the Ingress controller's NodePort service, you can use this command: cluster port 80 is mapped to NodePort 30008, which means we can access the service through the node port. All the details and tips are listed there. This is a sure question in the exam, so I recommend everyone practice it in the Killercoda environment.

Welcome to CKA certification series part 17. In this video we cover a DaemonSet question. The 20th question: use the namespace project-1 for the following. Create a DaemonSet named daemon-imp with the image httpd:alpine and the labels id=daemon-imp plus the given UUID label. The pods it creates should request 20 millicores of CPU and 20 megabytes of memory, and the pods of the DaemonSet should run on all nodes, control planes included. Check the available nodes: we have two, one master and one worker, and we are going to create a DaemonSet for this cluster. A DaemonSet is a workload object that ensures a specified pod runs on every node in the cluster; it is particularly useful for monitoring agents, log collectors and other system-level utilities. Go to the documentation, search for DaemonSet, copy the sample YAML up to the volumeMounts into a file, and modify it per the question: the name should be daemon-imp, the namespace project-1, the labels id=daemon-imp plus the UUID label (just replace the equals sign with a colon), and remove the other labels; mention the same labels under matchLabels. The image should be httpd:alpine; the container name is not mentioned in the question, so we can pick any name. We do not need the limits; the CPU request should be 20 millicores and the memory 20 megabytes. The project-1 namespace is not available, so create it first, then apply the YAML. The DaemonSet is created; list it and you can see two pods deployed in this cluster. Looking in detail, one pod is on the worker node and the other on the master node. We have successfully completed the task.

Welcome to CKA certification series part 18. In this video we cover etcd backup and restore. We can complete this task in three stages: stage one is taking a snapshot of etcd, stage two is restoring that snapshot to a particular location, and stage three is changing the volume mount of the etcd pod to the restored location. We have two pods in this cluster; we are going to create a snapshot of the cluster, then delete these pods, then restore the snapshot, and once the restoration is complete we should get these pods back. Here we go: go to the documentation, search for "etcd snapshot", click the first link, scroll down and copy the commands into a notepad. We need to find the certificate paths, so go to the manifests folder and open the etcd YAML file; the certificate paths are there. We need the server certificate, the CA certificate and the server key for taking the backup. Let's copy them one by one.
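The snapshot command ends up looking roughly like this; the certificate paths shown are the usual kubeadm locations, but take the exact ones from /etc/kubernetes/manifests/etcd.yaml as described above:

ETCDCTL_API=3 etcdctl snapshot save /root/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

ETCDCTL_API=3 etcdctl --write-out=table snapshot status /root/etcd-backup.db   # verify the snapshot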
We have to mention the location where the backup should be saved; I am entering /root, and the snapshot name is etcd-backup.db. Copy the command and run it in the cluster; the snapshot is saved under the root directory. Verify the status of the snapshot with the documented command. Perfect, the first stage is completed. The next stage is restoring this snapshot to a location. Before that, I am going to delete the existing pods; now we have no pods, so after restoring we should get them back. Restore the snapshot: take the documented commands as a reference, modify the existing command by removing "save" and replacing it with "restore", and add the restoration location with the data directory flag; the snapshot will be restored into that data directory. I am providing /var/lib/etcd-backup. Now restore the DB snapshot file into that location; yes, the snapshot is successfully restored to the /var/lib/etcd-backup folder, and we can see the restored directory. The etcd directory is the current data directory, and in the next stage we change it to our backup folder in the etcd YAML file. Go to the manifests folder: the volume is mounted to the etcd directory, and we need to change that to the new directory. Since etcd is a static pod, the kubelet recreates it as soon as we change the manifest. Save and exit; now we have pointed etcd at the newly restored folder. It may take a few minutes for the new etcd pod to come up. Yes, it is restored, and the pods are running again. We successfully completed this task; in the next part we cover a CKA examination question on etcd backup and restore.

Welcome to CKA series part 19. In this video we cover a second question on etcd backup and restore. The question: create a snapshot of etcd and save it to a particular location with the backup name mentioned in the question; you can use the given certificates for that. After taking the backup, you have to restore an older backup, stored in another location, into the etcd-backup folder. Please note you have to restore the old backup, not the latest one. Once restored, you have to change the etcd pod's volume mount path to the new path. The weightage of this question is 12%. Without further delay, go to the cluster. First check the certificates mentioned in the question: under the root directory we have two certificates and a server key for taking the snapshot. Go to the documentation, search for "etcd snapshot", choose the first link, scroll down and copy the command to a notepad. According to the question we save the snapshot under the /root/backup directory with the name etcd-backup-new, and we provide the paths of the certificates and the key. Copy the command and run it against the cluster; done, the snapshot is created. Next, verify the snapshot: copy the status command from the documentation, and yes, our snapshot is fine. Next we restore an old backup; as per the question we restore the old backup instead of the latest one. This file is the new backup and we need the old one. Go to the documentation; we can take the same command as a reference. We only need a small change to convert it into a restore: remove "save", replace it with "restore", point it at the old snapshot, and add the restore destination directory.
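In rough command form, the restore stage looks like this; the path of the old snapshot is whatever the question gives you (shown here as a placeholder), and the data directory follows the question:

ETCDCTL_API=3 etcdctl snapshot restore /path/to/old-snapshot.db \
  --data-dir=/var/lib/etcd-backup

# then point the etcd static pod at the restored data:
# edit /etc/kubernetes/manifests/etcd.yaml and change the hostPath of the etcd data volume
# to /var/lib/etcd-backup; the kubelet recreates the static pod on its own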
As per the question it should be the etcd-backup directory under /var/lib, so just copy the data directory flag and add it to the command. Perfect: if we run this command, our old backup will be restored to the etcd-backup directory. Okay, before that we need to check the current pods in our cluster. Currently we have one pod; we need to check the pods again after the restoration, so I'm adding one more pod. Actually you don't need to do this in the examination, I am doing it only to confirm whether our snapshot is restored or not. Okay, now we have two pods. Let's apply the restore command. Our snapshot is successfully restored to the etcd-backup directory. Okay, let's go to the location: this is where the old snapshot was restored. We can also see one etcd directory here, which is the current volume mount point of the etcd pod; we have to change that location to our etcd-backup directory. So let's go to the manifests folder. The etcd pod is a static pod, so we have to edit the etcd yaml file. This is the path of the volume mounted into the etcd pod; we have to change this path to the etcd-backup directory. Let's save and exit. Since it is a static pod, the kubelet service will recreate the pod once the file is updated. Let's check the available pods now; it may take a few minutes. In the examination you don't have to test this: once you have restored etcd you can go to the next question. This is a high weightage question in the CKA examination, so you have to learn it very well; that's why we covered it in two sections. Yes, it is restored.
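For reference, the change in the static pod manifest is only in the hostPath volume. Assuming the default kubeadm layout of /etc/kubernetes/manifests/etcd.yaml, the relevant part looks roughly like this:

volumes:
- hostPath:
    path: /var/lib/etcd-backup    # was /var/lib/etcd; point it at the restore directory
    type: DirectoryOrCreate
  name: etcd-data

As soon as the file is saved, the kubelet notices the change and recreates the etcd static pod using the restored data directory.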
Welcome to the CKA certification series part 20. In this video we will cover some debugging questions. The 22nd question is: there is a pod named multi-container-pod running in the cluster; take the container logs and the container ID of the c2 container and save them into the below mentioned locations, then restart the c2 container and write the cluster events to the /root/event.log file. Without further delay let's kick-start. Let's list the pods: we can see a multi-container pod running in the cluster with two containers. In order to see the container names we have to edit the pod. Okay, the first container in the pod is an nginx container named c1 and the second container is a busybox container named c2. Let's exit. Let's view the logs of the pod. You can see the pod logs, but these are the nginx container's logs, because if you do not mention a container name it shows the logs of the first container. So let's specify the container name. Okay, these are the second container's logs; let's save them to the /root/log.txt file. Done. Next we need to find the second container's ID. First we need to find the node name. Okay, our pod is scheduled on node01, so let's log into node01 and list the containers. Please note the docker command will not work here, so use the crictl command instead. Okay, we can see all the containers running on node01. Take the c2 container ID and paste it into the /root/id.txt file. Please note we are now on node01, so let's log out and go back to the master node. Done. Next we need to restart the c2 container and capture the cluster events, so go to the documentation, search for the cheat sheet and then search for events. This is the command for showing cluster events sorted by timestamp; just copy it and run it on the cluster. As per the question we need to save these events after restarting the c2 container, so let's go to node01 again and stop and remove the c2 container. Okay, the container is recreated. Let's check the events now: we can see the container creation event here. Let's save these events to the /root/event.log file. Yes, we have successfully completed this question. Okay, sometimes they will ask for the events of a pod instead of the whole cluster, so I will show that too: we get the pod name, and by using this command we can get the event logs of that pod. Now we are seeing the event logs of the multi-container pod. Sometimes they will ask you to save the command that was used for showing the logs; in that case we need to save the command itself to a specific file, so just copy the command and save it to the appropriate file. For example, here I'm saving it to /root/command.txt.
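A rough sketch of the commands used in this task; the pod, container, node and file names are the ones read out in the question, so treat them as assumptions:

kubectl logs multi-container-pod -c c2 > /root/log.txt                        # logs of the c2 container only
kubectl get pod multi-container-pod -o jsonpath='{.spec.nodeName}'            # find which node it runs on
ssh node01
crictl ps | grep c2                          # docker is not available; copy the CONTAINER ID into /root/id.txt
crictl stop <container-id> && crictl rm <container-id>                        # the kubelet recreates it, i.e. a restart
exit
kubectl get events --sort-by=.metadata.creationTimestamp > /root/event.log
kubectl get events --field-selector involvedObject.name=multi-container-pod   # events of just one pod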
In the exam you can expect a question about saving the command to a file instead of the results, so you have to read the question carefully. Okay, we have successfully completed the whole task. Welcome to the CKA certification series part 21. In this video we will cover a question related to pod priority. Without further delay let's kick-start. Our first task is to delete the highest priority pod in the management namespace. Okay, let's list the pods first: we can see two pods running in this namespace, and we need to find their priorities. We can find them by editing the pods. Let's edit the first pod: the priority of the runner pod is 20 million and its priority class is level two. Let's exit without saving. Next let's check the priority of the other pod: the second pod's priority is 30 million and its priority class name is level three, which means the second pod has a higher priority than the first. As per the question we need to delete the highest priority pod, so we can delete the second pod. Before that I will show you the priority classes available in this cluster. Okay, we can see four priority classes; we can add new ones if we want, and I will show that at the end of this video. If we click on the solution you can see a command. This is the proper way, because if the cluster has more than two pods it is not easy to edit them one by one, so we can use this command instead: when you run it, it fetches the yaml representation of the pods in the management namespace and then searches for the word priority in that yaml output; when it finds a match it displays the 20 lines before the match, giving context around where priority appears. We can see the second pod's priority is 30 million with priority class level three, and likewise the first pod's priority is 20 million with class level two. Okay, let's delete the second pod and check. Yes, our first task was successful. The second question is about creating a pod with a priority. The question is: in namespace lion there is one existing pod which requests 1Gi of memory; that pod has a specific priority because of its priority class. Create a new pod named important with the image nginx in the same namespace. It should request 1Gi of memory. Assign a higher priority to the new pod so it is scheduled instead of the existing one. Please note both pods won't fit in the cluster, which means the cluster has no more resources for new pods, so when we deploy a new pod with a higher priority, the existing pod will be removed from the cluster due to its lower priority. Okay, let's check the existing pod's priority first; only then can we schedule the new pod with a higher priority. One pod is running in this namespace; let's check its priority. Okay, this pod requests 1Gi of memory, its priority is 20 million and its class is level two. Next we need to create a pod named important that also requests 1Gi, and we need to assign it a higher priority than the existing pod; once it is scheduled, the old pod will be removed. Okay, let's create the pod. Let's list the existing priority classes: the existing pod has level two priority, so we need to assign at least level three. Let's edit the yaml file, add the priority class and confirm it. Okay, we added the priority. Next we need to add the resource request; go to the documentation. We don't need a CPU value; the memory request should be 1Gi. Seems perfect.
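Just as a reference before we apply it, the pod manifest we are building should end up looking roughly like this; the namespace, image and memory request come from the question, while the priority class name level-three is simply how it is read out here, so treat the exact names as assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: important
  namespace: lion
spec:
  priorityClassName: level-three      # must outrank the existing pod's priority class
  containers:
  - name: important
    image: nginx
    resources:
      requests:
        memory: "1Gi"

The grep trick mentioned earlier is simply kubectl -n management get pods -o yaml | grep -B 20 priority, which prints the 20 lines above every match so you can see each pod's priority and priorityClassName in one pass.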
Let's apply. The pod is created. We will have only one pod in the cluster if everything is okay, so let's check. Oh, something went wrong: both pods are running, so there is definitely an issue somewhere. Okay, let's debug: our new pod has the priority, but the resource request is missing, so I think there is an issue in the yaml file. Okay, we can take the help here, and this is the issue: there are two resources declarations in our yaml file, so we can remove one. Let's delete the pod first and apply again. Yes, it is working as expected: now you can see the old pod is terminating and our pod is in pending status; once the termination is over, our pod will be scheduled to the cluster. This cluster is actually running out of resources, which is why it removes the low priority pod when a high priority pod is being deployed. Let's check. Congratulations, we have successfully completed this question: the old pod is terminated. Let's check again; yes, our pod is running. You can expect this type of question in the exam. Next I will show you how to create a new priority class named level four, and we will deploy a new pod with that priority class. Go to the documentation, search for priority class and take the first sample; we can assign 40 million. Let's apply the yaml. We made a mistake: we have to change the name to level four, so just delete and recreate it. Yes, our priority class level four is deployed. Okay, let's deploy a pod with this priority class; I'm taking a copy of the same yaml file and the new pod name will be important-too. Okay, if we deploy this pod then our existing pod will be replaced. Let's try. Yes, it is working, our pod is now running. Welcome to the CKA certification series part 22. In this video we will show you how to add an existing pod to a new replica set. The 24th question is: create a replica set with the below mentioned specifications. The replica set name should be web-app, the image should be nginx and replicas should equal three. There is already a pod running in our cluster named web-frontend; please make sure the total number of pods running in the cluster is not more than three, which means we have to bring the existing pod under our replica set. Without further ado let's kick-start. Okay, let's check the existing pod first: we can see one pod running in this cluster. In order to add this pod under our new replica set, we first need to know its labels, so let's edit the pod. Okay, we can see one label on this pod, so let's copy it for making our replica set, and exit without saving. Okay, now create the deployment with a dry run. Why are we creating a deployment yaml file for the replica set? Because we can easily create the replica set yaml by editing the deployment file: we only need to change the kind to ReplicaSet, that's enough. We need to add the same label as the existing pod here, tier=frontend, the kind should be ReplicaSet, we don't want the strategy field, and the replicas should be three. Okay, our yaml file is ready; let's save and exit, and apply the yaml file. Okay, let's list the pods: we can see two more pods have been added to the cluster. Let's list the replica set: yes, we can see three pods running under our new replica set, which means we have successfully added the web-frontend pod to it. Let's describe the replica set: we can see two pods were created by this replica set, but there are three pods under it, which means our existing pod has been adopted by the replica set. If we delete this replica set then all of these pods will be removed; I am going to demonstrate that, but in the examination you don't need to delete these pods after creation. Okay, let's delete the replica set. Yes, all pods are deleted: we can see that not only the pods created by the replica set but also our existing pod have been deleted.
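A minimal sketch of that replica set, assuming the label on the existing pod is tier: frontend as read out in the walkthrough; generating a deployment skeleton with a dry run and then editing it is exactly the approach used above:

kubectl create deployment web-app --image=nginx --replicas=3 --dry-run=client -o yaml > rs.yaml

apiVersion: apps/v1
kind: ReplicaSet          # changed from Deployment; remove the strategy field
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend      # must match the existing pod's label so it gets adopted
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx

Because the selector matches the already running pod, the replica set only creates two new pods to reach the count of three.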
Welcome to the CKA certification series part 23. In this video we will cover a question related to ConfigMaps. This time we will take the question from killercoda; choose this scenario. Why are we taking a question from killercoda this time? Because you will face the same question in the examination. The first task is to create two ConfigMaps: create a ConfigMap named trauerweide with the content tree=trauerweide; that is the first ConfigMap. After that we need to create a ConfigMap from a yaml file that is stored under /root. Let's create the first ConfigMap; we can take the help and use this as a reference. Okay, the first ConfigMap is created; let's confirm it. We can see the key is tree and the value is trauerweide. Okay, let's create the second ConfigMap. It is already stored in a yaml file, so we only need to apply it: the name of the ConfigMap is birke and it has three keys and values. Let's apply the yaml file and describe it: we can see three keys and three values. Okay, just validate it; validation is successful. The next task is to create a pod and consume these ConfigMaps through an environment variable and a volume mount. The question is: create a pod named pod1 with image nginx and make the key tree of ConfigMap trauerweide available as the environment variable TREE1, which means we need to create an environment variable TREE1 and pass the key tree of the ConfigMap trauerweide into it. The next task is to mount all keys of ConfigMap birke as a volume; the files should be available under /etc/birke, which means we need to mount the ConfigMap birke as a volume at that specific path in the pod. After that we need to test. Okay, let's create the pod first and remove the unwanted lines. First we configure the environment variable; the name should be TREE1. Okay, we can take the help of the documentation: search for ConfigMap and take this as a reference. The ConfigMap name should be trauerweide and the key should be tree. Okay, we have passed the first ConfigMap through an environment variable. Next we need to mount the second ConfigMap, so go to the documentation again, take this volume mount as a reference and provide a name. Okay, this means the items in the volume birke will be mounted to the pod's path /etc/birke. Next we need to declare the ConfigMap as a volume, so go to the documentation: we declare it under the spec, provide a volume name, and the configMap should be birke. Okay, now we have mounted the ConfigMap birke into the pod through volume mounting. Let's save and exit, and apply. Our pod is creating. Okay, our pod is running, let's test now. Okay, the environment variable is there. Next we need to check the volume mounts; the volume mount is also there. Okay, let's validate it. Yes, validation is successful. Congratulations, we have successfully completed this question, and you can expect this question in the examination.
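A rough sketch of this task, assuming the ConfigMap names trauerweide and birke and the env var TREE1 from the killercoda scenario; the yaml file name under /root is not spelled out here, so that path is a placeholder:

kubectl create configmap trauerweide --from-literal=tree=trauerweide
kubectl apply -f /root/<file-from-the-question>.yaml        # the second ConfigMap, birke

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: pod1
    image: nginx
    env:
    - name: TREE1
      valueFrom:
        configMapKeyRef:
          name: trauerweide
          key: tree
    volumeMounts:
    - name: birke
      mountPath: /etc/birke
  volumes:
  - name: birke
    configMap:
      name: birke

To test it, kubectl exec pod1 -- env | grep TREE1 should show the variable and kubectl exec pod1 -- ls /etc/birke should list one file per key.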
Welcome to the CKA certification series part 24. In this video we will cover some simple questions. The 25th question is: list the pods in the safari namespace sorted by creation time and save the command to the below path, the pods amp.txt file under the root directory. Without further ado let's kick-start. Let's check the pods in the safari namespace first: we can see some pods running in the safari namespace. We have to sort them, so we can use the sort-by option for that. Done, we can see our pods sorted in ascending order. Sometimes they will ask you to sort in descending order; in that case we can use tac along with this command, and I will show that. Okay, now the pods are sorted in descending order. Actually, the question is not just to display the pods sorted; the question is to save the command into a file, so be careful answering this type of question. Let's copy the command and paste it into the specific file: according to this question we have to save the command into the pods amp.txt file under the root directory. Please note we have to write the full command here: instead of the k alias we need to write kubectl. This type of sorting question may be asked in different ways; sometimes they may ask to sort the pods by priority, so I will show you how to sort by priority too. Yes, now the pods are sorted by priority; you can clearly see that the last two pods have a priority difference. Okay, we have successfully completed this task, and you can expect a question like this in the examination. Welcome to the CKA series part 25. In this video we will learn how to disable scheduling on a node. The question is: create a new deployment named web-deploy using the nginx version 1.16 image with three replicas, and ensure that no pods are scheduled on the node named kworker. Let's check the cluster: we have two nodes, kworker and kworker2. As per the question we have to deploy the pods anywhere except the kworker node; in order to do that we have to change the status of that node to unschedulable, so let's use the cordon command. Now the kworker status becomes SchedulingDisabled. Let's create the deployment: as per the question the image should be nginx version 1.16 and the replicas should be three. Change the replicas to three, save and exit. Okay, let's apply the yaml file; the deployment is up and running. Let's check the pods: we can see all pods are scheduled to kworker2, and this is only because scheduling is disabled on kworker. Okay, we have successfully completed the question. Sometimes the question will ask you to revert the changes, so let's uncordon that node, which means we change the status of the node back to schedulable. Now our node is schedulable again. I will show you how pods are scheduled to nodes in the cluster when there is no scheduling restriction: let's redeploy the deployment. Now we have deleted the existing deployment; let's apply it again. Now we can see the pods are spread across all nodes. I recreated this deployment only to show you this; in the exam you don't need to redeploy, you only need to uncordon the node after deploying. Thank you. Welcome to the CKA certification series part 26. In this video we will cover a question related to draining a node. The 26th question is: mark the worker node named kworker as unschedulable and reschedule all the pods running on it to another node. This question's weightage is 6%. Without further delay let's kick-start. Let's list the pods first: we can see three pods running in this cluster in the default namespace; one pod is scheduled on kworker and the rest are on kworker2. According to the question we have to drain kworker for maintenance and reschedule its pods to other nodes. To drain the kworker node we have to use the cordon command first.
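Before moving on, here is a rough reference for the commands behind the last two tasks; the namespace, node and deployment names are the ones read out here, so adjust them to your question:

kubectl get pods -n safari --sort-by=.metadata.creationTimestamp          # ascending by creation time
kubectl get pods -n safari --sort-by=.metadata.creationTimestamp | tac    # descending
kubectl get pods -n safari --sort-by=.spec.priority                       # sort by priority instead
# when asked for the command itself, write the full kubectl command (not the k alias) into the file from the question
kubectl cordon kworker                                                    # SchedulingDisabled: no new pods land here
kubectl create deployment web-deploy --image=nginx:1.16 --replicas=3
kubectl uncordon kworker                                                  # revert once the task is done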
Let's check the status of the nodes: we can see the kworker node has become unschedulable. Let's check the pods: you can see one pod is still on the kworker node, because cordon only disables further scheduling and does not evict existing pods. For rescheduling pods we have to use the drain command. We can go to the documentation and search for the drain command; this is the command for draining nodes. Okay, let's try it. Oops, we are getting an error: we can see one metrics-server pod is using local storage, and because of this the drain failed. You may get the same error in the exam, but don't worry: you can see the solution in the error message itself, you only need to add the --delete-emptydir-data argument to the command to solve this issue. You can see there is no change in the cluster, so let's retry the command once again. Here we go, done, we have successfully drained the node. Let's check the pods once again: we can see all pods are scheduled to kworker2. Okay, we have successfully completed the question. Now we can make the node schedulable again; in the exam you don't need to do this unless it is mentioned in the question. Let's check the pods now: we can see all pods are still scheduled on kworker2, which means pods will not be rescheduled onto kworker just because kworker is available again. Welcome to the CKA series part 27. In this video we will cover a cluster upgrade question. The 27th question is: given an existing Kubernetes cluster running version 1.26.0, you have to upgrade the master node and worker node to version 1.27.0. Be sure to drain the master and worker node before upgrading and uncordon them after the upgrade. The weightage of this question is 12%. Before going to the cluster you have to go through the documentation: search for upgrade, and you will see documents for different cluster version upgrades; choose the proper one. Here you can see the detailed documentation for the cluster upgrade, with Debian and Red Hat based commands, but in the exam your cluster will be Debian based. You have to practice this question very well before your exam because it is almost guaranteed to be asked, with high weightage marks. I recommend making a cluster on your laptop and practicing well; I will share the link for making a cluster on your laptop in the description. Without further delay let's kick-start. Okay, we have a cluster with version 1.26; according to the question we have to upgrade the master and worker nodes to version 1.27. Let's drain the control plane first. Okay, the control plane is drained and has become unschedulable. Next we need to get into the control plane machine through SSH; now we are in the control plane machine. In the exam you can get into the machine without entering a password. Let's unhold the kubeadm package. Okay, we need root permission to do that, so we can use the sudo -i command to get root privileges. Now we have root privileges; let's update the OS first. Next we have to install kubeadm version 1.27. kubeadm is installed; let's hold the kubeadm package. Holding kubeadm means that once the package is marked as hold, it will not be automatically updated or removed by the package manager; this is not a mandatory command from the exam point of view. Now let's run the kubeadm upgrade to the new version. Please note that this may take more than 5 minutes, so please wait patiently. Done, we have successfully upgraded; we can see a success message on the screen, and if you read the message you can see a recommendation from kubeadm to upgrade the kubelet service. Let's install kubelet version 1.27 and restart the kubelet service.
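To keep the whole flow in one place, the control-plane upgrade sequence on a Debian-based node looks roughly like this; the host names and exact package version strings are illustrative, so always copy the current ones from the official kubeadm upgrade documentation:

kubectl drain controlplane --ignore-daemonsets
ssh controlplane
sudo -i
apt-mark unhold kubeadm && apt-get update
apt-get install -y kubeadm='1.27.0-*'
apt-mark hold kubeadm
kubeadm upgrade apply v1.27.0            # on the worker node this step is: kubeadm upgrade node
apt-mark unhold kubelet kubectl
apt-get install -y kubelet='1.27.0-*' kubectl='1.27.0-*'
apt-mark hold kubelet kubectl
systemctl daemon-reload && systemctl restart kubelet
exit                                     # run exit twice: once for root, once for the SSH session
kubectl uncordon controlplane

The worker node follows the same pattern: drain it from the control plane, SSH in, upgrade kubeadm, run kubeadm upgrade node, upgrade and restart the kubelet, then uncordon.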
Now we have to exit from the control plane machine, so you have to run the exit command two times. Now we have returned to the first machine. Yes, we can see our control plane is upgraded to version 1.27, but it is still unschedulable; let's change that. Perfect. Next we need to upgrade the worker node, so first we need to drain node01. Okay, let's SSH to the node machine. Now we are in the node machine; let's run the sudo -i command to get root privileges, and update the OS first. Okay, next we have to install kubeadm version 1.27. Okay, next we run the kubeadm node upgrade to version 1.27. Done. Next we have to install kubelet version 1.27 and restart the kubelet service. Okay, run the exit command two times to leave the node machine. Now we have successfully upgraded kubeadm and the kubelet on both the control plane and the worker node. Yes, we can see both nodes are upgraded to version 1.27, but the worker node is still unschedulable, so let's uncordon it. Congratulations, we have successfully completed this task. This is a must-ask question, so you have to practice it well. For practicing it I recommend making a master and a worker node on your laptop using VirtualBox. As you can see, on my laptop my worker node has two cores and 2 GB of RAM and my master node has the same configuration, so if you have at least an i5 machine you can build a cluster on your laptop. I posted a detailed video on this YouTube channel and I will share the links in the description, or you can go to YouTube and search for the keyword kubernetes bare metal cluster on ubuntu and you will find the video. In that video I explain how to create a one master and one node cluster on your laptop, and there is a GitHub link where all the commands for building the cluster are available. I also recommend going through the official documentation when upgrading the cluster. If you have a cluster on your laptop then you can easily practice these questions, because this is the highest weightage question in the exam. Welcome to the CKA certification series part 28. In this video we will cover an init container scenario. Question number 28 is: add an init container named init-container into the yaml file; you can find the yaml file at the given path. The init container should create an empty config file under the shared work directory; if that file is not detected, the pod should exit. Once the spec file has been updated with the init container definition, the pod should be created. Without further delay let's kick-start. Let's go to the yaml file location. Yes, we can see a yaml file here. Okay, this yaml file creates a pod named web-pod, and the container name is also web-pod. You can see this container executes a shell script: as per the script, if the config file exists then the container runs for 10,000 seconds, otherwise it exits, which means we have to add an init container that creates this config file under the work directory. Since the pod uses an emptyDir as a volume mount, both containers can access the same file. Let's edit the file. Go to the documentation and search for init container, choose the first link, and here we can see lots of examples. Okay, we can take this one as a reference: let's copy this portion and paste it here. The name should be init-container, and here we have to use the touch command to create the config file. The volume mount name should be the work directory volume. What does that mean? We have an emptyDir volume for the work directory, and we mounted the same volume into the init container; the same volume is also mounted in the main container, so if the init container creates a file there, the main container can access that same file.
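Putting that together, the updated pod spec might look roughly like the sketch below; the image, the /workdir mount path and the config file name are placeholders, since the real names come from the yaml file given in the question:

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  initContainers:
  - name: init-container
    image: busybox:1.31
    command: ['sh', '-c', 'touch /workdir/config.txt']    # create the file the main container checks for
    volumeMounts:
    - name: workdir
      mountPath: /workdir
  containers:
  - name: web-pod
    image: busybox:1.31
    command: ['sh', '-c', 'if [ -f /workdir/config.txt ]; then sleep 10000; else exit 1; fi']
    volumeMounts:
    - name: workdir
      mountPath: /workdir
  volumes:
  - name: workdir
    emptyDir: {}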
Okay, let's save and exit, and confirm the yaml file once more. Perfect, let's apply it. Okay, our pod has started to initialize; let's wait for a few moments. Yes, our pod is running. Let's describe the pod: we can see the pod first ran the init container, which created the config file, and after the init container finished, the pod started the main container. Congratulations, we have successfully completed this task. Thank you for watching this video; if you found it helpful then please do like and subscribe.