Transcript for:
Understanding Red Hat OpenShift Features

Hello everyone, my name is Abhishek and welcome back to my channel. In today's video we will deep dive into Red Hat OpenShift: we will learn what OpenShift is, why OpenShift is the most used enterprise-grade Kubernetes platform, and the differences between Kubernetes and OpenShift. Along with that, we will also explore the OpenShift platform through the user interface as well as the CLI, so this video is going to be quite interesting; please try to watch it till the end.

First things first, let's start with what exactly OpenShift is. Many of you might be aware of Kubernetes, so let's do a quick recap of Kubernetes and then move to OpenShift. Kubernetes is a container orchestration platform, that is, it is used to orchestrate containers. To recap: when you have an application A that you want to deploy as a container on the Kubernetes platform, you create a pod for that one container or group of containers, and a pod in Kubernetes is typically deployed using the Deployment resource or a ReplicaSet. Once you deploy your single pod or group of pods through the Deployment, you create a Service for service discovery, and to control the ingress traffic, that is, the incoming traffic for the containers within your pod, you create an Ingress resource. That is the typical core workflow of Kubernetes. On top of that, let's say your pod wants to read some configuration, so you create a ConfigMap; it can be something like application properties or any other configuration information your application wants to read, even something as simple as JSON data read at runtime. And if that information is sensitive, say you want to read an API token, a password, or a connection string to a database, then instead of a ConfigMap you use a Secret.

Additionally, you will definitely not have just a single pod; pod-to-pod communication is taken care of by the container network interface, so every pod is attached to a container network deployed through the CNI, and communication between your pods is established through it. For your pods to run you have a container runtime, and to make your pods and containers secure you also have things like network policies and admission controllers in Kubernetes. To make sure your pods stay up, you also have pod disruption budgets, requests and limits, and a lot of additional features that Kubernetes provides as add-ons. But the typical workflow of Kubernetes looks something like this, and this workflow is quite robust: Kubernetes is a solid platform, it works smoothly, it is well tested and used by a lot of people.
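To tie the recap above together, here is a minimal sketch of that workflow as manifests. The names, image, and port are made up purely for illustration; they are not from the video.

```yaml
# Hypothetical application "app-a": Deployment + Service, reading config
# from a ConfigMap and sensitive values from a Secret.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
      - name: app-a
        image: quay.io/example/app-a:latest   # placeholder image
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: app-a-config                # plain configuration
        - secretRef:
            name: app-a-secret                # tokens, passwords, connection strings
---
apiVersion: v1
kind: Service
metadata:
  name: app-a
spec:
  selector:
    app: app-a
  ports:
  - port: 80
    targetPort: 8080
```

An Ingress resource would then point at the app-a Service to bring in outside traffic, exactly as described above.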
But there is something that makes organizations, say you work at Amazon or at an e-commerce company that wants to deploy containers at scale, back out from plain Kubernetes and search for an enterprise-grade offering. There are two major reasons. Kubernetes is open source and free to use; anybody can download it and start installing it on their virtual machines, but the first thing the open-source version of Kubernetes lacks is support. Let's say while using Kubernetes you run into an issue; it can be an existing bug in Kubernetes that has a workaround, but you don't know how to implement the workaround, or it can be a question, maybe your architect does not understand how to scale Kubernetes for a particular requirement. It's quite difficult to reach Kubernetes engineers immediately because it's an open-source platform, so what you would do is go to the GitHub repository, create an issue, and wait for one of the Kubernetes contributors to respond. Of course that will eventually happen, but it takes time, and enterprises might not prefer that.

The second thing is the management overhead. Being an open-source project, Kubernetes is not opinionated: it does not tell you to install it only on Ubuntu virtual machines, or only on CentOS, or only on Red Hat Enterprise Linux. It says you can install Kubernetes on any operating system you want, here are the instructions, and you take care of the management yourself. Let's say while installing Kubernetes on CentOS you run into an issue, or you want to run Kubernetes at scale, with 10,000 concurrent users or applications that need huge resources. At some point you will feel that managing Kubernetes clusters is quite tedious: you might have clusters with 50 nodes, or end up with multiple clusters across different regions and availability zones, and you will realize it is a lot of management overhead as well. These are the two reasons; the second one is not a blocker for everyone, because with a proper DevOps team and SRE team it can be taken care of, but the first one is still a problem.

Because of these things, organizations look for enterprise-grade Kubernetes. The enterprise space is nothing but companies that take the open-source Kubernetes project and develop their own distribution, or their own flavor, of it. The first one on the list is Red Hat, which has an offering called OpenShift, and that is what we are learning today. Then you have EKS provided by Amazon, AKS provided by Azure, GKE on Google Cloud, and then there are companies like Rancher which have their own Kubernetes flavors. Enterprises often look at these solutions; of course, under the hood it is still Kubernetes, whether it is Red Hat OpenShift, EKS, AKS, GKE, or Rancher. These vendors are not selling plain Kubernetes to organizations; they take the open-source Kubernetes platform, modify it, and sell it with additional features and their own flavor. When we get to Red Hat OpenShift specifically, you will understand how many features it provides on top of Kubernetes, which makes enterprises decide that this is the solution they need: you are getting Kubernetes plus support, sometimes you get rid of the management overhead, and you get a lot of added features.

Now that we understand why companies look at enterprise offerings of Kubernetes, let's focus on OpenShift.
I will get to the OpenShift user interface in a while; we will explore the different options you can use through it, and you will realize how rich the Red Hat OpenShift user interface is, because you can do practically everything that you do through the CLI, and with great ease.

Now, coming to OpenShift: OpenShift has two offerings, one is self-managed and the second is the managed services side. In the self-managed model, just like you install and configure an open-source Kubernetes cluster, you can use the OpenShift installer (OpenShift has an installer, which makes life a little easier), together with something called RHCOS, Red Hat Enterprise Linux CoreOS. I will not go into the details of the installer and RHCOS here, because it is more of an administrative, one-time activity; we will probably cover the installation of OpenShift in some other video. But you have a similar self-managed installation process where you can install OpenShift on your own virtual machines, with the condition that the machines run RHEL or RHCOS; it has to be Red Hat Enterprise Linux.

Then there is the managed offering, available on all the top cloud platforms. On AWS you have ROSA, Red Hat OpenShift Service on AWS; on Azure you have ARO, Azure Red Hat OpenShift; and there are similar offerings on Google Cloud as well as on IBM Cloud. So either you use these managed OpenShift services on your cloud platform, or you go with the self-managed installation wherever you have your VMs, on premises or in the cloud.

Let me show you on AWS very quickly. If you go to AWS and search for ROSA, Red Hat OpenShift Service on AWS, you can just click Get started. By default you will not have permissions to use ROSA; you have to enable it explicitly and request the required quota if you are on an AWS free tier, because OpenShift needs a lot of resources to be installed. By default, an OpenShift installation is HA, that is, highly available; for high availability of Kubernetes or OpenShift you need three control plane nodes, and for each control plane node it is recommended to have at least 32 CPUs and 32 GB of RAM. The number of worker nodes is up to you: three, four, or six worker nodes is quite common, or even more depending on your requirements. So the default way to install Red Hat OpenShift is the HA configuration.

That is the installation part of Red Hat OpenShift; what else is there? There is also SNO, Single Node OpenShift. If you ask me, "Abhishek, can I install OpenShift on a single node, because I don't want the HA configuration, which is quite heavy, and my requirement is very simple?" then yes: if you have up to 70 concurrent users, not 70 users total but 70 people trying to access your application or platform at once, and not more than that, you can use Single Node OpenShift, where the control plane and the data plane are deployed on the same node. But you need an instance with at least 64 GB of RAM and something like 16 to 32 CPU cores. So that is the requirement, and you can go ahead and install Single Node OpenShift as well.
Apart from this, OpenShift also has something called CRC, which is similar to minikube. Of course it is not equivalent, you cannot directly compare minikube with CRC, but if you want to run OpenShift on your own machine, just like you run Kubernetes through minikube, k3s, or kind, you have CRC. For CRC you ideally need 32 CPUs and 32 GB of RAM; it also works with 16 CPUs, but a higher configuration is preferred. Finally, you also have something called MicroShift. The reason I am covering each of these is so you understand the different ways you can install and set up OpenShift. MicroShift is used for edge cases; it is quite popular for edge computing, so if your organization is doing edge computing or you want an edge-based OpenShift instance, you can go with MicroShift. It is very light: you just need 2 CPUs and 2 GB of RAM. These are the different offerings that OpenShift supports as of today.

Fine, now the biggest question: "Abhishek, what does OpenShift offer on top of Kubernetes? I want to understand that." Kubernetes is the plain vanilla model, as I explained; it provides the things we discussed in the first five minutes, whereas OpenShift can be understood as Kubernetes plus advanced features. For example, OpenShift has built-in CI/CD and built-in networking: when you install OpenShift, by default it comes with CRI-O as the container runtime, and it also sets up overlay networking, since OpenShift ships with its own SDN, along with a lot of other things that are quite good for enterprise organizations to start with. So the container runtime is CRI-O, and just like Calico or Flannel, there is an SDN that is the default in OpenShift. On top of that, OpenShift comes with built-in observability, so you don't have to set up monitoring or do a lot of installation work for it, because observability is there by default. OpenShift also has GitOps available out of the box; you just need to enable it. OpenShift has advanced user management, so you can easily integrate your SSO or your Active Directory with OpenShift. On top of that, OpenShift has something called Operators, which is my favorite feature, because you can install hundreds of Kubernetes controllers, even custom controllers, through Operators; you can also install them through Helm charts, but when we get to the Operators section you will see how an Operator can keep your controller's configuration immutable. We will get to that point, and we will also deep dive into the Operator Lifecycle Manager, which is again available out of the box on your OpenShift clusters. Above all this you have a very rich user interface, something like what you see here; of course a user interface is also available for EKS and AKS, but the OpenShift user interface is very rich and cannot really be compared with other managed offerings. These are some of the things that Red Hat OpenShift provides on top of Kubernetes. Of course, there is also installation management, which is made quite easy in OpenShift using MachineSets and MachineConfigs; we will cover those in the administration part of OpenShift some other time.
Now let's go to the OpenShift platform, because I have been talking quite a lot about the advantages you get through OpenShift; let's explore them using the user interface. If you want to try OpenShift, like I said, you can go to AWS and create a ROSA instance, or you can search for the OpenShift sandbox, the Developer Sandbox for Red Hat OpenShift. Click "Start your sandbox for free" and you get Red Hat OpenShift free for 30 days, though it will not have all the administrative privileges that you see here. You can get a feel of Red Hat OpenShift with it, but if you want to try it completely, with a dedicated Red Hat OpenShift instance for yourself, then you should go with ROSA or ARO, because they are managed offerings and you don't need to run large instances yourself.

Cool, so what is in the Red Hat OpenShift user interface, and why was I saying it is very rich? If you go to the Workloads section you get a list of running pods. These pods are running in different namespaces; by default OpenShift has a lot of namespaces, unlike plain Kubernetes where you basically just have the kube-system namespace with the default Kubernetes workloads. In OpenShift you can see how many namespaces are available out of the box, and each namespace has a significance. For example, there is a namespace called openshift-ingress-operator; this out-of-the-box Ingress Operator sets up an Ingress controller for you. You don't have to do anything: you just install OpenShift, this namespace is there, and within it there is an Ingress Operator. If you search for the ingress namespace, you have the HAProxy router running out of the box, because the default Ingress controller in OpenShift is HAProxy. Of course you can change it, you can use Traefik, the NGINX Ingress controller, or any other Ingress controller, but the default one is HAProxy. These HAProxy pods are already running on your cluster, so you don't need to install an Ingress controller.

Going back, you can see the different pods that are running and manage them through this user interface, deploy, edit, whatever you would like to do. Then there is something called DeploymentConfig, which is very specific to OpenShift, similar to Deployments but with a few additional capabilities. You can manage your StatefulSets, Secrets, and ConfigMaps; of course you can do this in EKS as well, but you have more here, like the Observability tab I was talking about. OpenShift comes with a default observability setup: if I look at alerting, or let me show you the metrics, I can directly enter a query, just like a PromQL query, and get insight into the metrics of the pods running on my cluster. You also get dashboards: say you want a dashboard for kube-apiserver performance, that dashboard is already here, you don't have to set up Grafana or all the plumbing required for dashboards. You can switch between API server components, or change the dashboard to Kubernetes pods in a namespace and provide your namespace; let's take openshift-console, and you can see the CPU utilization of the pods in the openshift-console namespace as a graph of historical CPU usage.
You can also set some targets. So everything you do with observability, that is alerting, metrics, tracing, and dashboards, is available here.

Then there is the user management part, which is very important: when you are working in an enterprise, your Kubernetes or OpenShift clusters should be tied to an SSO, because you cannot grant permissions to each and every user by hand. If you already have an Active Directory, an LDAP server, or another identity provider, you can go to the User Management section and tie up your IdP: click Add IDP and add your OAuth configuration. Say I am using Okta; I can put my Okta configuration here, provide the redirect and everything, and then your OpenShift cluster is tied to that SSO, which is very convenient. Once you tie your SSO to OpenShift, developers only get to see the Developer perspective. Until now I was using the Administrator perspective, but if someone logs in with developer privileges, this is what they see: they cannot see the compute-related things, they can only see observability, the Builds section, and the ConfigMaps and Secrets they have RBAC permissions for.

Going back to the Administrator perspective, you can also manage resource quotas, limit ranges, and custom resource definitions through the UI; in the Custom Resource Definitions section you can create new custom resources through the API and manage their lifecycle.
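As a side note on the identity provider tie-up mentioned above, the cluster-level OAuth configuration for an OIDC provider such as Okta looks roughly like the sketch below; the issuer URL, client ID, and secret name are placeholders, not values shown in the video.

```yaml
# Sketch: registering an OpenID Connect identity provider with the cluster OAuth config.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: okta
    mappingMethod: claim
    type: OpenID
    openID:
      issuer: https://example.okta.com       # placeholder issuer URL
      clientID: my-openshift-client          # placeholder client ID
      clientSecret:
        name: okta-client-secret             # Secret holding the client secret
      claims:
        preferredUsername:
        - preferred_username
        email:
        - email
```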
My favorite part of OpenShift is the Operators. If any of you are not aware of what Operators are, let me take five minutes to explain, because OpenShift takes Operators to the next level: it also comes with something called the Operator Lifecycle Manager. OLM is an open-source project; it is not a Red Hat OpenShift-only thing, you can use the Operator Lifecycle Manager elsewhere too, but Red Hat OpenShift ships with it out of the box.

OK, this might go over your head if you are not aware of Operators, so let me start from the beginning. In Kubernetes, everything, or most things, are controllers, right? Whether you are installing Argo CD on Kubernetes or OpenShift, or Istio or any other service mesh, or Prometheus and Grafana, at the end of the day each of them is a Kubernetes controller. There are different ways to install controllers: you can install them using plain YAML manifests (for Argo CD, for example, you can find the plain manifests and install from them), or you can install them using Helm charts. Additionally, there is another way of installing controllers, which is through Operators. You might already know about manifests and Helm charts, but the Operator way of installing Kubernetes controllers is very robust. Why? Consider what an Operator does when you install a controller through it. Say I have installed Argo CD through the Operator, and for some reason a bad actor, or simply someone who does not fully understand Argo CD, goes to the Argo CD ConfigMap and updates a very important property, something that might break the installation or create issues with Argo CD itself; they have permissions on the ConfigMap, so they modify it. What happens is that the Operator will not allow this modification; it effectively says, "No, you are not supposed to do this, because the configuration I was given does not look like that." The Operator will auto-heal the change, just like GitOps does: Operators continuously monitor the state of the controllers they have installed, whether that is ConfigMaps, Secrets, pod specs, anything, and if a bad actor tries to change something in the controller, the Operator gets rid of that change.

I will give a quick demo as well. Let's go to Red Hat OpenShift and open the OperatorHub, which has the list of Operators; you can install practically any Kubernetes controller through it. Let's search for Argo CD, or simply search for GitOps: there is something called Red Hat OpenShift GitOps, which is the Argo CD offering provided by Red Hat, and it will install Argo CD for you. When I click the Install button, it just asks a few things, like which version you want to install, and you click Install. What this Operator will do is install Argo CD for you, and in a moment, once it is installed, I am going to modify an Argo CD ConfigMap, the default Argo CD ConfigMap that holds all of Argo CD's properties, and you will see that the Operator immediately overrides that change. That is something you don't get with plain manifests or Helm charts: there, if you install something and then you, or a bad actor, or someone without proper knowledge changes anything, the change stays, and it might affect your installation. Now imagine your OpenShift nodes are managed through Operators, or your kube-apiserver is installed through an Operator, which is exactly what happens in OpenShift: everything is installed through Operators. Because of that, your infrastructure and the important configuration of your OpenShift cluster stay immutable. Immutable infrastructure, or immutable configuration, is one of the very important advantages you get in OpenShift, and it is powered by Operators.

I hope the installation is successful; let's see. Perfect, Argo CD is installed. Because I use it on a day-to-day basis, I know Argo CD gets installed in a namespace called openshift-gitops. You can see there are two namespaces when I search for gitops: one is openshift-gitops-operator and the second is openshift-gitops. The first is the namespace where the Operator itself resides, and the second is the namespace where my Argo CD controller resides. Now if I go to that namespace, go to Workloads, and open the Argo CD ConfigMap YAML, let's try to update one of the important configuration keys: I will change admin.enabled from true to false, save it, and immediately you see "this object has been updated". Let's reload and look at admin.enabled: we changed it to false, but it is true again. Let me update it to false and save once more; the moment you do it, the object is updated, and on reload it goes back to true. You can try this with the node configuration or anything on the OpenShift Container Platform that is deployed using Operators, and your changes are reverted.
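For reference, the edit made in the demo targets the argocd-cm ConfigMap that the GitOps Operator manages; a trimmed-down sketch of that edit (in the openshift-gitops namespace mentioned above) looks like this:

```yaml
# The kind of change the Operator keeps reverting: flipping admin.enabled in argocd-cm.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: openshift-gitops
data:
  admin.enabled: "false"   # the Operator reconciles this back to "true"
```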
But then you might be wondering: "Abhishek, what if I actually have to make that change? We were talking about a bad actor trying to change admin.enabled, but what if I really want to change that particular field?" Sometimes you do want to update the Prometheus pod configuration or the GitOps pod configuration. In that case, when you have installed the controller through an Operator, you should go to the Operator itself; for example, this is the GitOps Operator we installed, and you modify the setting through the custom resource provided by the Operator. You need to get used to Operators to understand this fully. When you install an Operator, it manages something called an operand; for the GitOps Operator we installed, this is the operand with its YAML configuration, and in that YAML I would disable or enable the admin privileges. Every Operator has documentation; for example, if you search for the GitOps Operator documentation you will find the steps for how to disable it, and similarly for the Prometheus Operator or anything else. If you are new to Operators you might find this a little difficult at first, but once you get used to them you will really love how they function.

You can install practically anything through Operators. If you go to the OperatorHub one more time and search for, say, Prometheus, you will find a Prometheus Operator, or you can search for AWS and see how many AWS Operators there are. Of course, there are a few things that do not have Operators yet, where the community or Red Hat is still working on developing them, because an Operator has to be written by someone: the Argo CD Operator, for example, is something Red Hat, other companies, and the Argo CD open-source community work on, and at the end of the day it is a Go program, so someone has to write that application in Go or in one of the other supported ways.

OK, going back: now, what is OLM? If you like what Operators do, you will love OLM, the Operator Lifecycle Manager. Previously we installed GitOps through the GitOps Operator. At some point there will be a new version of the GitOps Operator; say right now it installs some 1.x version of Argo CD (I don't remember the exact version off the top of my head; I think the latest Argo CD is around 2.13), and at some point the people who develop the Operator release a new version that installs a newer Argo CD. What the Operator Lifecycle Manager does is let you upgrade the Operators you have, or manage them, update, create, delete, through a single button, so you don't have to bother about upgrading and maintaining your Operators yourself. If you go to the Installed Operators section, then to the Subscription tab, and set the update approval to Automatic, you will not even notice when a new version of the Operator rolls out; it is taken care of automatically through OLM. You can also configure it as Manual, because some organizations don't want a new version all the time; they want to review updates first.
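Under the hood, that update-approval setting lives on the OLM Subscription object. A rough sketch of such a Subscription for the GitOps Operator is below; the channel, catalog source, and namespace are typical defaults and are assumptions rather than values shown in the video.

```yaml
# Sketch of an OLM Subscription; installPlanApproval controls automatic vs manual upgrades.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest                        # update channel to follow
  name: openshift-gitops-operator        # package name in OperatorHub
  source: redhat-operators               # catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic         # set to Manual to review upgrades first
```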
With Manual approval, whenever there is a new version you will see it indicated here, and if you want to upgrade, it is again just a click of a button. So that is the OperatorHub, Installed Operators, and OLM. This is a very, very rich feature that you have in OpenShift; I am not saying you cannot do it with Kubernetes, you can definitely install OLM on plain Kubernetes and do practically everything I have shown here, but it comes out of the box in Red Hat OpenShift.

We talked about observability, user management, and administration; now let's talk about the Builds side. You can have automated CI pipelines set up, and you also get an integrated container registry. If you are using EKS you would use the AWS registry, ECR, the Elastic Container Registry; similarly, OpenShift has a built-in container registry, and you can have built-in CI plus CD. The default CI solution is Tekton and the default CD solution is Argo CD. All you need to do is go to the OperatorHub again: if you want CI, search for the Tekton-based pipelines operator for OpenShift and enable it, just like I enabled the GitOps Operator. Once you enable both of these, you have a CI solution and a CD solution on the OpenShift platform itself, so it comes with an out-of-the-box CI/CD setup as well. There is a lot more about OpenShift we could deep dive into, but I want to keep this as an introduction and cover the most important features.

Now for the CLI: if you click here you get "Copy login command"; you just need to provide the password you logged in with, and once you do, you will see a display token. Copy it and go to your terminal. This is my terminal, and I can paste the login command, which authenticates me to the OpenShift Container Platform through the CLI. OK, it says certificate signature verification failed; if you search for "oc login" there is a way to ignore this, I don't remember the syntax off the top of my head, so I will just search for it. Right, because I don't have that certificate on my machine, I pass --insecure-skip-tls-verify=true, and that's it, I am authenticated to the OpenShift Container Platform with the credentials shown in the UI. Since I have kubeadmin access, I am logged in here as kubeadmin; if I had developer access, I would be logged in with that particular developer account. That's why the SSO integration is important: once you have it, every user can go through this same process, get their own oc login command just like we did here, and execute it. By the way, this cluster will be deleted in the next 30 minutes, so even if you try to log in with this particular token you will not be able to, because the cluster will have expired; I created it for just 4 hours.

Perfect. Once you are logged in, you can do practically everything that you do with kubectl, but OpenShift has something called oc, the OpenShift client (I hope I have the expansion right). You can run the exact same things, like oc get pods, or oc get pods -A, which lists the pods across all namespaces of my OpenShift cluster, but additionally oc has a few more capabilities.
oc has everything that kubectl has, but on top of that it has more, which you can explore using oc -h: in the help output you can see the administrative commands, the auth-related ones, and commands for the built-in registry I was talking about, for example creating a proxy to the container registry, and many other things. You can just run oc whoami and see that you are logged in as kubeadmin. It is preferred to use oc instead of kubectl when you are working with an OpenShift cluster, because, like I said, there are more commands and you get a better experience. To install oc you need an account with Red Hat; just search for installing oc for Windows, Linux, or macOS.

Then you have the Networking tab, and within it, along with Ingress resources, you will see one more thing called Routes. What exactly is a Route? You can consider a Route the Ingress equivalent in OpenShift. Routes are similar to Ingress, but they have a richer TLS configuration: with Ingress, when you want to configure a secure endpoint using TLS, you will probably be using certificates and annotations to make the Ingress secure, providing the path to your certificates or describing how the load balancer should handle the ingress traffic. With Routes it is more simplified, because in the Route TLS configuration you can create three types of secure routes. For example, the GitOps controller we installed has a route called the GitOps server, which is the Argo CD server, and if you look at the YAML of this Route you can see the host is specified, just like Ingress, the service is specified, just like Ingress, and then there is one field that says TLS termination: reencrypt.

In the world of Routes, when you are dealing with secure routes, there are three kinds of termination: edge termination, re-encrypt termination, and passthrough termination; you just state which kind of termination you want. Let me explain what each of them does. Say you have a client, which is basically a user, and you have your pod deployed on OpenShift, with the Ingress controller in between, which in OpenShift is HAProxy by default. Edge termination means that when the client reaches the HAProxy router, that communication is secure, whereas the traffic that flows from the router to your pod is plain, non-TLS traffic. Re-encrypt termination is even more secure: the traffic from the client to the router is TLS, and the traffic from HAProxy, the Ingress controller, to the pod is also TLS. So with edge you have TLS from the client to the Ingress controller and plain traffic from the Ingress controller to the pod, whereas with re-encrypt it is secure TLS traffic from the client to HAProxy and from HAProxy to the pod as well. Finally, you have passthrough: as the name itself suggests, the client's connection passes straight through the load balancer; the router does not interrupt the request, it just passes it through and it directly reaches the pod, and at the pod you handle the secure traffic, for example by attaching a certificate to the pod so the client can communicate with it through that certificate. The load balancer effectively stays out of the picture, and that is passthrough termination.
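A trimmed sketch of a Route like the GitOps server route mentioned above is shown below; the hostname and service name are illustrative placeholders, and the termination field is where edge, reencrypt, or passthrough is chosen.

```yaml
# Sketch of an OpenShift Route with re-encrypt TLS termination.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: openshift-gitops-server
  namespace: openshift-gitops
spec:
  host: openshift-gitops-server-openshift-gitops.apps.example.com  # placeholder host
  to:
    kind: Service
    name: openshift-gitops-server
  port:
    targetPort: https
  tls:
    termination: reencrypt    # other valid values: edge, passthrough
```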
So you have edge termination, re-encrypt termination, and passthrough termination, and all you need to do is specify that in the termination field of the Route; based on whether it is edge, re-encrypt, or passthrough, the router handles it. When you deal with the same thing through Ingress you need a little more configuration, because in the Ingress world different load balancers handle TLS in different ways, typically through annotations, and this kind of rich, uniform configuration is not available.

Let's do a demonstration: I will create an application, create the Service for that application (which is deployed as a pod), and then create an OpenShift Route for it, and we will see how the route configuration is read by the Ingress controller and how we can access the application publicly. For that we will use a combination of the UI and the CLI, now that we know how to use both.

First, I will go to the Deployments section and click Create Deployment. I am using the default namespace; you can use whatever you like. A sample deployment YAML is provided when you switch to the YAML view; you can ignore it, and I will put a link in the description to a document that has the Deployment, Service, and Route configuration I am using, so you can use the same thing. This is the deployment: a sample Gin application with two replicas of the pod. And if you are wondering what quay.io is here, instead of Docker Hub I am using quay.io; it is similar to Docker Hub but completely open source and free, and because Docker Hub has a rate-limiting issue, people also use Quay, where there is no rate limiting. Click Create, and instantly I have two replicas of the pod running; click on Pods, perfect, two pods are running.

Now let's create the Service for it, and the service we will do through the CLI. Again I will copy the YAML configuration, and in the CLI let me create a file called gin-app-svc.yaml.
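For reference, a sketch of what gin-app-svc.yaml might contain after stripping the generated fields is below; the exact labels and the port number are assumptions, since the video only describes them verbally.

```yaml
# Sketch of the ClusterIP Service for the demo Gin application.
apiVersion: v1
kind: Service
metadata:
  name: gin-app
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: gin-app            # must match the pod labels from the deployment
  ports:
  - port: 8080              # service port (assumed value)
    targetPort: 8080        # container port, kept the same as in the video
```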
Of course the file name doesn't matter; I name it that way just for the purpose of understanding. Because I am copying this from the UI of another cluster where the service already exists, the YAML has all the generated fields of the created service, so I will remove those; all we need are the important bits. The namespace has to be default; the clusterIP field is not required, it will be auto-assigned; you need to make sure the port and targetPort are correct (I have made sure the port matches the container that is deployed); the selector should match the pod; and the type is ClusterIP. Now let's run oc apply -f gin-app-svc.yaml, which should create the service for us, and if we do oc get svc we should see the gin-app service. Perfect, it is of type ClusterIP, which is why there is no external IP address.

The final thing is to create a Route. To create a route we can use the CLI, with a simple command along the lines of oc create route edge --service=gin-app --insecure-policy=Redirect: "edge" is the termination type I explained, --service takes the name of the service, and the insecure policy of Redirect is another important part. What does it mean? Take amazon.com: by default amazon.com is HTTPS, but what happens if you try to access http://amazon.com or http://google.com? Your request is automatically redirected to HTTPS. Who does that? The load balancer: when it receives the HTTP request it can automatically redirect it to HTTPS, but in OpenShift you need to state that through the route configuration, and this is the field that does it. When you set the insecure policy to Redirect, it means that if someone accesses the route over HTTP, the request is redirected to HTTPS.

Now, as soon as the route is created, who takes care of it? In the oc create command I haven't provided any hostname or any other configuration, so who receives this route request? It is received by the Ingress controller, and like I explained, in OpenShift the Ingress controller is HAProxy, so HAProxy receives it. We can see that if we go to the ingress namespace and look at the router pods: they update the HAProxy configuration file, just like you have nginx.conf for NGINX, here you have an haproxy.config file. If you exec into any of these router pods, through the terminal in the UI or through the CLI, there is a file called haproxy.config; you can just cat that file and grep for the route. If you run oc get route you can see the hostname assigned to your route, and when you grep the HAProxy configuration for the gin-app you can see how the load balancer forwards the request: it says the request is forwarded to these particular pods, 10.129.2.41 and 10.131.0.70. What are those? If you do oc get pod -o wide, you can see those are the pod IP addresses, 10.129.2.41 and 10.131.0.70.
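The oc create route edge command above produces a Route object roughly like the following; the hostname is generated by the cluster, so it is omitted here.

```yaml
# Sketch of the Route created for the demo: edge termination with HTTP-to-HTTPS redirect.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: gin-app
  namespace: default
spec:
  to:
    kind: Service
    name: gin-app
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect   # plain HTTP requests get redirected to HTTPS
```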
So when you try to access this application on the route hostname, the request is forwarded to one of these pods in a round-robin fashion. Let's see if that is correct: I will take the host from oc get route and curl it over HTTP, not HTTPS. You can see curl reports the request being redirected; when you pass -L, curl follows the redirect and gives you the output the application returns, and with -k it skips certificate verification, and you can see we get "hello" as the output. If you hit the HTTPS URL directly instead, which I am doing here, you get the output "hello" straight away. You can also try it through the browser if you are interested: it says the connection is insecure, let's accept the risk, and you get the output, the JSON data, or you can look at the raw data. So that is the demonstration; you can try it with edge, re-encrypt, and passthrough, and also explore the other options that are in OpenShift.

Thank you so much for watching today's video, I hope you found it useful. If you want me to make a video on any specific feature of OpenShift, let me know in the comment section and I will try to do it. See you everyone, take care, bye-bye.