Transcript for:
Docker, VMs, and AWS ECS Overview

Sorry, okay, let me rephrase the question. The question goes like this, and I'm trying to figure out whether there's a similarity with what we've learned. We know that when you hang containers, you are hanging them in a network that is non-routed, called the bridge network. In that network the containers talk to each other by some mode, but they're using resources off of the parent Docker host. I was just saying that VMware, and again that's outside the scope of this class, has something similar, and so does Hyper-V: the way it hangs its VMs on the VM host, using VM host resources to slice and dice the memory, the storage, and so on. Is it actually the same?

Yes, actually, let's clarify that a little bit; it's part of the whole microservices architecture. Let's consider this drawing as my virtual machine. And this is the host machine: the physical server that you buy. It has its memory, it has everything, and it has what we call its OS, its operating system. Now, if we are doing virtualization using Hyper-V, what do you do? You install Hyper-V here, on top of the host OS. That gives you the virtualization technology on your host machine. But now I want to have virtual servers on this physical server, so with virtualization I now have multiple virtual servers on my original physical server: VM1, VM2, VM3, and VM4. Let's just take this example. Are we together?

Yes.

Give me a second, please; VM4 is not cooperating. Okay, finally. So I have VM1, VM2, VM3, VM4, and with Hyper-V each VM will again have an operating system installed, right?

Yes.

This is the technology you're using when you're dealing with EC2 instances. At the AWS data center there is a physical host server, and on that physical server they have done the virtualization: virtualization software like Hyper-V, or whatever VMware product is being used, has been installed on that physical server, and on that physical host you now have many VMs. So when you launch EC2 instances just in time, ten of them can be on this physical host and others can be on that physical host.

But when you're dealing with Docker, we still have our physical host. Let me make the drawing a little bit bigger. This physical host, by default, has its operating system; let's call it Linux in this case. On top of the OS, what do we do? We put our Docker technology. So Docker now runs on our OS, and there is no virtualization at this stage. Because we're using Docker for containerization, we can now run multiple containers on the OS: one, two, three, four, five, six different containers, and each container is just the application and its binaries. Each container will just be the application binaries, the application code, and all the dependencies, and we extrapolate that to every container. Does that make sense?

I think it does, because it doesn't have any OS, just the binaries.

Yes, it doesn't have any OS.
So all the containers running on my Docker host will be running as processes on the host machine. They will be using the CPU, the memory, and some network capabilities of our host machine. All these containers share the CPU, they share the memory, and they share the network, the default network of the host machine (I'm not talking about the virtual LAN itself). That's why they can all use the host IP and some port on the host to reach the outside: because they are running as processes on the host machine itself. Does it make sense?

Follow-up question, Prof. Is that the reason why we can't actually log in interactively into the container, because it's actually a process running on the host machine?

It runs as a process on the host machine, yes. What do you mean by logging in interactively?

You can actually get a terminal. For example, with EC2, as you described, you can easily log in, put in a username and password, and get in, because it has an OS; or with the VMware VMs you talked about, you can log in because there's an OS. With this, you cannot do that.

I think I understand what you're trying to say. You cannot log in like that with a container, but you can interactively attach your terminal on the host machine to a terminal in the container. That's why when you do `docker exec` with the interactive-terminal flags, you then have a terminal in the container itself, so you are actually inside the container. But you cannot log in directly into the container, if I understand your question well.

I get it. Let's go ahead.

Any other questions?

Can you scroll a little bit? So there's a big difference between virtualization and Docker containerization, right?

Yes, but understand that what we are doing now still uses the same technology, because what we've done is to take an EC2 instance.
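As a concrete sketch of that point, assuming a local Docker daemon and a stock public `nginx` image (the container name `web` is made up for illustration), attaching a terminal looks like this:

```shell
# Start a container in the background (detached), from a public image.
docker run -d --name web nginx

# There is no OS to "log in" to, but we can attach an interactive
# terminal to a shell process inside the container's namespace:
# -i keeps stdin open, -t allocates a pseudo-TTY.
docker exec -it web /bin/sh

# Inside, `ps aux` shows only the container's own processes; from the
# host, those same processes are visible as ordinary host processes.
```

So "entering" a container is really just starting one more process in it, which is exactly why there is no username-and-password login step.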
It is already using the virtualization technology: we've gone to the physical host in the AWS data center, we've selected one EC2 instance from there, and on this EC2 instance we're installing Docker, and now we're putting containers here.

Okay, so Docker is the one that does the magic.

Docker is the one that does the magic, but we're using a virtual machine to actually run Docker and put the containers in the virtual machine. Are we together?

Yeah.

So let's go to ECS. ECS stands for Elastic Container Service, and ECS is the AWS product for orchestrating containers. What does this mean? If you have your containers, if you have the images which you use to run your containers, then you need some platform to be able to run those containers, right? Because look at what we're doing with Docker: you do `docker run`, and to be able to reach those containers you have to do some port mapping, and you're reaching those containers via the IP address of the host plus some port. That is not an ideal way of doing it. Let's say we have an application that is running 500 containers, or 200 containers: we will not be able to access our application using that kind of approach.

I already said that Docker by default has some services you can use to orchestrate containers, and we said we have what we call Docker Swarm. Docker Swarm is a technology you can use for Docker to manage or run multiple containers and orchestrate those containers. We are not treating Docker Swarm; we are looking at other technologies that do this better. From the AWS perspective we have ECS, and from the open-source perspective we have Kubernetes. Are we together? These are the three competitors, if you want to put it like that, that you can decide to use to orchestrate container workloads in your environment. Any questions?
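To make the port-mapping pain concrete, here is a minimal sketch (the image and port numbers are illustrative, and a local Docker daemon is assumed):

```shell
# Map host port 8080 to container port 80 on the bridge network.
docker run -d --name web -p 8080:80 nginx

# The only way in from outside is host-IP:host-port:
curl http://localhost:8080

# Every additional copy needs its own unique host port
# (8081, 8082, ...), which is exactly what does not scale
# to hundreds of containers, and why an orchestrator helps.
docker run -d --name web2 -p 8081:80 nginx
```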
Prof, what about Fargate?

I'll get to that. Fargate is not ECS; it's just a runtime, just a compute engine. I'll get to that.

So ECS is a managed container orchestration service that helps you to efficiently deploy, manage, and scale container workloads in AWS. Are we together? I want us to look at certain core concepts of ECS.

Can you just repeat that definition? It sounded very simple and straightforward; I just want to capture the definition of what ECS does as an orchestrator.

I said ECS is a managed container orchestration service. Don't worry, I'll give you a snippet from my OneNote.

Okay, all right, thanks.

But just so you understand it: it is a managed service from AWS which you can use to deploy and manage containerized applications. Like I said, there are other services that do the same thing: from Docker you have Docker Swarm, from AWS you have ECS, and from the open-source world, the CNCF, you have Kubernetes.

So let's look at some core concepts of ECS. ECS uses what we call an ECS cluster, or just a cluster if you prefer. And what is a cluster? Who can tell me what a cluster is?

A collection of servers or nodes for a particular function, I guess. A collection of servers; I think that's the simplest way I can put it.

Exactly. You can have a collection of servers, a collection of services, etc. Just the fact that you have some sort of grouping means you have a cluster of those resources. ECS uses a cluster to be able to run workloads, and a compute cluster is based on two main types of infrastructure: EC2-type clusters and Fargate.

You're writing; pardon? The second one, I didn't write it down. Fargate?

Yes. You have EC2-type clusters and you also have what we call Fargate clusters.
For EC2-type clusters, you already know what an EC2 instance is: you have a collection of EC2 nodes, and those nodes provide the compute capacity for you to run your containerized workloads, meaning your Docker containers. Makes sense?

Sorry, Prof, is the compute cluster a type of ECS cluster, or something different?

I'm saying "compute cluster" because I'm looking at clusters that provide compute resources, compute power, for running our workloads. These are containers; these containers need a place to run, and they need some compute resources to be able to run. So these compute clusters are of two types, the EC2 type and the Fargate type: either EC2 instances provide the compute resources needed for running your container workloads, or Fargate does.

What is Fargate? Have some of you heard of Fargate before?

Yes.

Good. What is Fargate?

I've heard of it, but I can't really connect it to anything we've learned yet. Is it the AWS-managed EC2 resource or service?

Basically, Fargate is a serverless compute engine from AWS.

Oh yes. That was an exam question, though.

Sorry. So it's a serverless compute engine, or compute service, from AWS. Once you're building a serverless stack, Fargate is one of the things you would incorporate, because with serverless you don't spin up instances yourself. When you need compute power with EC2, you go to the EC2 console, you spin up an instance, you look for the AMI, and you look for an instance type that gives you the CPU and the memory that you need. With Fargate you don't have to do any of that, because it's serverless: AWS completely manages everything for you in the background. With Fargate you only say how much memory I want and how much CPU I want, and AWS makes sure that that capacity is provided for your compute requirements. Are we together?
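Assuming a configured AWS CLI, the two kinds of cluster capacity might be sketched like this (the cluster name is made up for illustration):

```shell
# A cluster whose default capacity comes from Fargate; AWS provisions
# the underlying hosts, so we register no EC2 instances ourselves.
aws ecs create-cluster \
  --cluster-name demo-cluster \
  --capacity-providers FARGATE FARGATE_SPOT \
  --default-capacity-provider-strategy capacityProvider=FARGATE,weight=1

# For an EC2-type cluster you would instead create the cluster and then
# launch EC2 container instances (typically via an Auto Scaling group)
# that register themselves into it.
```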
Yes.

What's the difference between Fargate and Lambda?

Lambda is code: you're running code in Lambda. Fargate is a compute engine.

But where does the code run? The code also runs on AWS infrastructure.

They might be using Fargate in the background, but with Lambda, remember, you need some code to create a Lambda function, and it runs in the background. Let's not talk about Lambda now; when we get to serverless we'll talk about Lambda there. I'm bringing up Fargate here because Fargate is one of the capacity providers for ECS. What do I mean by that? ECS has two types of capacity providers, and capacity is basically the infrastructure where you can run your ECS workloads, your containerized workloads. The two types of capacity providers are the EC2 type and Fargate. Are we together?

Yes. Any question?

Not so far from me, but I know I will ask you a question soon.

So Fargate is a serverless compute engine from AWS, and you can use Fargate to run containers without having to provision EC2 instances. Fargate is a pay-as-you-go service: the more you use it, the more you pay, and if you're not using it, you're not paying for it. That's the whole goal of what we call serverless. Are we together?

I was going to ask a question, just to tie some things up for myself. There are three things in my mind when I'm thinking of resources: there's compute; there's storage, which will be things like databases and the other places where you keep data; and then there's this thing called serverless, which for me is an unknown. I'm looking at it and wondering where it fits into the whole matrix, and why it is required for this particular purpose. Is there any other place in the AWS ecosystem where serverless compute resources are needed, or is it only here?

There are other places where serverless compute resources are needed. So why is it called serverless when there are servers in the background? When they talk about serverless, you should think about you, as a user, not managing the service itself: there's no server for you to manage. If you're using EC2 instances, you have a server that you're managing, right?

Right, yeah.

But if you're using Fargate, then you're using a service which is serverless: AWS manages everything from top to bottom. All you tell AWS is "I need a compute engine." It serves the same purpose as your EC2 instance, because an EC2 instance provides you with what we call compute power, and this compute power has CPU and memory, the two main resources. When you launch an EC2 instance, you pick an instance that meets your CPU requirements and your memory requirements. You're managing that, because you have to decide as a user from the onset that my application needs a server with, say, 10 virtual CPUs or 20 GB of memory. Are we together? You decide which server to use, and if those resources are exhausted, your application experiences some sort of throttling, because we are out of resources. But with serverless, with Fargate, you just tell AWS "I want a compute engine with 20 GB of memory and 30 virtual CPUs," and AWS takes care of the provisioning, the scaling, the high availability, the security, and everything else for you in the background.

Got it, yeah.

Are we together?

Yeah. So, just to be sure: when we say serverless, because we're not managing it at all, it doesn't mean it's not on a server?

Yes: it's serverless because you as a user are not managing it. Everything is running on some sort of platform in the background; it's transparent to you. For all this CPU and memory, there must be some compute engine, some physical host somewhere, that's providing that power, right?
It's providing those resources in the back end, yes, but to you as a consumer you have nothing to do with it, so it's serverless from your perspective.

Prof, can you put a limit on the resources? Because you said it scales.

Yes, you can put a limit. If you're looking for a compute engine and you say "I want 30 GB of memory and 20 virtual CPUs," AWS provides you with those resources. Now, if you exhaust those resources, you'll still experience some sort of throttling, but you enjoy the pay-as-you-go model, because if you're not using them, you're not being charged for them. However, if your application exhausts the resources which you initially requested, then AWS is going to throttle you, because you made a hard requirement up front. But Fargate also has auto scaling as a feature of its own, so you can enable it, and then AWS will keep scaling, keep adding memory and CPU as per the requirements of your workload, and your bill keeps increasing. That's why people keep putting limits: because we don't want exorbitant bills at the end of the month. But if that's not your problem, you can tell AWS to provide as much CPU and memory as your application needs, and they're going to do just that for you. Are we together?

Yes, Prof.

Good. So we're seeing that the compute clusters for ECS, the ECS clusters, utilize compute capacity from Fargate-type clusters or from EC2-type clusters. Are we together?

Yes, Prof.

And these are what I also refer to as capacity providers. I want to take about 30 minutes to discuss the theory, then we'll have a lot of conversation as we do the hands-on.

ECS also has what we call a task definition. A task definition is a resource in ECS that you use to define, or to configure, your application. It's a JSON file used to configure our application. Are we together?

Question.

Go ahead.

This definition file: does it resemble any of the ones we've done, like the instruction file that we created, the Dockerfile? Is it the same concept of having an instruction definition file, only written in JSON?

They are a little different. You would compare this to something which we are still to cover in Kubernetes, called a pod. For a pod you would also have a file that defines the pod, and inside that pod you would have a configuration for your container: the image to use, the ports to open, the resources to be used, and so on. When you're dealing with ECS, all of that is defined in what we call a task definition.

Let me hold that thought; I will ask the question again later.

This will make much more sense once you actually see it and use it in the hands-on. And I'm going to share a script, a small document, that highlights most of this stuff.

So, as we go on: you use the task definition to prepare your application for ECS. You create a task definition, a JSON file, and in it you specify the parameters needed by the application: how much CPU is needed, how much memory, the Docker image to use, what ports to open, what volumes are to be used by the application, and the networking mode that will be used by the container. All of these are part of the task definition. Are we together?

Yes, sir.

Once you have this config file, you can use the task definition to actually run your image in ECS by creating what we call a task. You use a task definition to create a task, and a task is a running copy, or running instance, of the application.
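As a sketch of what such a file can look like (the family name, image, account number, and role ARN are illustrative placeholders, not the course's actual values), a minimal Fargate-compatible task definition might be:

```shell
# Write a minimal task definition to a file, then register it with ECS.
cat > taskdef.json <<'EOF'
{
  "family": "web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "environment": [{ "name": "APP_ENV", "value": "demo" }]
    }
  ]
}
EOF

aws ecs register-task-definition --cli-input-json file://taskdef.json
```

Note the nesting the lecture describes: the container definition (image, ports, environment variables) sits inside the task definition, alongside task-level settings such as CPU, memory, network mode, and the execution role.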
When I say application here, I mean the complete image, or the container, because if we are running an image, then we have a container, right?

Yes.

So: a container, the necessary configuration, the necessary volumes, the ports to open. Are we together?

Yes, Prof.

So a task is a running copy of an application. There are different ways of running this application with ECS: either you create what we call a task, a standalone task as I will call it, or you create what we call an ECS service. An ECS service is still going to depend on our task definition. Are we together? Please, if you have a question, ask. So I have my task definition, and we can use the task definition to create a standalone task, or we create a service, and embedded in this service is a task. A service is basically a higher-level abstraction that you can use to manage tasks. You guys are very quiet.

Please can you repeat the service part again, Prof?

So, in very simple terms, a service is a higher level of abstraction over a task, because embedded in the service you still have tasks, but you can use the service to manage multiple tasks. If we take our task definition and use it to create a standalone task, we have just one instance of the task, one instance of the running application. But now let's say we want multiple instances, for example because the application is supposed to withstand load: you need multiple instances of that container to be able to withstand the load, and a standalone task is not the best way to go about that. You would create what we call a service, and because the service is a higher level of abstraction, you can use the service to manage multiple tasks.

So Prof, is the service part of the task, or is a task part of the service? Is the service the master in this sense, and the task the slave?

A task is part of the service: you can use an ECS service to run and maintain multiple tasks. But I don't want you to think about it as master and slave.

Is this the same thing as horizontal pod auto scaling?

It's something like that. No, actually, it's not like horizontal pod auto scaling; it's something like a Deployment in Kubernetes. But if you've not used Kubernetes, you won't understand what I'm trying to say. If we compare it to Kubernetes, a task would be our pod, and a service would be our Deployment or DaemonSet. Are there some people with any Kubernetes knowledge here? Sorry about this.

Not at all. Not yet.

Then why did I write it? Let me remove it so I don't get you confused; there's no point explaining something using a concept you don't know yet.

Okay, so please explain the service again so that it is clear, because I was trying to use what you wrote just now to tie it together in my brain. When you said "Deployment" it made sense, but now I don't... Is the service managing the task?

Yes, a service will be managing tasks, exactly. As I wrote: you can use a service to run and maintain multiple tasks. Let's begin from the start: do you understand what a task is? If you do not understand what a task is, let's start from there.

Are they on separate nodes, or are they all together?

They can be on separate nodes; that is another concept called placement strategies, and we're not there yet. Do you understand what a service is?

Start from the task.

So we'll start from the task definition. I thought I had this in my OneNote; give me a second. I actually want to look for a task definition image so you can see what I'm
actually talking about.

Okay. So the service and the task will be part of the same cluster?

The service and the task will be part of the same cluster, yes.

So the task definition is the instruction, the task is the actual implementation of the instruction, and of course the service is the deployment, right?

It's still not gelling yet. Give me a minute; I want to bring up a simple task definition file so we can see what it's all about. Maybe that makes sense. Let me copy the image and put it somewhere you can see it.

Prof, you said ECS services are embedded in a task, but you said a task is part of the service?

No, I said a task is embedded in the service.

Okay, because on your screen now you have, under "service," "embedded in a task"; that's why I got confused.

Give me a second, please; I'll be right back.

Yeah, that was a little bit counterintuitive.

Did I place this somewhere else? No, it's in the right place; OneNote is taking forever to sync. How do I share the other screen? This is what I want; my bad, I placed it somewhere else. Good. So this is an image of a task definition. What did we say? It's a JSON file that you use to define your application. You can see that inside our task definition we define the image to use, the CPU, some ports, what name we want to give the container, what ports we want to open, what roles we want the task to have, the networking mode, and other aspects of the application. This is a task definition; we use this task definition to create what we call a task, and this task is basically a running instance of our application, based on all the configurations inside the task definition. Are we good so far? Hello?

Yes, yes.

Is this the same JSON file that popped up when you were checking the network on our container while it was running?

No, I'm in ECS now. It's still JSON, and JSON has the same structure, which is why it looks very similar to you, but the parameters are quite different; those are two different technologies. This structure, with parameters like "task definition" and "container definition," is peculiar to AWS ECS; you would not have this in Docker. ECS and Kubernetes define essentially the same parameters, but in different ways: once we get to Kubernetes, we'll define the same things you see here, but using what we call YAML. For ECS you're using JSON; ECS doesn't support YAML. We'll deal with that once we get there.

So once you use the task definition, which is basically our instruction in a JSON file, to define our application, we create a task, which is a running instance of that application. You can create a standalone task, and by standalone I mean one instance of the task, one instance of the running application. Are we together?

Yes.

Good. Now let's assume that we have our application and we have one instance of it. You can see that inside a task definition you have a container definition, right? How we want to run that container is defined in our task definition. Can you see my screen?

Yes.

Inside the task definition we have the container definition, where we specify the image that we've already built and pushed to Docker Hub, for example, or to ECR; we give the container a name; we state what CPU this container needs, what ports to open, any environment variables that are needed by the application, any mount points we need, and any volumes that need to be
attached to the container. All those attributes, all those features, are passed in the container definition. The task definition also carries what we call execution roles, and the network mode that the task is going to use. Because this is AWS, there is another network mode introduced here, known as awsvpc. You already understand this, right?

Yes, Prof.

When you see network mode awsvpc, you know what this is: it's using an AWS VPC, and a VPC is the network in AWS. If this were plain Docker, you would not have this type of network mode, but because we want to run the container in AWS, using AWS resources, we have an additional network mode to choose from. It doesn't support only awsvpc, though; it still supports the default modes from Docker: the bridge network, the host network, and the none network. Now, there are different possibilities for using these networks based on the compute engine you are using. If you are using the EC2 launch type, which we already talked about, you can use host and these other network modes; but Fargate as a compute engine places some restrictions, because you're not seeing anything, you don't have access to the Fargate host, so you are not able to use those other network modes when you're using the Fargate launch type. Does that make sense?

Yes, Prof. I understand. That's a lot of information.

Let's go back to our task. We said that we're running one instance of our task. Let's assume that it's Black Friday: one instance of the application will not be able to support the load on our application on Black Friday, right?

Yes.

So for us to deploy our application in such a way that we can have multiple instances, so that we have something like high availability, and so that we can expose our application, we use a higher-
level construct, a higher-level ECS object, which is a service. When you're defining a service, it still uses the task definition, because you need somewhere to define the application architecture: the different configurations that the application needs, all of which are defined in our task definition. So we use the same task definition to deploy a service, and in this service we can tell the service: please run two, or three, or ten, or twenty copies of the task. Are we together? Once we create the service, because the task is embedded in the service, we have multiple tasks: if we set replicas to 20, the service is going to create 20 tasks, and it is inside these tasks that our containers are actually running. Makes sense?

Yes, Prof.

Some of these things get committed to memory once you do the hands-on, so if you do not really get it, just bear with me; once we touch the hands-on, ask me end-to-end questions as you wish. But if you do have a question now, please shoot.

For the task definition, what's the key difference between the one for a service and the one for a standalone task?

They are exactly the same; it doesn't matter. You're using the same task definition either to create one instance of your application or to create multiple instances of your application. It's basically the same thing; we're just running multiple copies of the same application using a service. There's no difference in the task definition; you use the same task definition for both.

But where do we specify whether we're doing a standalone task or a service? On the task definition?

When you're deploying it. I can show you this right now: when you go to the console, you will see it.

So it's not inside the task definition that you define the number of...

No, it's not inside the task definition. The task definition is just a
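Assuming the `web-app` task definition and `demo-cluster` names sketched earlier (placeholders), the two deployment styles might look like this on the CLI; the subnet and security-group IDs are stand-ins:

```shell
# One standalone task: a single running copy of the application.
aws ecs run-task \
  --cluster demo-cluster \
  --launch-type FARGATE \
  --task-definition web-app \
  --network-configuration \
    'awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-bbbb2222],assignPublicIp=ENABLED}'

# A service: a higher-level object that keeps 4 copies of the task
# running, replacing any copy that fails.
aws ecs create-service \
  --cluster demo-cluster \
  --service-name web-svc \
  --task-definition web-app \
  --desired-count 4 \
  --launch-type FARGATE \
  --network-configuration \
    'awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-bbbb2222],assignPublicIp=ENABLED}'
```

Notice that both commands take the same task definition; only the wrapper around it, standalone task versus service, changes.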
blueprint of your application; it's the blueprint of how your application is configured.

Okay. So we're telling the blueprint to generate a standalone task or a service?

We can use the blueprint of that application either to run one copy of the application or to run twenty copies of the application. If you want to run just one copy, you can use a standalone task, or you can still use a service with a replica count of one: if I'm creating a service and I say I want just one copy, it's still going to create just one task. But using a service gives you more features to manage your application. For example, if you're running your application without a service, then in order to expose the application to the outside world, you would most likely be using the IP address of the node on which that container, that task, is running. However, if you want to do it properly, as in a production use case, you would use a service. Let's say we have a service and we set the service to deploy four tasks, so the service has task one, task two, task three, and task four: different tasks that are running the same piece of our application. This is our application, and for us to be able to utilize all of it, we need some sort of load balancing in front of it, right? So we can then place an ELB, a load balancer, that load-balances traffic from the outside world to the tasks that are being managed by our service. The load balancer comes in here and distributes traffic to the backend tasks. Does that make sense?

Can we convert a standalone task to a
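Wiring the load balancer in is done when the service is created; a hedged sketch, with a pre-existing ALB target group ARN and the earlier placeholder names:

```shell
# The service registers each task into an ALB target group, so the
# load balancer, not a node IP, becomes the public entry point.
aws ecs create-service \
  --cluster demo-cluster \
  --service-name web-svc \
  --task-definition web-app \
  --desired-count 4 \
  --launch-type FARGATE \
  --load-balancers \
    'targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tg/0123456789abcdef,containerName=web,containerPort=80' \
  --network-configuration \
    'awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-bbbb2222]}'
```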
Can we convert a standalone task into a service? No, you just reuse the same task definition and deploy a new service. Okay. So there are multiple advantages of using a service, because apart from load balancing you can also configure something somebody mentioned here. What did you say? HPA. Forgive me if you don't know what this is; it's a concept in Kubernetes called Horizontal Pod Autoscaling. I was writing "load balancing". Horizontal Pod Autoscaling, come on. Is Victor with us? I'm still with you. Good. If you're deploying your application just using a task, you're not able to achieve this horizontal autoscaling. What does this mean? It basically means that we have our service. We created a service; how many tasks did we ask the service to create? Four. So we have four tasks, and we anticipated that our four tasks would be able to handle the load on Black Friday. It turns out that so many people heard about our sales, we had so much discount, that instead of the 100,000 people we thought would be visiting our website, we had 2 million. We did load testing and we knew that four tasks could handle 100,000, so because we have 2 million people, we need more tasks to handle that load. Are we together? Yes, it makes sense. Good. For this autoscaling to take place, in order to handle the additional or unexpected load, the service has a feature called service auto scaling. This is another advantage of using a service, and it is exactly the same idea as HPA, which we'll get to. What happens is that the service is able to add additional tasks based on the load
it is experiencing. Initially we said we want four, but because the load on our website came in higher than expected, the service has that in-built intelligence to scale out on demand. If you're running your application just using a standalone task, you're not able to make use of these robust features. Sorry, bro: obviously a service can do more than a standalone task, but is there a typical use case where you'd use a standalone task, maybe just for testing? And is there a cost implication of using services versus standalone tasks, or is it the same since it's pay as you go? It depends on the features you're using with the service. Like I said, if you want load balancing, a load balancer in front of the tasks running your application, a standalone task can't give you that; if you're using a service and you enable load balancing, then the cost is different, it does increase. Did I answer your question? Can I go ahead? Yeah, go ahead. So apart from service auto scaling there is also what we call AZ, Availability Zone, rebalancing. Let's start from the architecture: you've created your cluster and you're using the EC2 launch type, because we're all familiar with that. When you're creating multiple EC2 instances, you can decide that in this region you want to place your EC2 instances across three, four, five different Availability Zones. Are we together? Good.
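To make the Black Friday scenario concrete: ECS service auto scaling is configured through Application Auto Scaling. A hedged sketch, with all names as placeholders; the two `aws` calls are shown commented out because they need a real account, and the target-tracking policy JSON is what we actually build and check here:

```shell
# Target-tracking policy: keep average CPU around 70%, letting the
# service's desired count move between a min and max task count.
cat > scaling-policy.json <<'EOF'
{
  "TargetValue": 70.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleOutCooldown": 60,
  "ScaleInCooldown": 120
}
EOF
python3 -m json.tool scaling-policy.json > /dev/null && echo "policy JSON OK"

# Against a real account (placeholder names):
# aws application-autoscaling register-scalable-target \
#   --service-namespace ecs \
#   --resource-id service/demo-cluster/demo-service \
#   --scalable-dimension ecs:service:DesiredCount \
#   --min-capacity 4 --max-capacity 20
# aws application-autoscaling put-scaling-policy \
#   --service-namespace ecs --policy-name cpu-70 \
#   --resource-id service/demo-cluster/demo-service \
#   --scalable-dimension ecs:service:DesiredCount \
#   --policy-type TargetTrackingScaling \
#   --target-tracking-scaling-policy-configuration file://scaling-policy.json
```

This is the same idea as Kubernetes HPA: a target metric drives the task count up when load rises and back down when it subsides.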
So if you're using a service, the service already has a mechanism to balance your tasks across Availability Zones. If I say create me four tasks, and I'm using four EC2 nodes across AZs, the service by default tries as much as possible to place a task on each EC2 instance in each AZ. Does that make sense? That is, everything being equal. Why do I say everything being equal? Because there are other things a service might consider before it places a task on a node, for example: are there resources available to handle that task? Say the task definition states that this task should be placed on a node that has, let's say, 20 vCPUs. If I get you confused, tell me, but what I'm saying should make sense. Look at the task definition from before: in that example we did not specify the number of CPUs that we need, right? Good. But if we specify the number of CPUs and we say this application needs at least four vCPUs, and then we tell the service to provision four tasks, it means the service will be looking for nodes that have at least four virtual CPUs to place a task on. Are we together? Now let's assume that our EC2 cluster has four EC2 instances and each EC2 instance is placed in an Availability Zone. By default, ECS and services will try to place a task in every AZ, everything being equal, meaning the CPU requirements are met and the other constraints are met. However, if it needs to place a task on a node and that node does not have the resources needed by that task, then it can't place the task there. Dr. Chus, Francesca, Pamela, are we here? Michael? Yes sir, following. Are you sure you're following? This is important; if something is not clear, please stop me. And to be honest, there are so
many things that I might not cover during our conversation, but if you ask me a question it gives me the opportunity to deep dive. Okay, so by default it tries to put a task in every AZ, but what is AZ rebalancing? Let's assume that when we initially launched the tasks on four EC2 nodes, the service placed each task on an EC2 node in an AZ, so we already had that balance. But something happened to AZ one and AZ two, and the EC2 instances in those AZs were destroyed. Because the service is able to manage multiple tasks, it will try as much as possible to redeploy additional tasks in order to meet the initial requirement; we said we want four tasks, right? The four tasks were placed on four nodes, and two nodes are offline for one reason or another. The service has the intelligence to know that half the tasks it's managing have failed because the underlying infrastructure is not there, and because it discovers that, it tries to reprovision new tasks. At that point its first goal is to meet the desired number of tasks: we need four. But because AZ three and four are not available, the service is going to create those two additional tasks in AZ one and AZ two, so in AZ one and two we now have two tasks each, to meet the desired count. Then later on, AZ three and four come back online and you have new nodes in those AZs. Availability Zone rebalancing is the ability of the service to discover that AZ three and four are back online and to rebalance the tasks across AZs: the flow is that it goes to AZ three and four, provisions new tasks, then reduces the number of tasks in AZ one and two. Makes sense? Yes. Good. So the service has many features: you have
service auto scaling, Availability Zone rebalancing, and rolling updates. Basically, with rolling updates, once you're updating your service, the service is able to create new tasks based on the new revision of your task definition, and once the new tasks are running successfully it can deprovision the old tasks with the old version of your application. Are we together? Please go on. So rolling updates, just from the name: it's updating your application in a rolling fashion. What do I mean? Let's imagine this is our initial task definition, and in it we had an image at version one. With version one of our image we have our first task definition. Task definitions are also versioned in ECS, so let's call the task definition using version one of our image version one of our task definition. Now we discover from the users that there are some bugs and we need to fix them, so we build a new image; we've already seen that, right? Once we build a new image, what do we do? We tag our image as the new version two. Once we tag the image with version two, we can update our task definition to a new revision. So we have two revisions of our task definition: the old one with version one of our image, and the new one with the updated version of our application. Are we together? Now we can use a service to update the end-user application with rolling updates. What does this mean? These four tasks were running on version one of our task definition, and now we have a new task definition revision, version two. With rolling updates, the service is going to provision additional tasks with version two, and once those tasks are confirmed running, it deprovisions our version-one tasks. Does it make sense?
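A rolling update like the one just described usually comes down to one call: point the service at the new task definition revision. This is a hedged sketch with placeholder names, needing a real account to run; the deployment-configuration numbers control how many old tasks may be stopped and how many extra new ones may run during the rollout:

```shell
# Hypothetical sketch: roll the service from demo-task-def:1 to :2.
# minimumHealthyPercent=100 -> never drop below the desired 4 tasks;
# maximumPercent=200 -> allow up to 8 tasks while old and new overlap.
aws ecs update-service \
  --cluster demo-cluster \
  --service demo-service \
  --task-definition demo-task-def:2 \
  --deployment-configuration minimumHealthyPercent=100,maximumPercent=200
```

ECS then starts version-two tasks, waits for them to report healthy, and drains the version-one tasks, which is exactly the rolling behaviour described above.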
Is it similar to blue-green? Similar, but with blue-green you can have two versions running at the same time, while a rolling update just ensures the new task is running and then deprovisions the old version of your task. Are we together? And with a service you can also have what we call blue-green deployments. What is a blue-green deployment, can somebody tell me? Isn't it when you test in the green, and once you make sure that everything is working in the green, then you just move over? Exactly, then you switch your traffic. So blue-green is basically having two versions of my application running: I have my version one running and my version two running, and I can direct traffic between them. Let's say I have 100 people actually using this application; I can say 60% of the traffic should stick to version one and 40% of my traffic goes to version two. Are we together? Yes. So once the 40%, the people using the new version of our application, say everything is going fine, then I switch all the traffic from version one to version two. Makes sense? So once you switch traffic to the new version, what happens to the old version? You decommission it, because you had two versions of the application, one new and one old; you added new features and you were testing them. Do you see what AWS is always doing on the console, where we get a new version of the console? That's blue-green. Not everybody experiences that; they might switch some percentage of their traffic to the new console, maybe 10%, maybe some
specific customers, onto the new console for them to test. Once they say this is good, they increase the percentage of traffic that uses the new console, and at some point, if everything is good, they tell you that this console version will be off at the end of the month. At that point they switch all the traffic to the new version of the console and the old one is gone. That's blue-green, a very simple example. Are we together? Any question on that? So when there's another update, does the green become blue after the decommission? Yes, the same thing happens. Which one is always the current, is it the blue? One of them is the current, so the same strategy happens when you have a new version of the application. At the point where I have blue and green, one is the newer version of my application and one is the old version. Once we are confident that everything is fine, we promote everything to, for example, the blue, and that version is live. If two months down the line we have a newer version, we do the same blue-green again; what was the new version then becomes the old version. Makes sense? Good. What's the time? Good, if there are no questions I'll stop here and we can get to the hands-on; I think we'll have a lot of talking as we go through the hands-on. Okay, good. Emma, can you please pause the recording? I want to be able to push my Docker image to ECR, so on the Docker host, let me use it directly from the browser: I go to Instances and I go to Instance Connect, because I want a
terminal on my Docker host. On this Docker host you should already have your AWS credentials configured, right? You would create an IAM user, run aws configure, and have the user's credentials present and configured so that the instance can make calls to AWS. Are we together? Should be there. Great. In order to push our image to ECR, we go to ECR. What region should I use? Oregon. We start from scratch: in Oregon, let's create an ECR repository. For those that have already done it, it's fine, just keep your ECR image and we will use that; I'm doing this for those that do not have an image yet. Okay, so this is ECR, Elastic Container Registry. With this you can share and deploy container software, publicly or privately. You come to ECR, choose either public or private, and you click on Create repository. Then you give your repository a namespace and a name, the same syntax we talked about with Docker Hub. Victor, are you there? I can call the namespace jtech-demo and the name hello-world, because my application is basically just a simple hello-world application. Are we together? Good. ECR will create the repository for us. You can set image tag mutability, specifying whether image tags are mutable or immutable, and encryption settings: by default AWS encrypts your repository using the AES-256 algorithm, or you can use KMS. Please stay with the defaults. Once you do that, you have a private repository; within the registry you have repositories, so now we have a repo that we can actually push our images into.
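The push commands the console is about to show boil down to the sequence below. A hedged recap: the account ID is a placeholder (use the values from your own ECR console), and the `aws` and `docker` calls are shown commented out because they need a real account and a Docker daemon.

```shell
REGION=us-west-2
ACCOUNT=111122223333                    # placeholder account ID
REPO=jtech-demo/hello-world
REGISTRY="$ACCOUNT.dkr.ecr.$REGION.amazonaws.com"

# 1. Authenticate the Docker client to the registry:
# aws ecr get-login-password --region "$REGION" | \
#   docker login --username AWS --password-stdin "$REGISTRY"
# 2. Build, re-tag with the full registry URI, and push:
# docker build -t "$REPO" .
# docker tag "$REPO:latest" "$REGISTRY/$REPO:latest"
# docker push "$REGISTRY/$REPO:latest"

echo "$REGISTRY/$REPO:latest"   # the destination tag the image must carry
```

The key point is the tag format: the image must be tagged with the full registry hostname plus namespace/name before `docker push` knows to send it to ECR rather than Docker Hub.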
You can see that right now there are no images in this repository, but if you go into the repository and open the push commands, AWS gives you the commands to use to push your images to this repository. The first command just gets an authentication token so that your Docker client can authenticate to this registry. Remember that everything that talks to this registry needs to authenticate: the credentials on your CLI, the access key and secret key of the user, and that user needs to have IAM permissions for ECR. Are we together? Do we need to be root or ubuntu? Whichever user you configured it under; that doesn't matter. I'm doing it from root because my user on this machine is not added to the docker group, and I need to build the image and do all that stuff; it depends on where you configured the CLI. Good. So if I go back to my registry, I can just pick the commands for the registry and authenticate. This command runs aws ecr get-login-password for this region (the region is Oregon), logs in with the username and password, and this is the URI of my ECR repository. You can see that it says login succeeded. Once login is successful, it means I can confidently communicate from my EC2 instance to that registry. Now I want to build an image and push it to ECR. Remember the syntax: your image needs to be tagged with your namespace, the image name, and a version. Remember that when we were creating the ECR repository we gave it a namespace, jtech-demo, and a repository name; this is what we have to use to push our images to that repository. So if I come here
and I do docker images, I can see that I already have some images with that namespace. I can build a new image or push an image which already exists; there is an image, jtech-demo/hello-world, that should already be there, and we can push that. Are we together? I have a question: can we attach the IAM role before we push the image? Where do you need an IAM role? You're talking about completely different things. You're communicating with AWS, and for you to communicate with AWS you need permissions. How do you communicate with AWS from a terminal? You configure the terminal; that's something you did when we just started this batch, right? You configure the terminal using credentials. Now, if you create a user and you create access keys and secret keys for that user in AWS, those keys are used for authentication. There are two phases: authentication and authorization. To that user you need to give permissions, so that the user is authorized to do things after they authenticate. The keys are used for authentication, but tied to those keys is also what the user is authorized to do. Okay? So now I want to build an image and push it to ECR. I can build a fresh image, or push one that already exists. For me to build an image, what do I need? A Dockerfile. So if I do docker images... sorry, apologies, I've got to bring us back. So when creating the user, to attach the ECR permission: when you filter the permissions using ECR there are several permissions there. Remember when we were doing this we had to select, I think,
multiple permissions to attach to the user. Which one in particular would be the permission for this case? I'd need to look at that specifically, but you can just give the user ECR full access, so the user has full privileges to ECR and we don't have to troubleshoot. But there's no particular one that says full access ECR; there's one that says Public Access... let me just see: Users, Add permissions, Attach policies directly. Give me a second. I'm using AmazonEC2ContainerRegistryFullAccess. Yes, that's what I'm looking for. Then there's also AmazonElasticContainerRegistryPublicFullAccess. It starts with Amazon: if you type in "container registry", it's the first one right there. Yes, this is it; give this full access and you should have the permissions you need. You can also just look at the permissions inside the policy itself to confirm it's full access to ECR. That should be good. So let's go back. Where was I? I need a Dockerfile to be able to create my image, so I can do a docker build. You understand what this command does? Franchesca, Pamela, I'm assuming that you're pushing your image now to ECR, because we need it for ECS. Is she with us? Victor left us, and others: Flora, Amanda, Leslie. I'm still here, bro. Who said "I'm
still here"? All right. So I want to build a new image, and I'm tagging the image jtech-demo/hello-world. That should build our new image. Are we recording? Our image is built, so if I do docker images I should see an image tagged hello-world, and this is the image. We can also do a docker push, but first we want to tag the image with the specifics of the repository so we can push to it. ECR gives you the command to run: I do a docker tag, and this is the repo. Remember what we said: this is the source tag and this is the destination tag, so we're basically tagging an existing image with what is required to push. Now if I list docker images I should see an image with that name, and this is what we have. Then we can push this image, either directly or using the push commands from ECR, and it will push the image to ECR. Now it's pushing our image; let's go back to ECR and make sure the image is there. Still pushing... now it has pushed and it has given us an image digest, so the image should be available here. This is our image. Are we together? Someone got "denied: your authorization token has expired"; probably just log in again. Okay, good. Did others follow this? Great. Now that we have our image in ECR, let's go to ECS; we're done with ECR for now, so I can close the tab. I go to ECS, Elastic Container Service, and we want to create a task definition. Let's start with a task definition. Are we together? Sure. So we create a task definition. What did we say a task definition is? It's a JSON file that defines our task; it defines our application architecture. Okay, see what is happening: I came to ECS and I want to show you the task definition and the task definition parameters. There are different parameters for the task definition; I'm looking for the template
for the task definition, the task definition template, exactly. These are the different parameters you can pass if you want to configure your task definition. Take some time with this template; it's provided for you by AWS and you just configure it: the family, the roles it requires, the execution role, the network mode, the container definitions, and all that. This is where you can create your task definition and configure it, then use it to register the task definition, or you can do it from the console. I'm guessing for the task definition we can use Terraform? Yes, you can use Terraform, and you can use CloudFormation as well. Okay, so we want to create a task definition from the console. With "Create task definition with JSON", if you used what I just showed you to define your task definition based on your specific requirements, you can paste the JSON in here and create the task definition. Are we together? But we can also create it directly in the console. We give our task definition a name; I'll call it demo-task-def. Then infrastructure requirements: what type of infrastructure do we want for this task definition? Here we can select the EC2 launch type, as we talked about, or Fargate. We can leave the defaults as given to us by AWS. What does operating system mean here? Basically, what architecture of OS we want to use. What is the task size? With Fargate you can basically say: I want this many virtual CPUs and this much memory; you can have up to 120 GB of memory if you're using Fargate. You specify those details and Fargate provisions the infrastructure for you. Okay. Then you have what we call task roles. A task role is an IAM role that allows containers in the task to make
calls to other AWS services. If the container running inside your task has to make calls to AWS services, you need to give it a task role. For example, let's assume our container is a data-analytics container, and it needs to analyze data sitting in an S3 bucket. That container needs to authenticate to AWS in order to talk to AWS services; just because the container is running in ECS doesn't mean it has credentials to make API calls to AWS endpoints. So a data-analysis container that needs to analyze data sitting in S3 needs to talk to S3, right? Are we good? You would give that container a task role. Now, there's also a task execution role, and it is different from a task role. The execution role is used by the container agent: on every EC2 instance that you launch to run ECS workloads, there is what we call a container agent, an ECS agent, running on that instance. If you want that agent to be able to make calls to AWS endpoints, then you need what we call a task execution role. Are we good? Good. Then we have what we call task placement. Task placement constraints are constraints you put on the task so that tasks are only placed on EC2 instances that meet specific requirements. You can see here that because this is the Fargate launch type, it doesn't support task placement constraints; if you're using the EC2 launch type, then you can use them. Good. Now we define our container details. Specify the container name; we can call this container hello-world, or jtech.
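As an aside on the two roles just described: both the task role and the task execution role are ordinary IAM roles whose trust policy lets ECS tasks assume them; they differ only in the permissions attached. A hedged sketch, with the role name as a placeholder and the `aws iam` calls commented out because they need a real account:

```shell
# Trust policy allowing ECS tasks to assume the role. The service
# principal ecs-tasks.amazonaws.com is used for BOTH task roles and
# task execution roles.
cat > ecs-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
python3 -m json.tool ecs-trust-policy.json > /dev/null && echo "trust policy OK"

# Against a real account:
# aws iam create-role --role-name demoTaskRole \
#   --assume-role-policy-document file://ecs-trust-policy.json
# For the S3 analytics example above, attach read access (placeholder choice):
# aws iam attach-role-policy --role-name demoTaskRole \
#   --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```

The task role's permissions are what the application container uses (e.g. reading S3); the execution role's permissions are what the agent uses (e.g. pulling the image from ECR, writing logs).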
For the task execution role, how did you get the entry you have there? The only options I have are Create new role and None. Oh, that's because I already have this role in my account. Just keep it on Create new role and it will create a new role for you; because the role already exists in my account, it gets picked by default. Are we together? Good. So in the container section, this is where you specify the container details; remember, we're still creating a task definition. You give the container a name, and it needs an image. Where does the image for this container come from? From our container registry, ECR: this is the container image we just pushed. If you go to the container image there's something called the URI; you copy the URI to your clipboard, then paste it into the container's image URI field. Are we together? It's a private registry, and that's fine. If you're using private registries outside AWS, for example Docker Hub, then you need to pass in credentials so it can authenticate to those private registries. Let's keep it simple for now. Now, do we want to do some port mapping? Remember the port mappings we talked about? You can add port mappings to allow the container to send and receive traffic on specific ports. Do we want to add port mappings? That's fine for us. Read-only root file system: do you want the container to access the host file system in read-only mode only? You can check that. I'm not sure we'll cover it; most of the security topics come up when you do CKS, the Certified Kubernetes Security Specialist, and then you face some of these things. Okay. Resource allocation: this is where you specify the resource requirements of the container.
Container-level CPU, GPU, and memory limits are different from task-level values; these define how much of the resources is allocated to the container. If the container attempts to exceed the memory specified in the hard limit, the container is terminated. You can pass in all of those. Can you please do the hands-on part of this? Give me a second, please; I need to explain some things as I go. All right, good. This is where you specify those details, or do you want me to just click, click, click and move on? We are doing the hands-on; that's what we are doing. Now, if you have environment variables to pass to the container, you pass them in here. If you want to specify logging, you can check the logging and log location; if you want to keep the logs, you specify where to keep them and so on. We don't want that. There are different options: restart policy, how the container is restarted if it fails; health check, whether ECS performs health checks to make sure the container is actually healthy before it starts sending traffic; startup dependencies, and all that. Are we together? And if you want to specify storage requirements for your task, you pass in the storage here; by default, because you're using Fargate, Fargate provisions 20 GB of ephemeral storage, and this is the storage available to the container once it starts. If you want to do some monitoring for your container after the fact, you can use Container Insights to enable monitoring. With that, we can create our task definition. Please, I'm confused here: you said you already created it. Can you please do it from scratch so that we know how you created it? You said you
already created it before. He also said I should select Create new role, and it will create the role for me, because he already created it before; when you select Create new role in the dropdown, it creates a new role for you. Ah, okay. Good, let's go ahead. Now we can see the task definition we just created; we called it demo-task-def. This is the task definition and this is the JSON, the high-level view of it. Because we did not specify many parameters, this is all we get, but if you want a full-blown task definition you can specify all the parameters defined in the task definition template. Are we good? Now, using a task definition we create what we call a task, but as we said, tasks need some sort of compute capacity, a cluster, to run in. If we select this task definition and go to Deploy, you can see that we have two options: create a task, or create a service. Are we together? Just to be sure: we are responsible for specifying all the parameters that go into the task definition, right? It's not as if we get a template from the developers. Correct, this is all on you. The only thing the developer will do, like I said, is give you the Dockerfile; you build the image, you push it to the registry, and you do all these configurations. This is on your side, the DevOps team, you the DevOps engineer; it has nothing to do with the developer. The developer might only tell you that this application needs, say, 10 virtual CPUs and 20 GB of memory, and you make sure that the infrastructure the application runs on meets those resource requirements. That's on you, but the application team needs to tell you the resource requirements of the application.
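The task definition we just clicked through could equally be written as JSON and registered from the CLI. A hedged, minimal sketch: the account ID, image URI, and role ARN are placeholders, and the `register-task-definition` call is commented out because it needs a real account.

```shell
# Minimal Fargate task definition, mirroring the console fields:
# family, launch type compatibility, task size, execution role,
# and one container with a port mapping.
cat > taskdef.json <<'EOF'
{
  "family": "demo-task-def",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "hello-world",
      "image": "111122223333.dkr.ecr.us-west-2.amazonaws.com/jtech-demo/hello-world:latest",
      "essential": true,
      "portMappings": [ { "containerPort": 80, "protocol": "tcp" } ]
    }
  ]
}
EOF
python3 -m json.tool taskdef.json > /dev/null && echo "task definition JSON OK"

# aws ecs register-task-definition --cli-input-json file://taskdef.json
```

Registering the same family again with a changed image tag produces a new revision (demo-task-def:2, and so on), which is what the rolling-update discussion earlier relied on.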
Are we together? Yes. Thank you, good. So if you go to Run Task, you can see that the first thing it needs is some sort of compute capacity, and because we did not create one, we need to create our cluster. To create a cluster, you go to ECS, Clusters, Create Cluster. Let's give it a cluster name; I'll call it demo-cluster. It creates a namespace by default, and we can select the type of infrastructure we want for the cluster: either Fargate or EC2. I'm going to do Fargate in our call; you guys, in your group, try EC2, and if you face an issue using EC2, ping me. All right. You can enable monitoring for your cluster, you can enable encryption for your cluster, and you can also enable tagging so that you can tag the resources related to that cluster. Are we together? Yes. Now, be very careful with this functionality in AWS: please do not just turn it on — Container Insights costs a lot of money, because it scrapes all the metrics of the containers, ships them to CloudWatch Logs, and generates all that data. So be careful. Good, let's create the cluster. This will take some time; in the background, in some data center, somebody is provisioning our cluster. Are we together? Abdala, you joined us? Yes sir. Thank you for joining us today. So, that monitoring thing you told us not to enable — is that because we have other tools we can use for monitoring, or just because of the cost? I'm not turning it on here because of the cost. Obviously there are other monitoring tools you can integrate with ECS; I haven't done that myself, because I've done it with Kubernetes instead. But if you're using ECS intensively — it's a native AWS product — I think using Container Insights might be a good idea, or you explore other options. Yeah, obviously other options can work, because it's all containers: if you want to run something like Prometheus and Grafana, they are all containers, you put them there and it would work.
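The cluster-creation choices made in the console above (name, Fargate capacity, Container Insights left off to avoid the CloudWatch cost) map onto a small set of parameters. A hedged sketch in boto3 `ecs.create_cluster` shape; the tag is hypothetical.

```python
# Sketch of the "create cluster" console flow as create_cluster parameters.
# Container Insights is explicitly disabled here, matching the cost advice.
create_cluster_params = {
    "clusterName": "demo-cluster",
    "capacityProviders": ["FARGATE", "FARGATE_SPOT"],   # Fargate infrastructure
    "settings": [{"name": "containerInsights", "value": "disabled"}],
    "tags": [{"key": "env", "value": "demo"}],          # hypothetical tag
}
```

Switching `"disabled"` to `"enabled"` is all it takes to turn the metric scraping on, which is why it is easy to enable by accident and start paying for it.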
So, one question: what can make the cluster fail to create? What can make our Fargate cluster fail to create? Because I got this: "There was an error creating cluster demo — resource handler returned message: error occurred during operation cluster." He has an error — what is the error? Error code 500. Is this a new account? Yes. There's some authentication that happens in the background from AWS to enable your account to create those resources. Just do it again. I did it again and it worked. Great. So there are account-level settings that AWS enables in the background, a kind of check before you can use ECS; just do it again and you should be able to create it. Are we together? Yes. Good. So once you have your cluster created, we can create a task. Are we good? We go back to the task definition — like I said, everything is based on the task definition. If you select your task definition and click Deploy, you can either create a standalone task or create a service. Let's start with a standalone task. By default it selects the demo cluster, and the compute options are capacity providers or launch type. Once you're using capacity providers, you're able to use multiple capacity providers for launching your task. Fargate has the same concept as EC2 here: remember, you have EC2 on-demand and EC2 Spot, right? Yes. So if you add capacity providers for Fargate, you can also use Fargate Spot pricing; it's the same model as EC2 Spot. Excess capacity is auctioned at lower prices, and if you're using Spot pricing for running your task and demand comes back, AWS will give you a two-minute notification to tell you: that Fargate capacity you're using, we need it back. So your application should be able to handle such interruptions. Okay. So we stick with capacity providers, we're using Fargate, and we create our task. You can see it's not a service, it is a task; it uses the task definition we created above, and revision basically means which version of the task definition it's using — the first revision — and desired tasks: one. Task group: no task group. Then we just create the task. This task uses all the information we specified in our task definition, and using this task we should be able to access our application. Are we together? Hello? While our task is creating, let's create a service. With a service, you still use the same task definition: you go to Deploy, and we want to create a service. You see the different options you have for the service. We're using the same cluster, capacity providers for the compute option, and still Fargate. Task definition: the same one, revision one, and let's give the service a name — I'll call it Hello World Service. Services are of two types: what we call a daemon, and what we call a replica. A replica basically means it will create the number of tasks you specify in the service; the daemon type creates one task for every EC2 instance that is part of the cluster. So if you're using a capacity provider of the Fargate type, the daemon service type is not supported. Right. So if I want, say, four copies of my task, I specify four here. You can turn on availability-zone rebalancing, which we talked about. For deployment options, you can have rolling updates.
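The replica service configured above — four desired tasks on Fargate, with Fargate Spot available at a discount but subject to the two-minute reclaim notice — can be sketched as boto3 `ecs.create_service` parameters. The service name and strategy weights are hypothetical.

```python
# Hedged sketch of a replica service with a Fargate capacity-provider
# strategy mixing on-demand Fargate and cheaper, interruptible Fargate Spot.
create_service_params = {
    "cluster": "demo-cluster",
    "serviceName": "hello-world-service",   # hypothetical name
    "taskDefinition": "demo-task-def:1",    # family:revision
    "schedulingStrategy": "REPLICA",        # DAEMON is not supported on Fargate
    "desiredCount": 4,                      # four copies of the task
    "capacityProviderStrategy": [
        {"capacityProvider": "FARGATE", "weight": 1, "base": 1},  # at least 1 on-demand
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},        # rest prefer Spot
    ],
}
```

The `base`/`weight` split shown is one common pattern: keep a guaranteed on-demand floor, and let the remaining tasks land on Spot where interruptions are tolerable.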
You can also have blue/green deployments, and these are the specific configurations for the rolling update: what percentage of tasks we want, at minimum, to be running. We can also configure what we call Service Connect and service discovery. If you have multiple tasks running — different tasks with different containers that are all part of your application — and you want those tasks to be able to communicate with each other, you can enable Service Connect and service discovery in the ECS cluster to ensure that communication. Now, for load balancing: because we said the service should have four tasks, we want to place a load balancer in front of these tasks to balance traffic across them. You turn on load balancing and select either a Network Load Balancer or an Application Load Balancer, based on your requirements. You can see it picks our container by default, based on the task definition we specified, and we're creating a load balancer; if you already have a load balancer in your account, you can use that one instead. Are we together? Scroll up a little bit, let me see if I missed a step — just a little, the one before. Are you creating the service now? Yes, I'm still on the service step. Good. What's your question? I just want to know how to create it; I'm just watching as you're doing it. So with the load balancer, you select whichever load balancer type you need and create it, or if there's an existing load balancer in the account you can use that, and you give the load balancer a name. Health checks: you can leave everything as default. The listener is on port 80, because no HTTPS is available at this point in our application, and we want it to create and use a new target group.
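The rolling-update percentages and the load-balancer attachment just configured correspond to two more `create_service` parameter blocks. A hedged sketch; the target group ARN is a placeholder, and the percentages are illustrative defaults rather than the exact ones chosen in the console.

```python
# Sketch of rolling-update limits and the load-balancer attachment for the
# service. minimumHealthyPercent is the "minimum percentage of tasks running"
# knob from the console; the target group ARN below is a placeholder.
rolling_update_and_lb = {
    "deploymentConfiguration": {
        "minimumHealthyPercent": 100,  # never drop below the desired count
        "maximumPercent": 200,         # allow doubling up during a rollout
    },
    "loadBalancers": [{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/demo/abc123",
        "containerName": "demo-container",
        "containerPort": 80,           # HTTP listener only, no HTTPS yet
    }],
}
```

With 100/200, a rolling update starts new tasks first and only then stops old ones, so the service never dips below four healthy tasks.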
For this case we're creating a new target group, and then we can create our service. Now, because we've told the service to provision a load balancer, the service will create a load balancer in our account by default. I'm in Oregon, so we can see that a load balancer is being provisioned by the service — can you see? This will also provision a target group for the load balancer, and the load balancer will distribute traffic to the different tasks in our application. Are we together? I'm lost a little bit, where you have the service name. When creating your service, the service name — can I just give it any name? Yes, give it any name. Okay, thank you. Is Victor there? Francesca? Are we good? Yes, Prof, I'm watching what you're doing. So we give it a few minutes for our load balancer. Do we delete this when we finish? Once we're done, we clean it up. Okay. So this is the task we created at the start; we can see that the desired status is Running and the last status is Running, so our task is actually running. Stopped? Pardon? Why is yours stopped? I'll look into that later. So, give me a second: we had a task definition, we had our task, four of these tasks come from our service, and there is one task that belongs to no service. Are we together? Yes, Prof. So let me stop the task that is not based on the service — stop selected. You can see that we have our service, and these are all the tasks based on our service. And if we look at the load balancer, it should now be active, and we can use the load balancer's DNS name to access our application. Let's look at the target groups. In the target group we now have four targets; these are the IP addresses of the four different containers that are actually running, they are all targets for the target group, and they're healthy. So we have four healthy targets, and our load balancer should be able to reach this application. Where is the DNS name? Let me look again — so that should be the DNS name, and using it we should be able to reach our application. What is happening now? Let me look at something. This is our load balancer, right? The load balancer DNS name gives an error code. Listener rules: port 80, that's fine; it forwards to the service, that's fine. Network mapping: all good. Is this an external load balancer? Address scheme: internet-facing, that's good. Okay, this is probably the issue. This is the security group the load balancer is using. Inbound rules: it allows all types of traffic, that's correct, but look at the source — this is the problem. So let us add a rule that accepts traffic from everywhere, save the rule — and good, now we have our application. Are we together? Makes sense, of course. Any question for me? Mine, mine — with the provisioning, I don't even know what was happening. I'll look at yours later. Now, note that this is not auto scaling: it's using the load balancer to distribute traffic to our respective tasks. You can see that we created a service, the service is a replica, and it created four tasks. Are we together? Yeah. Why are you guys so quiet? We're just taking it in, Prof. If there's a question, please ask. Prof, what did you do with the security group? I just opened it to traffic from everywhere. It's not allowing me — is it the source? Yes, the source. If you look at the security group, the source is taken from the security group itself, so you need to open it to the internet, because we are coming from the internet to access it. It says "must specify an IPv4 CIDR" for the existing group reference — are you trying to edit a rule or add a rule? You should add a rule.
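The security-group fix applied above — the load balancer's inbound rule had the security group itself as the source, so internet clients could not reach it — amounts to adding one ingress rule for 0.0.0.0/0 on port 80. A hedged sketch in boto3 `ec2.authorize_security_group_ingress` shape; the group ID is a placeholder.

```python
# The rule added in the walkthrough: allow HTTP from anywhere so internet
# clients can reach the load balancer. Group ID below is a placeholder.
ingress_rule = {
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{
            "CidrIp": "0.0.0.0/0",
            "Description": "allow HTTP from the internet to the ALB",
        }],
    }],
}
```

Opening 0.0.0.0/0 is appropriate for a public, internet-facing load balancer listener; the tasks behind it should still only accept traffic from the load balancer's security group, not from the internet directly.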
Sorry Prof, again: you said we've just created a replica, which is obviously to scale the task to four. Is that the same thing Docker Compose would do? The same thing Docker Swarm would do? Swarm, yes, it's the same thing; Kubernetes will do it too, and Kubernetes will do it better. This service, in Kubernetes, would be a Deployment, and the task in Kubernetes would be a pod definition. We'll get to those things later, on Monday, hopefully. Any question for me? So this is just to handle traffic spikes, I mean, when we have — what is just to handle spikes, please? I mean, this replicating of the task, running it in four places, is for us to have traffic distributed? Yes — multiple containers being able to handle the load. Right, it's the same idea as servers: if one server cannot carry 100 people, you need two or three. As more people connect, you need more servers to handle the load. Same thing with containers: if one container cannot handle the load, the traffic needs more containers to handle it. Okay, thanks. Are we together? You can see that when we were doing this with just Docker, you were running your container on the Docker host and doing the port mapping yourself. A container orchestration platform gives you the ability to manage these containers in a better way: because we're using ECS, which is a container orchestration platform, running the same containers, we can easily run multiple copies of a container and easily load balance traffic to those copies running our application. Hello? Yes sir, we are here. Am I making sense? Yes sir. One of the beauties of services is this: we are using a service, and that service has four tasks running. Let's simulate that a task failed.
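The ECS-to-Kubernetes analogy mentioned a moment ago can be kept handy as a rough lookup table. This is an approximation, not an exact equivalence — the two platforms split responsibilities differently.

```python
# Rough mapping of ECS concepts to their closest Kubernetes counterparts,
# as discussed: an ECS service resembles a Deployment, an ECS task a Pod.
ecs_to_k8s = {
    "task definition": "Pod spec (the template inside a Deployment)",
    "task": "Pod",
    "service": "Deployment (replicas) plus a Service (load balancing)",
    "cluster": "cluster",
}
```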
Let's stop a task, as if it failed for one reason or another. The service is intelligent enough to understand that there is a problem: as we stop a task, the service should start another task to meet the service requirements. There — it's provisioning another one. Why is it not provisioning for me? Is it provisioning for you already? Yeah — he says yes. So the service is provisioning. If you have a service with a desired number of tasks, and any of those tasks fails for whatever reason, the service, as an orchestrator, reasons: I need, say, eight tasks running, and I can only find three or four, so I need to provision more tasks to meet the desired count. Are we together? This ensures you always have the desired count of tasks — the desired number of containers — running in your infrastructure to serve your workload. Are we together? Yes. Sorry — so say, for example, we specify the CPU for the individual tasks, right? If CPU utilization goes over a particular threshold, can we also configure it so that it automatically spins up another one, provisions another one? Okay, now you're talking about the EC2 launch type, and for sure you can do that. Once you're using a service, the service can do the monitoring: it notices that CPU and memory resources on an EC2 node have been consumed, but it needs to spin up another task, so it needs a new node; it triggers the Auto Scaling group to spin up a new EC2 node and places the task on that new node. Okay, so is that done in the task definition, or somewhere separate? No — the service definition. The service. All right. You remember when we were creating the service, the launch type? With the Fargate launch type, everything is handled for you in the background.
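The self-healing behavior just demonstrated is, at its core, a reconciliation loop: compare the desired task count with what is actually running and start replacements until they match. A toy illustration with no AWS calls — the real scheduler is far more involved, but the control loop is the same idea.

```python
# Toy reconciliation loop illustrating what the ECS service scheduler does
# when tasks die: top the running set back up to the desired count.
def reconcile(desired: int, running: list) -> list:
    """Return the running list topped up to the desired count."""
    running = list(running)        # don't mutate the caller's list
    next_id = len(running)
    while len(running) < desired:
        running.append(f"task-{next_id}")   # "provision another task"
        next_id += 1
    return running

# Two of four tasks were stopped; the service restores the desired count.
tasks = reconcile(desired=4, running=["task-0", "task-1"])
assert len(tasks) == 4
```

A single standalone task has no such loop watching it, which is exactly the limitation raised in the next question: if it fails, nothing restarts it.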
But that's why I said I want you guys to try the EC2 launch type. Once you create it with the EC2 launch type, you'll see that you have to configure a capacity provider for those EC2s — an Auto Scaling group that manages the scaling of the EC2 instances that are part of your ECS cluster. Are we together? Yes. Any question? Prof, when you're running a single task, does it work the same way — if it stops, does it just re-provision? No, and that is one of the issues with running a single task: if it fails, it fails; nothing knows it should provision another one. Okay, so what you're saying is, for it to do that it should be more than one? It should be a service — you should use a service. Any question? Good; if there's no question for me, then I think that's it for us today. Clean-up. To clean up, you go to the service, select the service, and go to Delete Service. Check "force delete" — hold on, there is a checkbox here, "force delete". How will I know, when I don't have what you have? I'll get to you. So, checkbox here: force delete. Are we together? Once you select force delete, it will ask you to type in "delete" so you can actually delete the service. Then go ahead and delete your service; that should clean up and take down all the tasks that are part of the service, and everything should be gone. Should I share my screen? Give me a second — I don't want your screen in the recording; I'll get to you later. Okay, are we together? So this cleans up everything; it will clean up every task that is running, and once this is done, we can delete our cluster. What about the load balancer? I'm coming to that. So this will not take care of the load balancer. Once this is done, go to the cluster and delete the cluster: you delete it and type the name of the cluster, demo in this case, and it should delete the cluster.
Once that is done, you would still find that the load balancer that was provisioned remains in the account. Please go to the load balancer itself, select it, Actions, Delete Load Balancer, and type "confirm" to delete that load balancer. Are we together? Yes, ma'am. With this, every resource you just created should be gone, and we're back to a clean state. What about the namespace? Those are things created by default; it takes some time before AWS clears that from your account, but you cannot delete it yourself. Good. The task definition is just a configuration file sitting in the account; it doesn't cost you money and doesn't do anything by itself, but if you want to clean it up, you go to Task Definitions and either mark it inactive or delete it. Are we together? I take it that leaving our image in ECR won't cost us money as well? It should be very minor — I don't know exactly, you can check, and if it does cost something, let's see. For me, I got an error deleting the cluster. Yeah, most likely something is still using the cluster; that's why. You cannot delete a cluster if a task is still running, right? So, ECR is not free, but it's also not much: $0.10 per gigabyte per month. What is 0.1? Ten cents. Ten cents. Are we good? Great, that's it; I think we can stop the recording. Christopher Aoto is requesting to share — can you share? Oh, I think mine created a CloudFormation stack. The recording is still on. Yes, a CloudFormation stack is still being created in the background. Okay, so that's why it errored out: I couldn't delete the cluster because the CloudFormation stack delete is still in progress. By default, when you create the cluster, AWS provisions it via a CloudFormation stack in the background. Are you seeing my screen? Yes, I can see your screen. The recorder is still on.