Transcript for:
Overview of Elastic Container Service (ECS)

Hello everyone. In this video, we're going to be talking about Elastic Container Service, or ECS for short. This is going to be an overview video where I talk about what Elastic Container Service is, followed by how ECS works from a user perspective.

So let's not waste any time and jump right into it. So the first topic, obviously, is: what is Elastic Container Service? ECS is a managed service to run containers, typically using Docker. So if you're a little bit out of the loop and you don't know what Docker is, Docker is essentially a tool that allows developers to package applications into containers and ensure that those containers are isolated from one another.

So ECS kind of sits on top of Docker, so it allows you to launch, set up, and monitor your Docker containers on your ECS cluster. We're going to get into what a cluster is in a few moments. So in order to run Docker containers, you need some kind of infrastructure to run them on.

And for that, there are two options. There's a serverless option, called Fargate. And then there's a self-managed option using EC2.

With that option, you rent EC2 instances and pay for them by the hour. So let me just briefly go over the difference between the two. If you're using ECS with EC2, like I said, you are managing those EC2 instances. You're applying software upgrades.

You're applying patches. You're identifying security vulnerabilities, so on and so forth. So there's a little bit of extra work for you to do if you decide to go with the EC2 option when you're running an ECS task or cluster.

Now if you're using serverless with Fargate, it's a lot more hands-off. You relinquish a lot of the control that comes with setting up your infrastructure. However, it's much, much easier to use.

So if you're just getting started, I suggest checking out Fargate, because you can really get off the ground quickly. But if you have some kind of use case that requires specific EC2 configurations, then you may want to go with EC2. Or, if you even have some EC2 instances lying around, you can always just use those for your cluster.

Now, in order to be scalable, ECS supports auto scaling, and that allows you to handle variable volume. As your traffic rises and falls, you can set up auto scaling on a certain metric, for instance traffic, memory utilization, or CPU utilization, and bring the number of containers up, or drop it down, in response to fluctuations in that metric. So this is very, very useful from an availability perspective, ensuring that your service always has enough infrastructure to keep it healthy and serving all the traffic that's coming in.
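To make scaling on a metric concrete, here's a minimal sketch in Python of the kind of calculation a target-tracking policy performs. The function name, the bounds, and the simple proportional rule are my own illustration, not the actual Application Auto Scaling algorithm.

```python
import math

def desired_task_count(current_count: int, metric_value: float,
                       target_value: float, min_tasks: int = 1,
                       max_tasks: int = 20) -> int:
    """Rough sketch of target tracking: scale the task count in
    proportion to how far the observed metric (e.g. average CPU %)
    is from the target, then clamp to the configured bounds."""
    raw = math.ceil(current_count * (metric_value / target_value))
    return max(min_tasks, min(max_tasks, raw))

# CPU is at 90% against a 50% target, so 4 tasks scale up to 8.
print(desired_task_count(4, 90.0, 50.0))   # -> 8
# Traffic drops: 10% CPU against a 50% target shrinks 8 tasks to 2.
print(desired_task_count(8, 10.0, 50.0))   # -> 2
```

The clamp at the end is what the min/max capacity settings on a scaling policy correspond to: no matter how far the metric swings, the container count stays inside bounds you chose.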

Next, ECS is great for ad hoc jobs that run on a regular basis, or full-scale services that require a certain number of containers up and running at all times. And we're going to get into a little bit about how that works in a few moments. Second to last, using ECS in combination with Docker is very cost effective. With ECS, you can host multiple different containers on a single compute resource.

So if you're using EC2, for example, you could have multiple ECS tasks, and therefore multiple Docker containers, running on that single instance. And since you're using Docker, you don't have to worry about the overhead that you previously had to worry about with virtual machines, in the sense that you don't have to spin up multiple operating systems. With Docker, you only need one active operating system, and you can have multiple containers running at any given time on that machine. So it's great from a cost effectiveness perspective, because you can better utilize the resources that are available to you and not wastefully spend them on operating system overhead.

And last, I just want to briefly touch on how ECS compares to some other options. So Lambda and EC2, how do those compare to ECS? Well, actually, I have a whole video on this where I go through the difference between these three things at some length. I'll put that video in the description section below for you to check out.

But to put it briefly, with Lambda, you're basically only worrying about code. It's a serverless option, so you're not worrying about infrastructure at all. With EC2, if you want to run your application there, you have to worry about deploying your code to these EC2 machines, running your bash scripts or whatever to spin up your processes. And ECS is kind of a middle ground between Lambda and EC2. You don't necessarily have to worry about a lot of the infrastructure as you would with EC2, but you get a lot of the advantages of containerization in combination with Docker.

So that's what Elastic Container Service is. I hope all that made sense. So let's move on to kind of how ECS works from a user perspective. And what I mean by that is like, how do people use it if you're in the console and you're trying to set up ECS?

So it all starts with you, the user. You come along and you define a Dockerfile. And this Dockerfile, I'm sure you all know what one looks like, but if you don't, let me briefly explain it.

It essentially just contains settings: the operating system that you want to run on, any dependencies that you're going to need, all the source code that you're going to be using for this image, and other things like commands that you want to run on startup. So this Dockerfile is basically just a text file in a special format. You build that into an image and upload that to ECR.

ECR stands for Elastic Container Registry. And usually when you're talking about ECS, you hear ECR a lot. And that is because ECR is the place where your images are hosted that are going to be launched onto your ECS containers. So once you have that all set up, you can also identify your images by name.

And just one small thing as well: another neat thing about ECR is that it keeps a history of all the images that you've built for that application over time. So you can quickly roll back to a previous version if there's something wrong with a new image that you just built. So just a handy little piece of information that may be useful.
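When a task later references an image in ECR, it does so by a fully qualified URI. Here's a small sketch of how that URI is put together; the account ID, region, and repository name below are made up for illustration.

```python
def ecr_image_uri(account_id: str, region: str, repository: str,
                  tag: str = "latest") -> str:
    """Build the fully qualified image URI that ECS task definitions
    reference. The hostname follows the standard private-ECR format:
    <account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

# Hypothetical account and repository, for illustration only.
print(ecr_image_uri("123456789012", "us-east-1", "my-node-api", "v1.2.0"))
# -> 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-node-api:v1.2.0
```

The tag at the end is what makes the rollback story work: each build you push gets its own tag, so pointing a task back at an older tag restores the previous version.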

After you've uploaded your Docker image to ECR and associated it with a name, next comes ECS. And with ECS, the first thing that you need to do is define a task. You can think of a task as an abstraction on top of a container. It's essentially a construct in ECS that tells ECS how you want to spin up your Docker containers.

So a task can contain more than one container. For instance, you may have a configuration where you have one task that has one container with a set of resources, and then another container with, in this case, the same set of resources. Now, why is this useful? Well, a lot of times some applications come in pairs.

For instance, what if we had an application where we want our first container to be responsible for maybe hosting a REST API on Node.js, but that REST API also needs a database, maybe something like MongoDB. We don't want to host both of those things on one container. We want to separate them out. So this is one instance where you may have, you know, container one with your Node.js REST API and then container two with a MongoDB instance. And in your task definition, you can specify ports that are open between these two containers so that they can speak to each other.

So this is what tasks are. They kind of sit on top of containers. They contain specifications on how to launch those containers, and also the settings in terms of port mappings, resources, so CPU, RAM, so on and so forth.
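To make that two-container example concrete, here's a pared-down sketch of a task definition payload, roughly in the shape the ECS RegisterTaskDefinition API accepts. The family name, images, ports, and resource sizes are all illustrative choices, not a canonical configuration.

```python
# Sketch of an ECS task definition with two containers: a Node.js REST
# API and a MongoDB instance, as in the example above. All names and
# sizes here are hypothetical.
task_definition = {
    "family": "node-api-with-mongo",   # hypothetical task family name
    "networkMode": "bridge",
    "containerDefinitions": [
        {
            "name": "node-api",
            "image": "node:18",        # placeholder image
            "cpu": 256,                # CPU units (1024 = 1 vCPU)
            "memory": 512,             # hard memory limit in MiB
            "portMappings": [{"containerPort": 3000, "hostPort": 3000}],
            # In bridge mode, a link lets node-api reach the second
            # container by the hostname "mongo".
            "links": ["mongo"],
            "essential": True,
        },
        {
            "name": "mongo",
            "image": "mongo:6",
            "cpu": 256,
            "memory": 512,
            "portMappings": [{"containerPort": 27017}],
            "essential": True,
        },
    ],
}

print(len(task_definition["containerDefinitions"]))  # -> 2
```

Notice how everything from the transcript shows up as a field: the containers, their CPU and RAM, and the port configuration that lets the API talk to the database.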

Now, after you've set up your tasks, then comes the fun bit: setting up the cluster. And with a cluster, you can essentially think of it as an abstract resource farm. So if you're using EC2 instances, you would set up your EC2 instances within this cluster. So, you know, let me grab a pen over here really quick, actually.

So you can have EC2 instances sitting inside of here, inside this cluster, and they are going to be targets for whenever you want to launch tasks. In terms of actually launching tasks or bringing up an application, what you would do as a user is take this task and the cluster configuration and say: run it on this cluster. And if you're using EC2s and not Fargate, when you register EC2s with a cluster, an ECS agent gets installed on your EC2 instances, and that ECS agent communicates with your ECS cluster and receives requests to launch any new software or do anything else that is relevant for that cluster. So in this relationship, when you specify this task and map it to this cluster, you can say: run it on one of these EC2 machines.

And if you wanna run many, many tasks, so many instances of this application all at once, then you can say run three tasks and that'll automatically be deployed to the resources that are available to it. And just get rid of that here so I don't confuse anyone. So that is one option and that's great for kind of short-lived jobs, maybe like batch data processing where you just want to spin up some containers. Then when you're done, just tear them down.
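As a sketch, asking ECS to run three copies of a task at once looks roughly like this RunTask request payload. The cluster and task family names are made up; with boto3 you would pass this to `ecs_client.run_task(**run_task_request)`.

```python
# Request shape for the ECS RunTask API: launch three copies of a task
# on an EC2-backed cluster. Names are illustrative, not real resources.
run_task_request = {
    "cluster": "demo-cluster",                # hypothetical cluster name
    "taskDefinition": "node-api-with-mongo",  # family (or family:revision)
    "count": 3,                               # three task instances at once
    "launchType": "EC2",                      # place onto registered EC2s
}

print(run_task_request["count"])  # -> 3
```

The scheduler then picks which registered EC2 instances have room and places the three tasks on them, which is exactly the "run three tasks and that'll automatically be deployed" behavior described above.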

Now there's an alternative mode to ECS, which is more well-suited for long-running applications. So things like the example that I brought up here where, you know, you want to spin up a container that's hosting an application, and you want to have a database, and you want to keep that running so that it can serve traffic for hours, days, months, years, whatever, right? So if that's the case, you would use a slightly different approach and set up a service on your ECS cluster.

Now what a service essentially allows you to do, well among other things, is specify a minimum number of tasks, and therefore containers, that are running on this service's cluster at any given point in time. So say, for instance, you have a really, really popular application and you need lots and lots of infrastructure to run it on. You would define a service that says: at any given point, I always want to have at least 10 tasks of this type, of this configuration.

Remember, a task has an ID and a name associated with it. And what that would mean is that you always have at least 10 container 1s and at least 10 container 2s running on your service.
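A sketch of what that looks like as a CreateService request payload follows; again, the cluster, service, and task names are hypothetical, and with boto3 you would pass this to `ecs_client.create_service(**create_service_request)`.

```python
# Request shape for the ECS CreateService API: keep ten copies of the
# task running at all times. If a task dies, ECS launches a replacement
# so the running count returns to desiredCount. Names are illustrative.
create_service_request = {
    "cluster": "demo-cluster",
    "serviceName": "node-api-service",
    "taskDefinition": "node-api-with-mongo",
    "desiredCount": 10,          # the "at least 10 tasks" guarantee
    "launchType": "EC2",
}

print(create_service_request["desiredCount"])  # -> 10
```

This is the key difference from RunTask: RunTask is fire-and-forget, while a service continuously reconciles the running count back to `desiredCount`.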

You also get a whole bunch of other great tools that come as part of using the service. So it'll constantly monitor your containers, make sure that they're up and running, staying in a healthy state at any given point of time. It also comes with some great metrics for monitoring and some dashboarding tools.

So you can constantly be monitoring your service to make sure it is running healthy at any given point in time. Now, if you're using Fargate, it's pretty much the same, except that you don't worry about EC2 instances. You just say, I want X number of tasks, and then, you know, AWS will go out and provision the underlying compute for you. Again, all of that is hidden from you, so you don't need to worry about it.

So the next question is: how do you make this thing scalable so that it can receive tons and tons of traffic? Well, with this service, you can also, I'll just draw this in, I forgot to label it, define a load balancer that comes with this service. So now we have multiple containers that are running within this service, so like C1, C2, C3.

You can have a load balancer associated with it. So now whenever any request comes in and tries to hit this service, the load balancer will distribute traffic to one of the different containers that are located on this service's cluster. And this is very useful from an availability perspective, making sure that your service always has enough resources to run at any given point of time. So if you're looking to set this configuration up and actually do this in real life, I have another great video where I walk you through how to do that in the console using a Dockerfile.
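Attaching a load balancer to a service is done through an extra field on the same CreateService payload. Here's a sketch; the target group ARN, names, and port are all invented for illustration.

```python
# CreateService payload extended with a load balancer: ECS registers
# each running task's container with the given target group, and the
# load balancer spreads incoming requests across those containers.
# Every ARN and name here is made up.
service_with_lb = {
    "cluster": "demo-cluster",
    "serviceName": "node-api-service",
    "taskDefinition": "node-api-with-mongo",
    "desiredCount": 10,
    "launchType": "EC2",
    "loadBalancers": [{
        "targetGroupArn": ("arn:aws:elasticloadbalancing:us-east-1:"
                           "123456789012:targetgroup/node-api/abc123"),
        "containerName": "node-api",   # which container receives traffic
        "containerPort": 3000,         # the port the container listens on
    }],
}

print(service_with_lb["loadBalancers"][0]["containerPort"])  # -> 3000
```

Only the API container is registered with the load balancer here; the database container stays internal and is reached over the task's own port mappings.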

So I'll put that here on the right so you can check it out. And if you enjoyed this video, please don't forget to like and subscribe so that you don't miss out on my next one. Thanks so much guys, and I'll see you next time.