So, most of us have at least heard of Docker, even if we haven't worked with it directly: an overview video, images, containers, orchestration, things like that. Good. Let me share my screen and we'll look at Docker.

Before we look at Docker itself: Docker is a system we use to run microservices. When you start talking about microservices, that's when tools like Docker, Podman, containerd and so on come into the picture, because those are the tools used to run microservices. So what are microservices all about? Can anybody give me a layman's idea?

One of you put it well: microservices are the different applications that together make up one big application. Take the Amazon example: the payment service, the address service, the shopping-cart service. Each of those is a microservice, and together they make up the big website you actually use.

Exactly. Now, microservices are a new way of doing things. Before microservices, the industry faced real challenges: developing, deploying, and above all maintaining applications was a big problem, and people ran into what is often called the dependency "matrix from hell". You can look that phrase up. But before we talk about dependency hell, let's look at the traditional monolithic architecture of applications.

Let's use Amazon, because almost everybody has used Amazon. You go to your browser, type amazon.com, and a page, a UI (user interface), is displayed to you. That's the front end. Integrated into that front end you have an application that handles login, one that handles user management, one that handles payment, one that handles addressing and order tracking, and so on. There's also a database somewhere, because everything you're buying has to be stored, and some form of invoicing. Now imagine that all of these different parts were developed as one block of code: one application source code handling every one of these functions. That is what we call a monolithic application, a single application that does many different things. It serves the front end, handles login, user management, invoicing. Many functions, one application.

Someone asked me to go over that one more time, so let's take it again. On the Amazon UI you have many functionalities: you click to shop, you put items in your cart, you check out, you pay. The first time you visit, you log in and create your profile. All of these are different features that have been integrated into the front end so you can use the whole Amazon service. So what you see on the UI is not a single feature; it's one application with many functionalities. Say this is our source code for the Amazon service. In that one source code you have code for the UI, code for login, code for payment, code for the cart, code for products, code for addresses, and so on. The developers develop and integrate all of these functionalities in a single code base, and when they run a Maven build (if it's Java), they get one artifact that contains all of that functionality. Different functionalities, integrated into the same deployable, the same executable. That is a monolithic application. What does "mono" mean? One. One block, one JAR.
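To make the "one artifact" idea concrete, here is roughly what that single-deployable build looks like with Maven. The project and artifact names are invented for illustration:

```shell
# One monolithic code base, one build command, one deployable (hypothetical project)
mvn package
# Maven would emit a single artifact containing every feature, e.g.:
#   target/amazon-monolith-1.0.jar   # UI + login + payment + cart, all in one file
```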
So one artifact is deployed, and when deployed it has all of these different functionalities. Now, the monolithic style of architecting applications has its drawbacks. Imagine our login service is working fine, our database is fine, user management and the cart are fine, but we want to make an improvement just to the UI, just to the front end. What happens? We need to ensure that once we change the front-end part of our monolithic code, it doesn't break login, doesn't affect people's shopping, doesn't affect the payment service. You always have to consider the integrations between the different parts of the monolith, even though the goal was only to improve the front end. Because it's a monolith, I have to think about what happens to login, payment, cart, products, and so on. That presents real challenges in handling monolithic applications.

That's the first problem. The second problem: let's assume we don't only have a monolithic application; we also have one server, like an EC2 instance, on which we're running different services. Remember our demo a while ago. Imagine that in your environment you have Maven running on the server, Jenkins running on the same server, Tomcat on that same server, SonarQube too. Maven needs Java, right? SonarQube needs Java as well. So we have one server, and on it SonarQube, Maven, Jenkins, and so on, and all of these applications depend on Java. They share Java as a prerequisite. Do we agree?

Now, for some reason our Jenkins stops working properly. We do some research and find there was a bug in Jenkins, and there's a new version of Jenkins we need to use, but that new version needs a newer Java. Say everything was running on Java 1.7, and the new Jenkins needs Java 1.12. But the SonarQube that's already running, the Maven that's already running, the Tomcat or whatever else is on that server, at this point they don't support 1.12. Can somebody see the problem? What do we have to do? Just because of Jenkins, we now have to upgrade SonarQube, upgrade Maven, upgrade everything. This is what results in the dependency "matrix from hell"; you can look that up and see exactly what it's about. You have different applications, say MySQL, something based on Node.js, something based on Angular, and they all share some dependency, some library. Because we want to upgrade just one application, we face a problem: if we upgrade Java to 1.12 because the new Jenkins needs it, then SonarQube is broken, then Maven is broken, then other applications are broken.

So now we're in a trap. How do we solve this? You can see we have a problem with the monolithic style of developing applications, and we have a problem with running all these different applications on the same infrastructure, sharing dependencies. The way forward is to break the application into multiple pieces, each holding its own dependencies, individually isolated, not dependent on the host's libraries and packages; every application is packaged so that it has everything it needs to run. How do we achieve that? By re-architecting our monolithic application into microservices. We break our monolith down into, say, five different applications: one that is just the UI, a second that serves login, a third that stores our data, and others that manage user access, invoicing, and so on. Does that make sense? So, to solve the problems presented by the monolithic style and by different applications sharing one system's dependencies, the best approach is to move to microservices: you break your application into different components, each component completely isolated from the others.
What do I mean by isolated? A component doesn't need the others to run, and it doesn't depend on the host system in terms of libraries, frameworks, or packages. If our login service needs Java, it is packaged with the Java version it needs; if the front end needs Java, it is packaged with the Java version it needs. If there's a problem with login, we work on login, and we can move login to whatever Java version it requires. That doesn't affect the front end, doesn't affect user management, doesn't affect invoicing, doesn't affect any other component. Each piece becomes a standalone component. We achieve that by splitting our source code, the one that had the UI, the login, and everything else, into different artifacts: an artifact for the login application, an artifact for the UI, an artifact for payment, and so on. And we package each of those into what we know as containers. There will be a container that serves login, a container that serves the UI, containers that serve the other components of our application. The container holds the entire component: if it's the login service, that container holds the whole login component. We're laying the foundation here. Trust me, you will still see very complex architectures where one container can run but depends on another container to do its job, but let's keep that complexity out of it for now and deal with the concept.

The goal, the best practice, of a container is that it should do one thing and do it well. What do I mean by that? If the container is a package for the front end, it should serve the front end. You should not have a front-end container that is also serving login, or invoicing, or the shopping cart. So think about the Amazon flow. You go to Amazon: what's the first thing you see? The UI. Then you have to log in. After you log in, you start shopping, and items go into your cart. After you shop, you pay. So you have all these components of the application broken into pieces. But when you go from the UI to login, the UI needs to be able to talk to the login service. So the different components of the application now communicate with each other using what we call REST APIs: they make API calls. The login service talks to the shopping cart, and all these different components of the application talk to each other through API calls.
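One container per component, each shipping its own dependencies. A sketch with the Docker CLI, where the image names and versions are invented for illustration:

```shell
# Each component runs as its own container, packaged with its own runtime
docker run -d --name ui      shop/ui:1.0        # ships the Java version it needs
docker run -d --name login   shop/login:2.3     # can use a different Java entirely
docker run -d --name payment shop/payment:1.1   # upgrading one never breaks the others
```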
Someone asked me to go over the API idea one more time. We had our monolithic application, and we've broken it into different parts. How do those parts communicate? Think about the UI first: the UI is just presentation. Even today you can see Amazon upgrading their UI, changing the interface, adding features, making it nicer. But that did not change how you log in to the application, right? Why? Because the development of the UI is decoupled from the login service, the login service is decoupled from shopping, and shopping is decoupled from how you pay. Say at the beginning you could only pay with a credit card; tomorrow you can also pay with PayPal, and later more payment methods are added. The payment service itself was improved, and that did not affect how the UI works. You have different components making up a whole application, each part independent, and you can improve any one of them without affecting the others.

You can also see how this solves our dependency hell: if the login service needs Java 1.1, it runs with Java 1.1; if the shopping cart needs Java 1.9, we give it Java 1.9. If all of these components had to depend on the Java version of the host system itself, we'd have a problem. We all agree on that.

Once you break the application into components, they need to know how to talk to each other, and they do that using REST APIs. API means application programming interface; our developers can explain it best, but you've already used APIs. When you go to Amazon in your browser and click on the shopping cart, you are making an API call without knowing it. There's a call happening in the background, transparent to you; you just click on the interface. The front end makes an API call to the shopping-cart service. When you're done with the cart, it asks whether you want to check out; clicking yes makes a call to the payment service. These API calls that integrate the different parts of the application are not set up by you, the DevOps engineer. That's the developers' job, because they're developing the application and are responsible for making sure it's integrated and functioning correctly. But we're trying to understand what happens, so we know where our job starts and theirs ends.

A quick question came up: what kind of protocol do these API calls use? To be precise, REST API calls are ordinary HTTP or HTTPS requests, the same protocol your browser speaks, but between microservices they typically travel over the internal network or within the same cluster, hitting internal endpoints, rather than going out over the public internet. Developers use SDKs, software development kits, and frameworks to wire these calls into the application. So an API call is simply how programs communicate with each other.
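Since these calls are plain HTTP under the hood, you can imitate one with any HTTP client. A minimal sketch, where the Python one-liner just stands in for a hypothetical "payment service":

```shell
# Stand-in "payment service": any HTTP server will do for this demo
python3 -m http.server 8080 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# A REST "API call" is just an HTTP request to a service endpoint
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/)
echo "service answered with HTTP $STATUS"

kill $SERVER_PID
```

In a real microservice setup the endpoint would be an internal service address, not localhost, but the mechanics are the same.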
Applications have to be able to recognize that another application is trying to communicate with them, receive the request, and send a response back. Any questions? What happens if that communication fails, if something is broken in the communication? You, the user, only see in your browser that something isn't working. Maybe one or two of you have experienced this, on Amazon or some other service: you're logging in, you put in your username and password, you click enter, and nothing happens. Everything is just erased and it keeps asking for your credentials again. Quite possibly some API call behind the application isn't working. In that case, what do they tell you? Create a support ticket. Once you do, a Jira ticket is opened, a developer is paged that login is not functioning as it should, and they start checking at the level of the program itself: are these two components unable to talk to each other?

Then came a good question: you mentioned that the whole block of code gets separated, the login, the user interface and all the rest, into different containers, so they're not tied together and one can't break the other. Do you have to do that separation manually, or do you write one big block of code and a tool does the separation for you?

As a DevOps engineer or a cloud administrator, that separation is not your job. This is what we call application architecting. Have you seen job postings on LinkedIn for an application architect or software architect? That's the role. Software architects are responsible for architecting the application: they decide how the software should be broken down into its components. Once that's decided, the developers are responsible for ensuring that, after the application is broken into 5, 10, 15, 20 components, each component can reach the other components it needs to function. For example, we've taken the UI out of our monolithic application, but from the UI you should be able to talk to the payment service, so the payment service and the UI have to understand each other's API calls. You make a call to payment, payment says everything is fine, and once you pay, you see a response on your UI, "your payment has been successful", because the payment service sent a response back to the UI service. Those two containers are talking to each other in the background; the moment that communication breaks, or one can no longer understand the other, you and I start seeing errors on the UI.

So, back to the question: yes, every component of the monolith is taken out and put in its respective container. Now, like I said, there are environments where the developers have already written the source code and know exactly which dependencies it needs; they prepare the source files and everything else, and you as the DevOps engineer are responsible for building the image. In other environments you're responsible for picking up a ready-made image and using it. You might be responsible for setting up the pipeline that builds the image and pushes it to Docker Hub or to ECR, or you build and push it yourself. But picking that image up from the repository where it's stored, putting it into the cluster so that it runs and is exposed to the outside world, to customers: that's your job. Developing the application source code itself is not your job.

Someone asked me to spell the job split out again. Generally, you are not responsible for writing application source code; I never have been. If I deploy the application, and you go to the UI and it isn't functioning properly, I investigate: why is the pod itself not running? Is it a lack of resources? Then that's my responsibility to fix. Is it leaking memory, consuming more resources than expected? Then I go back to the developer: this thing is leaking memory, it's consuming more than expected. One application on a server shouldn't consume all the memory or CPU on that server, unless the developers are aware of it and that's the application's actual requirement. But writing that code is not your responsibility. You might, however, be responsible for building the image, and that's what we'll talk about today. If you as a DevOps engineer are tasked with building an image, then, in good DevOps culture, you talk with the developers and they present everything needed to build it: the source code itself, the dependencies, the user the image is supposed to run as.
With all of that, you write a Dockerfile, you build the Docker image, and you push that image to wherever it's supposed to be. A container-orchestration platform can then pick up that Docker image and run a container, and you're responsible for setting up everything that makes whatever is running in that container available to the outside world. Thank you for the question. Any others? Good.

So now we've seen the problem, and why we're moving to microservices. Once we move to microservices, one of the best guys on the job is Docker, because with Docker we can package our applications into separate deployable units. That's why we're looking at Docker today.

So what is Docker, generally? Docker is a platform for you to build, ship, and run applications. What do I mean by shipping? Let's use a real-life scenario. Most of you are in the US: if you want to send something back home, you look for a shipper, they put it in a container, and they ship it home. They're shipping goods; Docker is shipping software. With Docker you build your application and put it in a Docker container, and once you've built that container and it runs on your laptop, it can run anywhere. Why? Because the image carries all of its dependencies, everything it needs to run. Am I making sense?

A question came up: what if the container was built on Windows? Windows containers, to be honest, I've not used. Windows becomes the OS in that case: if you build a container that's supposed to run on Windows, you can run it on any other Windows host. The point of a container still holds: if you test it on your Windows machine and it runs perfectly, you can spin up a Windows machine in the cloud and it will run there, because the container has everything it needs. But can you run it on a Linux system? No. A Windows-based container is a Windows-based container; you can't just swap the OS underneath and run containers like that. I've seen blogs about Windows containers, but they're not widely used; some long-time Microsoft shops are comfortable with them. And as a cloud engineer I don't mind where you, the developer, want your application to run. If you build it for Windows, my job is to provide a Windows machine and ensure the thing runs there. If it's not running, the container will give us logs showing why, and if it's failing because of application errors, we get on a call immediately: your job is the application, my job is to run the container. Once it's tested on the developer's PC and running, it should run on my cloud VM.

Okay, good. Now let's look at Docker itself. Docker, like I said, is a platform for shipping and running containers, and with Docker you can separate the application from your infrastructure, because every application now carries its own dependencies.
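I said earlier that you write a Dockerfile from what the developers hand over. Here is a minimal sketch of what such a file could look like; the base image, file names, port, and user are all assumptions for illustration:

```dockerfile
# Hypothetical login-service image: it carries its own Java, so the host needs none
FROM eclipse-temurin:17-jre         # the Java version THIS service needs
WORKDIR /app
COPY target/login-service.jar .     # the artifact from the developers' build
EXPOSE 8080                         # the port the service listens on
USER 1000                           # the user the image is supposed to run as
CMD ["java", "-jar", "login-service.jar"]
```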
Once you install Docker on your system, depending on the edition you install, you get what we call the Docker Engine, and the Docker Engine comes with the Docker client and the Docker server. There's also Docker Desktop, which is a UI you can open to see a console with your containers, your images, your volumes, and so on. So once you install Docker, you have the Docker CLI and the Docker host, the Docker daemon. The daemon is generally known as the server. What is the client? The client is just like your AWS CLI: once Docker is installed and the client is available, you can start running Docker commands, docker run, docker build, docker pull. When you run a command from the Docker client, that command is sent to the Docker daemon, which actually executes what you're trying to do.

Someone asked me to take that one more time. Docker is based on what we call a client-server architecture. The client is the part you interact with, where you run docker run, docker build, docker inspect, docker image ls, and so on. That's the CLI on your terminal, the binary that lets you, a human being, issue commands. But the client has one job: passing on your intent. What actually executes is the server, because it's via the client that you tell Docker what you want. You want to build an image? You tell the client, and the client sends that instruction to the server, which executes it. That server is what people generally call the Docker daemon. The daemon is responsible for executing whatever you intend, and you pass that intent to the daemon via the client.

Then a question about the Docker host: is it a regular server like an EC2 instance, is it a container, what is it? When we say Docker host, we're talking about wherever the daemon, the server, is running. When you install Docker on your EC2 instance, you get the client and the server on the same machine. But you can also have architectures where the client is in one place and the server is in another. What do I mean? You can have a central Docker server used for building images. Say you spin up an EC2 instance and install Docker on it; that's your server, your remote Docker daemon. Because I have a Docker client on my local laptop, I can run Docker commands that talk to that remote daemon. That's possible. You can also simply have the client and the server on the same machine, which is our simple architecture here. But if you want the client to communicate with a remote server, you need to tell Docker where to go, and you'd still run the same commands.
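Both setups side by side, as a sketch; the IP address is a placeholder for your own daemon host:

```shell
# Client and daemon on the same machine: just run the command
docker ps

# Client on your laptop, daemon on a remote EC2 instance:
docker -H tcp://1.1.1.1:2375 ps      # 2375 = unencrypted, 2376 = TLS

# Or point the whole shell session at the remote daemon:
export DOCKER_HOST=tcp://1.1.1.1:2375
docker ps
```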
commands what is it for example Docker run but I need to tell it tell it which server it has to communicate so you can use the- S flag which Bas generally means host and this host now will be my what 1.1.11 that will be the IP address and I need to tell it the port where Docker is available and I think the docker Port is two three 2375 I think and 2376 so this will be for um unencrypted communication unencrypted is it spelled again encrypted and this will be for encrypted communication I don't want to go into this details now because the encryption and un encryption is how you configure your dock once you installing it let's forget about it this is our intro so let's yeah so that's that means we are getting into security right yes yes so that's going to confuse people so but basically what I want you to get is you must not have the client and the server on the same PC right you you are able to communicate from your because you have Docker on your local PC you have the client if you want to communicate to another servant which is hosting Docker and building your Docker images then you can basically tell you where to go and actually where to make that communication and do that building together yeah I was just waiting for you to say I together so that I can latch on to some some question I need to ask um I don't want to take everybody back but in my head I'm trying to figure out so Docker in itself right is a platform that does a number of things including Building images right uh I'm I'm struggling with the connection with Docker and containers um since the output from Docker really is Docker images right so where does the containerization come into this conversation at all that's where I got lost so I think I'm going to answer that question very soon actually I think my next picture or something will say something like that second question is second question I know I I'll wait for that one but the second question is this Docker does certain things I mean on on a 
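The remote-daemon setup just described can be sketched on the command line like this (a sketch only: 1.1.1.1 is the placeholder IP from the example, and 2375 is the conventional unencrypted daemon port, 2376 the encrypted one):

```shell
# Point a local Docker client at a remote daemon with -H:
docker -H tcp://1.1.1.1:2375 ps

# The same thing via an environment variable, so every
# subsequent docker command talks to the remote daemon:
export DOCKER_HOST=tcp://1.1.1.1:2375
docker run nginx
```

Either way the client is unchanged; only the address it sends its API calls to is different.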
But the second question is this: on a list, what are the kind of three things Docker as a platform actually does for us in this DevOps journey, so I can put it into some pigeonholes and understand what it actually does?

Docker does two main things: it can build images and it can run containers. Docker can run containers because it has a runtime, called runc. But I would say 80 to 90 percent of people are not using Docker to run containers in production. Obviously testing environments, small teams and developers use Docker to run the containers they built with Docker, but for running and orchestrating containers, that's where Kubernetes comes in. Kubernetes doesn't build containers, it doesn't build images; whatever is built in Docker is run in Kubernetes. Does that answer your question?

No, it makes me more confused, even. "Whatever is built in Docker is run in..."?

Docker builds the images, and you run them in Kubernetes. Docker obviously can also run them, but Docker is not as good at orchestrating containers as Kubernetes. So we go to Docker and say, please build this thing and give us the image; we take that image and we put it in Kubernetes, because Kubernetes is the master at orchestrating containers. But Docker also has a feature that runs containers, and there's a difference between running a container and orchestrating it. What do I mean by running? You can build a container, run it, and it serves you. Say I have an application, one container serving a UI, and I teach ten students: that one container can support that. Now I have a thousand students; there's a load on that container it can't handle. Just running that one container in Docker will not handle the traffic from a thousand students. I need a guy that understands a thousand students are trying to talk to this container and does some scaling, and that's where Kubernetes comes in: it has the power to do all that. Docker also has something, but it's not as feature-rich as Kubernetes; it's called Docker Swarm, if you've heard of it. We are not going to cover Docker Swarm, we don't have time for it. So with Docker you can build images and you can run them. Did I answer you?

Yeah, it did, thanks. Before you proceed: so what I'm understanding is that typically, organizations build an image with Docker and then push it to Kubernetes to do the orchestration? Exactly. And it's not only Docker that builds images; there's also something out there called Podman. Podman also builds images, but I'd tell you 80 or 90 percent of people are using Docker, because Docker started this whole thing. Let me put it this way: it's the Jenkins of containers. Does that make sense?

Sorry, one more question. Every time you mention Docker images, in my head I'm picturing an EC2 image, an AMI. That's not the image we're speaking of, is it? Obviously it's not an AMI, but it is also an artifact that you can use to spin up the application. It's a packaged artifact for your application, because inside the image you have the application, the dependencies, everything it needs to run. Everything is built inside; all you need to do is deploy it.

Is it like a JAR, like the thing we did in CI/CD where you get a JAR file as the final artifact? It's the final artifact. The JAR in that case was your deployable; the image in this case is your deployable, because you've already packaged in that image everything required for the application to run. The source code? Inside. Does it need Java? Whatever version of Java is needed is installed in that image. Does it need other things to talk to? Also packaged in that image. Everything is packaged before it is built, so that image is ready to run. Got it. Thank you.

So we're seeing that you have the Docker client, you have the Docker host with the Docker daemon, and now you have what we call container registries. We'll look at those: registries like Docker Hub and ECR are basically where you keep images. You can build an image and it stays on your laptop, but do you want to share it with the world? You need to put it somewhere so that others can access it, right? If it's only on your local PC there's no way on earth I can get to it, unless you make your PC a server and open it up to the public.

Good. Now, what happens when you run a Docker command? This is an example of a Docker command: docker run nginx. nginx in this case is our image; we'll get to this, just hold on. This is the command we run from our client, because like we said, once you have the client you can tell Docker what to do, give it instructions. We'll break down this command in the hands-on part. Once you send an instruction from the client, the client makes an API call, because like I said, the client and the daemon need to talk. docker run goes to the daemon, docker pull goes to the daemon; all these things talk to the daemon. And what does the client on your PC do? It converts the command you've given Docker into an API call the daemon can understand.

So the Docker daemon now checks whether that image, nginx in this case, is on your host system. If nginx isn't there, it pulls nginx from a registry. Like I said, nginx here is not the full name of the image; it's actually docker.io/library/nginx plus the version, but we'll get to that when we look at images in detail, so let's move ahead. The daemon checks if the image is available, and if it's not, it pulls it from a registry where it can find it. If you do not pass a specific registry for that image, the daemon by default assumes you're talking about Docker Hub, so it goes to Docker Hub and looks for it there. Once it finds the image, it brings it down and converts it to an OCI-compliant bundle. OCI is the Open Container Initiative; it's basically a set of standards for building images, so that all images follow the standard and can run everywhere. If you build your image with Podman, I can run it in Kubernetes, I can run it in Docker. OCI is just a standard that says all images should conform to this pattern so we can run them anywhere.

The Docker daemon also comes with what they call containerd; before containerd we had dockerd. containerd and dockerd are called container runtimes, and the container runtime is actually responsible for running the container. The container runtimes depend on something called runc, which provides the low-level binaries that actually make the container runtime do its job.
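The docker run flow described above, client to daemon to registry, looks roughly like this on the command line (a sketch; the pull output is abbreviated from memory):

```shell
# The client converts this into an API call to the daemon:
docker run nginx

# If the image is not on the host, the daemon pulls it first.
# You would see something like:
#   Unable to find image 'nginx:latest' locally
#   latest: Pulling from library/nginx
# Note the expanded name: the registry defaulted to Docker Hub,
# the namespace to "library", and the tag to "latest".
```

The same docker run then hands the unpacked OCI bundle to the runtime (containerd/runc) to actually start the container.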
In the background it uses what we call Linux namespaces and cgroups and things like that; I don't want us to get into that detail. What I want you to get from here is: once you run your command, the client converts it into an API call that the daemon understands. The daemon then does the job: it checks if the image is available, pulls it if it's not, and if it is available the daemon tells containerd that the image is there and needs to be run, and containerd runs it. If you want to push an image to a registry, the Docker daemon is also responsible for sending that image to the registry, checking if the image is there, and so on. Does that make sense? Yes.

So in simple terms, your image here is your nginx? In simple terms, yes, this is our image. And the registry is just like a store where you put all your completed images. You would only talk to the registry once you have something to keep there, right? Like what we used for Maven. Nexus? Nexus, yes.

OK, sir, are we going to find nginx in ECR, or is it just going to be in Docker Hub? If you want to put it there, you put it there, but Docker Hub is the public one. Can we get it from ECR or not? Like I said, I want to get into it later, but this name nginx is actually short for something longer. Let me try to write this out clearly; sorry for my writing. What is happening here? The full name is actually docker.io/library/nginx:latest. Why? Because once you just say "I want nginx" and you don't tell it the registry to go to, docker.io is the registry. In that registry you have what we call a user space: for example, when you create an account you'll have one, so the user space might be "maua" or something like that. Then comes the name of the image I'm looking for, and then the version, the tag. So if I just say I want nginx and don't tell it the registry, Docker assumes I'm talking about Docker Hub. If I don't tell it the namespace, it assumes it's looking for that image in Docker's public namespace, known as library. And if I don't tell it the version, it assumes I want the latest version. If you don't want the latest version, you tell it something like nginx:3.1, and it will look for nginx version 3.1, still assuming the public library on Docker Hub. If you want the image from ECR, you have to put in the full name: you tell it the registry to go to, the user space, the name of the image, and the version you want.

I have a question. Sure. I'm trying to wrap my head around nginx specifically. When you say the nginx image, are the nginx images different from repository to repository? Is that what I'm hearing, that there isn't just one? Well, there will be versions of nginx, I know, but will the nginx in, say, ECR be different from the nginx in Docker Hub? nginx is owned by whoever is responsible for building nginx, and they have what we call a publicly available image. Where is that image? They'd have it in Docker Hub. When I stop sharing my screen I'll show you what I mean: if you go to Docker Hub and look for images, you'll see nginx, and it will tell you it's the official nginx image. Most official images are in Docker Hub. Why? Because by default, Docker runtimes, when they run images and you don't specify where to look, point to Docker Hub. Now, if you as an individual or a company want your own version of nginx in ECR, you can pull that nginx image from Docker Hub and push it to ECR. Then you tell your Docker runtime: please use my own version of nginx; don't go to Docker Hub, go to ECR, go to this repository, and look for the image called nginx. If the tag in ECR is latest, it picks that; if there's another tag, you have to pass the tag for it to look for.
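The naming rules above can be condensed into a short sketch (the account ID and ECR URL below are made-up placeholders, not real values from the session):

```shell
# These all refer to the same image on Docker Hub:
docker pull nginx
docker pull nginx:latest
docker pull docker.io/library/nginx:latest

# Pinning a specific tag instead of latest:
docker pull nginx:1.25

# Pulling your custom copy from a private registry
# (placeholder ECR registry, repository, and tag):
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-team/nginx:latest
```

Whatever part of registry/namespace/name:tag you leave out, Docker fills in with its defaults: docker.io, library, and latest.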
And you'd do that because you want to customize the nginx image. As we get to images, you'll see that images have a base, and from that base you add your own customizations to suit your application's needs. Once it suits your application's needs, that's what's ready for you to deploy, and you need a place to keep it so you can continue using it. Do you want to make it public? Then you make it public. But you and I cannot push to Docker's public library namespace; only Docker can do that. So even if you want to use Docker Hub for your custom nginx image, you'd pass in a username, and you'd tag the image in a specific way so Docker knows to go to Docker Hub and look for your image there. Are we together? Yes. We're processing; if we're not, we'll tell you.

I have a silly question, though. This is the intro, right? We haven't really entered the heavy lifting yet? This is just the beginning. We cannot do a full Docker deep dive, but I'm going to cover every aspect of it and how you use it. If you take a look at the Docker documentation, which I'll show you, you'll see it's just like Jenkins or any other tool we've been talking about, but I'm going to give you the main concepts of it. OK, thanks.

Now let's talk about some major concepts of Docker: images, which we've been talking about. What is an image? We said an image is a standardized package, a template, that includes all the files, all the binaries, all the configuration, everything needed to run a container, to run your application. Are we together? Good.

We have some important aspects of images. Once you build an image and push it, you cannot run that image and start making changes to its base layers: images are immutable. Once an image is built, you cannot just make a change; if you need a new version or a change, you build a new image. That's what "images are immutable" means. Also, container images are composed of layers. Let's look at that, starting from the Dockerfile, because Docker images are built from what we call Dockerfiles. Are you there? Am I too fast? Is this too much? OK, good.

So images are built from Dockerfiles, and a Dockerfile is basically a document that is used to create an image. In that Dockerfile you specify the instructions you want in the image: do I want to install some packages? You put that inside. So this is an example of a Dockerfile; sorry, an example Dockerfile being used to create a Docker image. The Dockerfile provides all the necessary instructions: the files to copy, the commands to run, and so on. In this Dockerfile we're saying: please use python:3.12. Who can tell me what python:3.12 is? It's another image, the base image. This is what we call the parent, or base, image. We're building our own image from another base, just like what Victor was saying. What does this mean? If I want to build an application that needs, for example, HTTPD, an HTTPD image already exists out there that the HTTPD maintainers already built and packaged. I don't have to build that image again; I can just call the HTTPD image and add my own application on top. If I'm building a Python application, it will need Python, for sure. I don't have to build an image that provides Python; there are images out there that already have Python and all its dependencies, and I'm just calling one here. So the FROM instruction says: go look for the base image python:3.12. That's the base image.

Then we set a working directory with WORKDIR; the working directory is basically where our application will live inside our image. Then you have COPY: the COPY instruction here copies requirements.txt, which holds the requirements for our application. There will be a file in our build context (we'll see this in the hands-on) called requirements.txt, and we're copying that file into our image. Please, if something is not clear, stop me; this is important.

The RUN instruction is used to run commands inside the image. The WORKDIR creates a directory, and the COPY instruction copies from a source to a destination: the source is on our host file system, and the destination is inside the container's, or image's, file system. Just as on your Linux machine, if you want to update the package manager for Ubuntu, what command do you run? apt update. Exactly. If you want to run that same apt update inside the container, you write RUN apt update, and it executes that same command in the container. I'm saying this so you understand what RUN is doing. Similarly, in Linux, if you want to create a user, what's the command? useradd. So RUN useradd app will create a user called app inside my container. If you then want to use that user, you pass in the instruction USER, and if you want to define the command that runs inside the container, you pass in the instruction called CMD.
If you go to the Docker documentation, you'll see a reference with a whole bunch of these, three or four hundred instructions and options you can look into and see how they're used. Any questions?

Yeah: where does the requirements.txt reside? The COPY instruction means copy something from the host file system, the file system on the host itself, into the container. So I'm saying copy requirements.txt; there will be a file called requirements.txt. I have a very similar image that we're going to build in the demo and you'll see what this actually means. It's copying that file from the host file system into my container, because the application needs it. So here I have my base image, I'm creating the directory where the application will run, and I'm copying the dependency file; you see "install the application dependencies"? The requirements.txt holds all the dependencies this application actually needs, and we're copying it into the container.

And where do we copy it to? The dot. What is the dot? You guys did Linux. The current directory? The home directory? It's the current working directory, where you are at the moment: the present working directory, pwd. So is it going to be /usr/local/app, is that where we're copying it to in the image? Yes, because what directory are you working from? Wherever you are at the moment when the command runs, which I'm thinking will be the WORKDIR; I'd probably need to check, but you get the gist. On the command line the dot is literally where I'm at on my system, so the working directory will probably have changed you into /usr/local/app; we copy the requirements in there and install them. Then you copy the source: this source is also a directory on the local system, our source code, and we're copying it into a directory in the image. Then I want to expose a port: on which port will our application be accessible? I'm saying port 5000. If this was Jenkins, what would the command be? 9000? No; if it was Jenkins I'd be exposing 8080, exactly. And I'm creating and adding a user, using USER to run the commands as that user, and CMD gives the commands that will be used to start the container at runtime. Does that make sense? Yes.

So, Prof, all the dependencies found in the requirements.txt: is it remote or local, where are you pulling it from at this juncture? That's not what we're looking at now, but everything in this requirements.txt will probably be given to you by the developers. Now, if you're running this in a CI pipeline, this is most likely a file that's also in the same folder in your GitHub repository, and you tell it the build context where the pipeline runners are working, so it knows where to copy all this information from. I don't want us to get into that complexity now; I want you to understand what a Dockerfile is all about. Is this making sense? Where is Pela? I thought she started with us. She left to the background a little bit, but I'm here. OK, good. Are we good? Yes.

So: each instruction in this Dockerfile creates what we call a layer in the file system. Each instruction, RUN creates a layer, COPY creates a layer; all of these create what we call layers in the image. And there's a reason, an advantage, to this architecture. Let's assume I already have this image with its different layers: we're still based on the same parent image, still using the same WORKDIR, still installing the same requirements, but we just want to change our source code. We built a first image, call it version one; then we updated our source code and want to build version two of this image. Because Docker images have this layered architecture, you'll see that all the earlier layers are already built, and we're making a change only at the level of our source code. What happens? Docker is not going to build the previous four layers again; it recognizes that a change only happened at that layer.
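Putting the walkthrough above together, the Dockerfile on the slide looks roughly like this (a reconstruction from the discussion; the exact WORKDIR path, source directory, and app entry point are assumptions):

```dockerfile
# Parent/base image: Python already installed
FROM python:3.12

# Working directory inside the image (assumed path from the discussion)
WORKDIR /usr/local/app

# Install the application dependencies
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the source code
COPY src ./src

# Port the application listens on
EXPOSE 5000

# Create a non-root user and run as it
RUN useradd app
USER app

# Default command at container start (entry module is an assumption)
CMD ["python", "src/app.py"]
```

Each instruction here is one of the layers discussed: change only the source code, and everything above the COPY src line comes from cache on the next build.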
It's going to rebuild only that layer, layer five if we call it that, and every layer after it. This is what we mean when we say the layered architecture leads to fast builds. Pardon? Are you guys tired? Oh, it's not you. It's a little hard to get completely; just a lot to process in one sitting, so much information in one session. I get it. Any questions? If not, we can move to the next concept, the next Docker object, which is actually containers. We've already been talking about them: containers are runnable instances of an image. Containers are isolated, self-contained, independent and portable. Portability here means you can take that container, whether it's running on a normal Linux PC or a virtual machine, and run it anywhere, and it has everything it needs to run.

OK, taking it back: so in that Dockerfile, each line of code is actually a layer, is what you're saying? Each instruction in a Dockerfile like this is going to produce a layer in the image, yes. And if you have too many layers, you make your image bigger, and that's not good. We're not talking about Docker optimization now, but there are things like multi-stage builds and wrapping commands together. For example, let's use apt again: you have apt update, and what's always the next command after update? Install. So I could write RUN apt update and then RUN apt install, and both commands would produce two different layers in my Docker image. I can optimize this by putting them in one instruction, running apt update and apt install together, and now I have just one layer in my Docker image. This reduces the size of the image, and because the size is smaller, it's faster to start, and so on. We're not covering optimization yet; I want you to get the concept first, then we can look at how these things are optimized.

Now, a question people are often asked in interviews: how can you reduce your Docker image size? There's something in Docker called multi-stage builds; please look that up. Basically, I can build in one stage and use the result in another stage. Say my Dockerfile had twenty different instructions: I wanted to install Maven, build my application and produce a JAR, some artifact, and I need that artifact to continue the build process. I don't need all those build layers in my final application image, right? So the first stage installs all those dependencies and compiles my source code with Maven, which I also installed in that stage. Once the source is compiled, I can discard the rest and carry only what was compiled into my next build stage, which actually runs my application. OK, that's too much; I'm trying not to overload your brains. Are we together? Yes.

What would actually have helped me, I don't know about everybody else: is there prerequisite knowledge we need for this to really sink in before we load up on these concepts? Is there anything fundamental? These are the fundamentals. Then we're in trouble. No, you're not, Vic; you'll get used to it. That's why, if you remember, we keep saying Linux is everywhere, right? So let me say Linux is your fundamentals.
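Circling back, the two size optimizations mentioned above, merging RUN instructions and multi-stage builds, might be sketched like this (image names, paths and the artifact name are illustrative, not from the session's slides):

```dockerfile
# One layer instead of two: chain apt commands in a single RUN
RUN apt-get update && apt-get install -y curl

# Multi-stage sketch: build with Maven, run with only the JRE
FROM maven:3.9 AS build
WORKDIR /build
COPY . .
RUN mvn package            # produces target/app.jar (illustrative name)

FROM eclipse-temurin:21-jre
# Carry over only the compiled artifact; Maven, sources and all the
# build-stage layers are discarded from the final image
COPY --from=build /build/target/app.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]
```

The final image contains only the second stage's layers, which is why multi-stage builds are the standard interview answer for shrinking images.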
But there's nothing about Docker itself that you needed to know before we started this. OK, I see what you're saying: Linux was the fundamental. And concepts like CI/CD are still fundamental; there's nothing about CI/CD here, I know, but if you understand the concept, the whole process, you can still make sense of it.

So, was this too much today? Can I just run through what I understand from this Dockerfile and you tell me? From what I get, it's saying: pull an image which already exists, in this case python:3.12. Exactly. Once you pull that image, create a directory, right? Yes, app. Once you create that directory, copy the requirements. Right, so the requirements file is copied, and then it runs the... this is where I get lost. Does it install the image based on the requirements, or what are we installing? It installs what is inside the requirements. If the application needs some other dependencies, some other software, they'll all be listed in the requirements file, and I'm installing those requirements. pip is a package manager, just like apt; I don't know if some of you have come across it. Got it. And then it copies the source code from our local host into the image, and then it exposes a port on the container, making that container available. Exactly: open port 5000. If it was nginx, I'd say EXPOSE 80, because nginx is available on port 80, for those who've used it. OK. And then you add a user, app. Now, if you do not add a user, by default your container will be run as the root user. If you don't want the container run as root, you use the USER instruction, and this tells Docker: when you run this container, use the user called app, which is the user we created before.

I think you're a little bit lost at that point: at that point, is the container running? No, we're building an image; we're still building the image. OK, so what does that CMD do? CMD is the command that will be used to initialize the container on startup. So the CMD instruction gives it a default command? Yes, it's like a default command. You can bypass it when you actually run the container and deviate from the default, but this is the default command. What I want you to get from here is not the specifics of this particular Dockerfile but the structure of Dockerfiles and how Dockerfiles are built.

Let me see if I can find something. Sorry, what language is this file written in? One sec, please. OK, so if you come to the Docker documentation (I'm using my tablet) and you go to Manuals, or you go to Reference, you can see the Dockerfile reference, the different instructions and what they actually do. WORKDIR changes the working directory; USER sets a user; there's SHELL, there's RUN, there's FROM. Can you see my screen? Yes. So this is the reference; this is where these things come from. ADD adds local or remote files and directories; CMD specifies default commands; COPY copies files and directories; ENTRYPOINT specifies the default executable. You can see what each one does. What about ARG? You skipped that. I don't want to go through it completely, but yes, ARG basically defines build-time variables, something like that. You can deep dive into every instruction. If you want to set environment variables inside your image, you use the instruction called ENV.
You want to expose a port? Use EXPOSE: describe which port the application is listening on. Okay. If you want some health checks, there's HEALTHCHECK; if you want some labels, there's LABEL. These are all the instructions that you pass into the Dockerfile, and what each actually does. Okay, are we together? Yes. So after this class, just go online, pick a Dockerfile, and you can see what it looks like. Every Dockerfile will have these instructions; every Dockerfile must have these instructions, and all of them are coming from here. Nobody's inventing them; Docker already told you how you're going to use them. So if you go back to my Dockerfile, there's nothing in it that is not present here. You can take this Dockerfile, come back here, and actually see what is happening. Where is my thing? Okay. Yes, bro? Yeah, but you didn't answer the question: what is the actual language this thing is written in? Is it just plain text, or is it a language? This is not really a language; I would say this is text. It actually does not have an extension, to be honest. Give me a second, let me check; I never thought of it that way. Because once you want to create a Dockerfile, you just need to give it the name Dockerfile. It doesn't have an extension; if you give it an extension, then there's a problem, and Docker will not understand it. But then there's a format in which it will understand it; like you said, it's structured in a way that Docker understands. I'm just thinking, because Docker is actually built with Go. Go, yeah, the Go language, Golang. No. So I just did a quick search, and it says that a Dockerfile is not a programming language but a domain-specific configuration file. Domain-specific configuration basically means it is a syntax that is specific to this application; only this application will understand it. Okay.
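To tie the reference entries just listed to concrete syntax, here is a small hedged fragment; every name and value is invented for illustration.

```dockerfile
ARG APP_VERSION=1.0                  # ARG: build-time variable
ENV APP_ENV=production               # ENV: environment variable baked into the image
LABEL maintainer="class-demo"        # LABEL: arbitrary metadata
EXPOSE 5000                          # EXPOSE: document the listening port
HEALTHCHECK CMD curl -f http://localhost:5000/ || exit 1   # periodic health probe
```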
But it's written in the Go programming language, right? This is not Go. Docker in the background itself is Go, just like Kubernetes is Go, but Kubernetes is all YAML. So the Dockerfile itself is not a language; it's domain-specific text, just like you had the Jenkinsfile. The Jenkinsfile itself is not really a language either; it has some sort of Groovy syntax, but it's domain-specific to Jenkins. Why do we say Docker is written in a programming language? Because the application, the Docker you installed in the background, is Go. But this file is an instruction you're giving to build an image; it is not the Docker application itself. The Docker client, the Docker daemon which you're communicating with, that's what is built in Go. This is not Go at all; if you've ever seen Go code, this is not Go. Are we together? Yes, bro. Yeah, so just do a Google search and look at an example of Go code; you'll see something completely different from what you're looking at here. So Prof, let me tell you why I'm struggling here. Yes. It's not a language; it has a structure, and we have to understand what this structure does so that we can also write it, right? Yes. And this is written not in VS Code; it's written in plain text, something like plain text, right? You can use VS Code. Use VS Code, yeah. You can use whatever you want, but make sure that the name is Dockerfile, with a capital D, and no extension; if not, Docker will not build it. Okay, so it's like a Jenkinsfile. Yes, it's like a Jenkinsfile, only the format is different. The uppercase is for the commands, the Docker instructions, the instructions you're telling the daemon to execute. All the instructions are in uppercase, as I showed you in the documentation; that is how you pass them, and those instructions must be at the beginning of the line. Yeah, I had a quick question: what's the difference between CMD and ENTRYPOINT?
Good question, because they are often used interchangeably, but I think there's a slight difference; I'll check that, to be honest. The docs say CMD specifies defaults, and ENTRYPOINT is different. So an ENTRYPOINT allows you to configure a container that will run as an executable. There are possible forms for ENTRYPOINT and for CMD; anyway, we can check that later. They have a slight difference. Can they both be used? I do not think you can use both; one will override the other. Good, are we on the same train now? Can we continue? Yes, bro. Yes, sir. Let's get it. So just FYI, please: you don't need any prior basis; you don't need to feel like you needed to know something before starting this Docker material. I started it from 'what is Docker', basically. Prof, will Docker be the prerequisite for Kubernetes? Images are the prerequisite for Kubernetes. Where do you build that image? Docker, Podman, whatever; it's an image. That's why we said these images have to be OCI, Open Container Initiative. They have a standard: wherever you build that image, it must be compliant with the OCI standards, so that any orchestration platform can run it. Besides Kubernetes, what platforms can run them? Yes, there's something called Rancher, I think. Yeah, so many. I think it's Mesos. Yeah, a lot of things out there. I heard about one called Mesos yesterday; I'd forgotten the name. Mesos, yes. And obviously ECS; so ECS also runs containers, right? ECS is native to AWS. Then there's also OpenShift; there's a container platform on OpenShift. Yeah, a bunch of them out there. Okay, yes, bro. Do we take a break, or should I continue? Please continue. Thank you very much. Good. What do we have hands-on today? Are we done with Docker today? Give me a second. I think we'll take five minutes and come back. Oh, thank you. Where is the recording? I want to pause; can you pause it for me? But before you go, I want us to talk about something. Give me a second.
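On the CMD-versus-ENTRYPOINT question left open above: the two can in fact be combined. ENTRYPOINT fixes the executable, and CMD supplies its default arguments, which the docker run command line can override. A minimal sketch (the image and its contents are hypothetical):

```dockerfile
FROM ubuntu
ENTRYPOINT ["echo"]            # the fixed executable
CMD ["hello from CMD"]         # default argument, overridable at run time
```

Built as, say, demo, `docker run demo` would print `hello from CMD`, while `docker run demo goodbye` replaces only the CMD part and prints `goodbye`.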
Oh, we'll have that conversation in five minutes, please; five minutes, not more than that. Do we take the five minutes now? Where is Pamela? Please pause the recording. So, we talked about containers, we talked about registries; now let's look at networking in Docker. We'll talk about networks and storage, and I think with that we're done with the core concepts of Docker. Let's talk about networking in Docker. Where is everybody? Am I looking at myself? I think the cameras are just not on. That makes me feel like some of us are multitasking. No, we're here; I just changed my computer. Eating dinner in class. Good. Um, could you share this document afterwards? Because trying to understand and take notes at the same time is a little hard. Do you all prefer that we provide you these documents? I don't have any issue giving them to you. Huh? It helps, it helps. What it does, Prof, is that since you're running through documents and explaining things in class with us, and it's based off of these documents, it's easier for us to map the learning to parts of the document that we were able to understand, if you will. So the document helps cement things; especially when we're revising later on, a soft copy or a hard copy of the document helps us remember. Yeah. I'm going to try to export it, because this is actually my OneNote. And the one from Monday too, Prof, please. So I'll try as much as possible to export it and make it available to you, and the one from Monday too. I actually tried exporting the one from Monday when you sent me a message, but when I export from my OneNote to PDF... Prof, let me give you my email address and you share your OneNote with me. And Franchesca, you serious? Prof, can't you just make a copy of that page and put it on a Word document or something? I tried it, but I guess the format changes. The format changes, yeah. What about
copying it and pasting it on OneNote before copying? It's already on OneNote. Copy it and put it on Notepad, then copy it from Notepad. What he's trying to say is I should copy it and put it on Notepad++. Please, let's continue; we'll discuss this later. We're already recording, so let's cut this. Good. So, yeah, what happened? Oh my, can you pause the recording? I can't, I can't. So, when you install Docker, Docker creates a couple of networks by default. There is what we call the bridge network, as you can see, the host network, and the none network. Now, we said that containers, when you run them, are running on a host, right? And if you have multiple containers, as we said, an application that has been broken down into different containers, those different containers need to talk to each other. It means that all these containers have to be in some sort of virtual network where they are able to reach each other, right? Are we together? Yes, we know. So, let's assume that this is our host machine, and we install Docker. By default, Docker creates what we call a network, and the default network is called the bridge network. Once you do a docker run, if you do not specify which network you want the container to be attached to, then by default Docker is going to attach your container to the bridge network. So I have my container one, C1; it gets attached to the bridge network. I've forgotten the CIDR range of the bridge network, but let's assume it's something like 10.0.0.0/16. We have our container two; it also gets attached to the bridge network. We have our container three and our container four; all these containers get attached to the bridge network, and it is because of this bridge network that the different containers are able to reach each other on the host. Are we together? Now, if I have different applications that are running on my
host, the same host, I can then separate the networks that the containers can reach. For example, say I have application one, and application one has five containers, C1 to C5; and I have another application, app two, and it has containers C6 to C10. Application one and application two are on the same host, because it's a big EC2 instance; it has a lot of gigabytes of memory and CPU, so I can run multiple applications on it. But I do not want the containers of application one to be able to talk to the containers of application two. So inside my host, I can create multiple networks. I'll create one network that will be the network for application one, and I'll create another network, and this will be the network for application two. Am I making sense? Yes, sir. So all the containers of app one, C1 to C5, will get attached to the first network; let's say it has 10.0.0.0/16, and the other has 172.0.0.0/16. All the containers of app two will then pick their IP addresses from that CIDR range, while the containers of app one pick their IP addresses from the first range. But because they are in completely different virtual networks, they cannot talk to each other. That is also how you can enhance security in your container environment. Sort of like separate VPCs? Sort of like separate VPCs, if you want to call it that; but remember, these networks are on the host system itself. Yeah. Would that still be called a bridge network? No, the name of the default network is bridge, but, and I don't want to go too deep into network drivers here, there's a network driver called bridge. So if you want to create another network... who asked the question? Abdala? Yeah. If you want to create another network, say a network called abdala, you need to pass a driver. A driver is what is actually responsible for creating the network.
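The CIDR reasoning above can be sanity-checked with Python's standard ipaddress module; the two ranges and the container addresses below are the hypothetical ones from the whiteboard example.

```python
import ipaddress

# Hypothetical ranges from the whiteboard example
app1_net = ipaddress.ip_network("10.0.0.0/16")   # network for app one (C1..C5)
app2_net = ipaddress.ip_network("172.0.0.0/16")  # network for app two (C6..C10)

# Each container draws its IP address from its own network's range
c1 = ipaddress.ip_address("10.0.0.2")    # a container in app one
c6 = ipaddress.ip_address("172.0.0.2")   # a container in app two

print(c1 in app1_net)                # True:  C1 belongs to app one's range
print(c6 in app1_net)                # False: C6 is outside it
print(app1_net.overlaps(app2_net))   # False: the two ranges are fully disjoint
```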
So the bridge network is the default network driver in Docker. If you want to install your own drivers, that's also fine; you can call your network whatever you want, and then you can attach containers to your abdala network. If you do not pass the --network flag when creating a container, then the container gets attached to the default bridge network. Okay? Makes sense? Yeah, yeah. Okay, just like having a default VPC and then custom ones. Yes, exactly, bro. Sorry to take you back a little bit. Okay, so again, we are going into containers, and I'm still visualizing Docker as the platform that actually does this, that creates the file, so I'm still struggling with the relationship. Now you have, like, containers one, two, three, four, around a network on the Docker host. So this diagram, can you just re-explain this diagram you just drew, not in terms of networking, but in terms of how Docker is helping us implement this configuration that you have on the screen? Am I making sense? I'm not even making sense to myself right now. You are. Let me just... so, you have containers; you understand that containers are formed from images, right? Mhm. And we said that these containers, if they are containers of the same application, should be able to talk to each other, right? Okay, yes. So say I have an application that has 10 different components in 10 different containers, and those 10 containers are running on an EC2 instance. That EC2 instance is a machine just like any other server, right? Correct. Now, containers are themselves, I would say, low-level machines or servers. We need to know that each container has its own IP address. Where do they get these IP addresses? They get them from a virtual network that is on the Linux system, on our EC2 server. Okay. So this network, which is created on the server itself... if you drill down into Linux, there's
something in Linux called namespaces and cgroups and things like that. I don't want to go into those details. But Docker uses all that kernel technology to create these virtual networks for containers to use, so that all the containers running on a host are able to talk to each other. Okay? For containers to be able to talk to each other, they need IP addresses, right? Right. Good. So they get their IP addresses from these virtual networks. If you do not specify the network to use, then by default Docker will put your container in the bridge network. If you want different networks, Docker provides a command to create a new network: just do docker network create, give your network a name, give it the CIDR range it needs, and Docker creates that network. Then, each time you want to create a container that should use your custom network, you specify the network the container should be attached to, and Docker is going to attach it to that network. So this gives some sort of network isolation on the same host, so that if you have containers of one application that have no business talking to containers of another application, they don't talk. If you leave them all attached to the same network, then at a higher level, hackers can get into one of your containers, break out of that container, and then see and affect other things. Okay, understood. So Prof, if you want them to talk to each other, you put them in the same network, right? In the same network, yes. Good, sir. Yep. Does that also answer the question of how you ensure security within your containers? Yes. Within the containers, or in your hosting platform, that's some form of security; it enhances security in the environment by ensuring that containers of one application that have no business talking to another application stay separated.
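The commands just described look roughly like the following. This is an illustrative session (it needs a running Docker daemon); the network name abdala and the subnet are taken from the class example, while the nginx image is an assumption.

```shell
$ docker network ls                        # bridge, host and none exist by default
$ docker network create --driver bridge --subnet 10.0.0.0/16 abdala
$ docker run -d --network abdala nginx     # attach a container to the custom network
$ docker network inspect abdala            # shows the subnet and attached containers
```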
If they're all running on the same host, they are in separate networks. Okay, are we together? Yeah. Okay, was that too much? No, it's good. Just one more question on the networking part. So the separation is done, and I tried to look for the networking term, even though they reside on the same physical host. The word is segmentation. Segmentation, okay. And is it like using a virtual LAN, you know, like in networking, something called a VLAN? VLAN, yeah, virtual local area network. Is it something similar, when you're separating? You can use that concept. Okay, all right, thanks. Are we together? Yes, sir. So this you should know; but once you're creating containers and running them in Kubernetes, this is not part of it. We'll talk about networking in Kubernetes separately. This is networking in Docker. Okay. So Prof, say you have your two separate networks for application one and application two; in the future you need a resource in application one to communicate with a resource in application two. Is there a way you can configure that, like we did with VPC peering? Um, have you ever heard of something called iptables? Yes. Yeah, so that's deep-dive Linux, manipulating the routing on the host machine itself. Every host machine, every instance, has what we call a routing table, and you manipulate the routing table using iptables. So if you want to do all that, you need to go down into iptables; that's deep-down Linux. That's not what we... complicated, yeah. Okay, I don't like to use the word complicated, because you can actually find yourself in an environment where you have to do it. I had to do it: before AWS gave us what we call AWS PrivateLink, before PrivateLink came into existence, I was on a project and I had to use iptables to configure all this
routing and stuff like that, blah blah blah. So it's doable; I did not know it, I had to read up on it, and it's all good. Um, containers by themselves are ephemeral. What do I mean? Once you have an image and you spin up a container, the container has the read-only file system, which comes from our immutable layers, and it has what we call the writable layer. So if it's an application that is generating logs, or an application that is generating data, that data is being written to some file system inside the container, right? But it is ephemeral; it is not persistent. All data generated by the container during its existence is lost when the container dies. So if you delete the container, or something happens to the host and that container is kicked off that host, whatever data was generated during its lifespan is lost. So we say that containers are ephemeral. Okay. Now, let's say we have a critical application, say a voting application. You guys just voted in America, right? Imagine that during the voting, in the background, all those things were running in containers, and people are voting, and something happens to a container and it dies, or it's kicked off and deleted from that host, and you're about to call the results and there are no votes. Why? Because all the data inside the container has been lost. That's a problem, right? Yeah. We need to find a way to make data generated by the container persistent, and you do that using what we call volumes. Using what, sorry? Volumes. Volumes, okay. So, using what we call volumes: container volumes, or Docker volumes. There are two main types of mounts that you can use to persist data: one is called volume mounts, and the other is what we call bind mounts. Excuse me, are we together? What was the second one? Volume mounts and bind mounts.
Okay, I'll find a way to give you the document, since you need it; I don't have a problem sharing that with you, and every other document I use that is going to help you, I'll provide to you. Good, thank you. Now, volume mounts are used when you want data generated by the container to be persisted, to be written onto the host. So let's look at this scenario: you have a container, and that container is receiving, let's say, our voting information; people are voting, and the container is keeping that data. If you want to persist that data, which the voting application is getting as input from the UI or from another application, then we need a way to ensure that everything written into the writable layer of the application is stored on the host file system. Am I making sense here? Yes, bro. Good. You would use volume mounts to do that, because anything written by the application into the container's file system is transferred onto the file system of the host machine. So you're linking the host and the container. Are we together? Yes, bro. Victor? Yes; sorry, my mouth is full. No, it's okay, I just wanted to be sure. Now, what if... yes... when are we going to need this, in terms of linking the host and the container? Ask your question again. When are we going to need it, to use it? I just said: when we want to persist data being generated by the container. Okay. Because, like I said, we have a voting application, and your users are voting for a whole day; but because containers do die, they're ephemeral, everything written to the container during its lifespan is in a temporary directory. It's a temporary file system; the writable layer is temporary. If something happens and that container is kicked off the host, or something
happens and the container no longer exists, any data it generated during its lifespan is lost. So if it was a voting application, all the data that was sent into the application is lost, because it was in the temporary file system. To persist data between container restarts, because things happen and containers get restarted, the container needs somewhere to fetch its earlier data from, right? So you bind, or link, the file system of the container and the file system of its host machine, so that anything written into the container is also transferred onto the host file system. Then if the container is restarted, what happens? It comes back, it mounts the file system of the host machine, and it is able to see the data on that file system. Are we together? Yes, bro. So it's almost like an EBS volume that's attached. You can look at it that way, but we are in the Docker sphere, so let's look at it from the Docker perspective. The EBS volume itself is what gives gigabytes to the file system, right? Remember: if you keep writing to the file system on your EC2 instance, at some point it tells you that the file system is full, and what is full is the EBS volume. So there are multiple layers to this thing. Are we together? Yes, bro. Sir, yes. So, volume mounts are good when you want to persist data written into the container onto the host file system. Bind mounts are the reverse: now I have data, and I want to make that data accessible to a container. Let's say I have data on my file system, and a container is an analysis tool, and I want that container to be able to analyze the data. How do I make that data accessible inside the container's file system? We use bind mounts. So you use bind mounts to make data that's already
existing, or data that you're generating on the host, available inside the container. The difference between bind mounts and volume mounts is this: with volume mounts, because it is the container that is generating the data and putting it on the host file system, you as a user on the host machine should not manipulate that data. If you manipulate that data, it becomes somewhat corrupted, and the container might not be able to use it as it should. But if you're using bind mounts, then the container knows that this is pre-existing data, and the file system can be updated both by the container and by other processes on the host system. Does that make sense? Yes. I need another yes, or ten yeses. Yes, yes, yes, yes, yes, yes, yes, yes, yes, yes. Go over the difference between the two again. Yeah, so as I said: you use volume mounts when you want to persist data from the container to the host system, and you use bind mounts when you want existing data, probably generated by a user or another process on the machine itself, to be available in the container. Now, the difference is that with volume mounts, the data should only be manipulated by the container's processes. Any other process on the host system should not manipulate that data; if that happens, because you probably have root processes and so on, then that data becomes corrupt. It becomes difficult for the container processes to understand it, and it can lead to errors. So normally, volume-mount data should be data that is only manipulated and updated by processes in the container. But you use bind mounts when you already have pre-existing data, for example generated by you: as a user of the host machine, you create a file system, you put some data there, and you want a container process to analyze it.
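In command form, the two mount types described above look roughly like this. An illustrative session (it needs a running Docker daemon), with the volume name, paths, and image names all hypothetical.

```shell
# Volume mount: Docker-managed; created automatically if it does not exist
$ docker volume create votes
$ docker run -d -v votes:/data voting-app         # container writes persist in "votes"

# Bind mount: host-managed; the host path must already exist
$ mkdir -p /home/ubuntu/input
$ docker run -d -v /home/ubuntu/input:/data analyzer-app
```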
Then you can use bind mounts, so that you as a user can add data there, or the container itself can generate, manipulate, and write into that file system. Does that make sense? Yes, Prof. Prof, so volume mounts: I want to talk about them in the context of the election, the vote. So you use a volume mount; let's say votes are coming in, and when the container goes down, or the host dies, every count on it also goes away, right? So you use the volume mount to retrieve it? You use the volume mount to persist the data on the host, so that... the first thing I want you to understand is: containers are ephemeral. What does that mean? They stop and start. Yeah. And what is the consequence of that? Because the file systems in containers are not permanent, whatever data is generated by the container processes gets lost across restarts. So we need to find a way to persist the data, and we persist the data using volumes. Now, there are different types of mounts, and in which scenarios do you use them? There are volume mounts and there are bind mounts. Okay, yes. Okay, I think I get the part about volume mounts. So for the bind mount: in what scenario do you use the bind mount? Just put it in the back of your mind that when you... yes, go ahead. Are you asking me a question, or are you about to recite? Tell me what you think. Yeah, I wanted to... just say it. Go ahead, tell me what you think. Don't be shy. I'm not shy, I'm trying to put things together. You use bind mounts when the origin of the data is not necessarily the container. Exactly, yeah. Okay, so this bind mount could be used to maybe forge the results of the votes of the election, by taking some data from somewhere and putting it in. So now I get it. Okay, that's not what Prof said! No, I know, but I'm just saying, on another note. Okay. Yes, we can use it to host websites and constantly update the website, because I guess the data will be coming from the host machine. Now, I don't know how your web
application itself is going to work, but you get the point of bind mounts. So Prof, if the container processes actually manipulate the bind-mounted data, at that point, does the container own it? No, it can't own it, because the data was there before. I'm trying to figure out the ownership of the data. Ownership is not the goal; the goal of volumes is to persist data. Okay. The good thing about bind mounts is that both processes in the container and processes outside the container can manipulate the data. Okay, thank you. So one is Docker-managed and the other is, like, host-managed? What I'm saying is: the volume mount is managed by Docker, because it is created by Docker? Yes, Docker creates them. And then the bind mounts would be managed by the host system? Yes. So for bind mounts: you use bind mounts when the file system that you want to bind into the container already exists. If that file system does not exist, the container is going to fail, because it's looking for already-existing data, an already-existing mount point. But if you're using volume mounts and that file system does not exist on the host, the container is going to create it. Explain that; say that again. I don't want to overload you guys with some of these things. No, I think we're just trying to fine-tune the concept; it's almost making sense. Just repeat that part. Volume mounts, yes, are managed by, as you said, Docker itself. The Docker APIs manage volume mounts, because you can use docker volume create, and Docker will create those things. If you're mounting a volume, a file system, into a container using volume mounts, and that file path or directory does not exist on the host, then Docker is going to create it. But with a bind mount, because bind mounts, as we said, are for pre-existing data that you want to load into a container, if that data does not already exist, then the container is going to tell you there's a problem: I can't
find the data that you're giving me. With volume mounts, the container can easily create that file system if it did not exist on the machine, and write into it. Okay, bind mounts, you mean? No, volume mounts, I mean volume mounts. Good. So another aspect of Docker (we talked about storage already, high level) is Docker Compose. Like I said, docker run and those commands can easily start one container; but if you want to manage an application with multiple containers, how do you do that? It's easy to use something like Docker Compose. Okay. And for orchestrating the containers themselves, Docker also has its own feature known as Docker Swarm, which you can use to orchestrate containers; but the orchestration tool we will focus on here is k8s. Are we together? Hello? Yes, sir. Yes, we're here. Good. That's it. Um, Docker is just like Kubernetes in this sense: there are a bunch, 500,000 Docker commands; I can't give you all of them. We're going to look at some of the basics here, and my advice to you... I'm going to show you something that I have: my own personal cheat sheet on every tool I'm using. I have a text file on my local PC and I use it; it's like my own, how do you call it, small diary of the whole thing. So my advice would be: for most of these things, try to develop yours. I took a look at most of the cheat sheets you see on LinkedIn, and in my opinion I don't really grab them; but once I'm preparing it myself, then it kind of makes sense. Did that make sense? Yeah. So, are you going to share your cheat sheet with us? No, I'm not going to share it; I'll show you. What? Why are you sounding like [Laughter] Franchesca, bro? You stopped sharing. Are we together? Yes, I stopped sharing; I'm done with the whiteboard. Oh, I can snip it. [Laughter] That's your expertise, snipping. It's not going to help you. I think you're right, you know; it's the notes that you create yourself that are actually meaningful in the long run.
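For the Docker Compose point mentioned above: a multi-container application is described in a single compose file. A minimal hypothetical sketch, with the service names, images, and ports all invented for illustration:

```yaml
# Hypothetical docker-compose.yml for a two-container application
services:
  web:
    build: .            # build the image from the Dockerfile in this directory
    ports:
      - "5000:5000"     # host:container port mapping
    volumes:
      - votes:/data     # named volume mount for persistence
  db:
    image: postgres     # off-the-shelf database image

volumes:
  votes:                # Docker-managed named volume
```

Running `docker compose up` would start both containers and place them on a shared network, so they can reach each other by service name.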
Let me show you a sample. You can see... wrong one. Let me show you what I mean. Do you see my screen? No. The screen doesn't want to share. You're moving too fast. Moving too fast, that's the goal, but I just want to show you something. So, can you see my screen? Tiny, like this. We can see it. It's for me, not for you. Yes. Okay, so what I'm trying to show you is: you see, I have some things for when you go to Kubernetes, when you go to Docker, when you go to different tools, and this is my cheat sheet. So you can prepare yours the same way. As you're studying, whatever you're doing, this is just a txt file, and I'm writing down stuff, and I'm also adding some information on what each thing means. This, in my opinion, helps me more than picking up a ready-made cheat sheet online. You can take one of those and go through it, but I think this is what helps me more than all those pre-prepared things. Okay, so in my opinion, it works for me this way; it might not work for you. If the other ones work for you, then fine. If you just search for a Kubernetes cheat sheet, you'll see a bunch of them online, and maybe that will help you. Okay, good. But what's coming from your own hand as a guide is different. Point taken. I don't have a problem sharing these notes with you, but I need to go through them, because I prepared them for my own understanding; I guess if I give this to you as-is, maybe you read it and get confused. No, but what Prof is saying makes sense. Good. So, for the demo, we want to practice building an image, pushing the image to Docker Hub, and inspecting some of the different functionalities, using the Docker CLI to inspect an image and things like that. In order to install Docker, we want an EC2 instance, an EC2 instance which is publicly available, with an SSH key, because we have to SSH into the instance; that is the prerequisite. Go ahead. So let's launch our instance: this is the name, and select Ubuntu.
For the instance type, t2.micro; select an already-existing key pair; and for the security group, all traffic, you can either create one or use an existing one that allows all traffic. For the storage configuration just select the defaults, and launch the instance. Once we launch our instance, let's connect to it. I'll put a couple of commands in the chat. Where is the chat window? I can't find my chat window; somebody send me a message. Thank you very much. I guess we can log in, the instance is available. Please SSH into the instance. What did you say, Prof? Once your instance is available, please SSH into the instance, okay, and let's install Docker. So we're installing Docker using a script prepared by Docker; I did not write it, it's in the Docker documentation, that's where I got all of this from. So you run them. Do we need Java? Nope. I don't know if the script installs Java, I don't think so, but there's a script, get-docker.sh, that should install every dependency needed. Em, you can still SSH from VS Code, right? You want the link where I got the commands from? Yes. No, go to Connect. Yeah, sorry, I wanted to use Instance Connect, but he said SSH; that's fine, whatever you want to do, please get a terminal and install. I think that'll be better, it's quicker. Okay, just hit Connect down there. Yeah, there we go. So the commands are in the chat; run the commands from there, starting with the first one. Which one? There's a curl command, and the curl command downloads the install script. Then there's a sudo sh that basically runs the script. Once that's done, we should just do docker version to see if Docker is actually there. Okay, Prof? Yes? What's the difference between this and installing docker.io? The difference between this command and installing docker.io? Where did you see docker.io? I saw it in a tutorial that I used. Permission denied connecting to the Docker daemon? Yeah, because this
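The install steps described above, taken from Docker's convenience-script documentation, look roughly like this when run on the Ubuntu instance (the script needs a network connection and root privileges, so this is a sketch of what was pasted in the chat, not a verbatim copy of it):

```shell
# Download Docker's convenience install script and run it,
# then confirm the client and daemon are actually installed.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
docker version
```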
Docker by default needs root privileges. Can you sudo? sudo -i or sudo su, whatever. Now run the same command again. All of them, or just the last? No, it's already installed; just do a docker version. Okay, can you type it? docker version. Yeah, I still have to get used to it. So now Docker is there, you can see our Docker is actually there. Just close this thing at the bottom of your screen. This thing? Yes. Good. So just type docker --help, so you can see the different things that you can do with the Docker CLI. You need to get used to finding commands this way, please, even with Kubernetes. Something else: they say docker.io is an unofficial package, meaning before you install Docker you should uninstall it if you have it. Let's go ahead, please. What do I type now? What you see here is the Docker client, right? So it's here that you pass instructions to the Docker daemon. In this setup of ours, our Docker client and our Docker daemon are on the same instance, but you can have scenarios, as I demonstrated, where the Docker daemon is somewhere else that you want to talk to; then you have to pass in the -H flag to actually be able to reach that daemon. Are we together? Yes sir. So let's create a directory; call the directory docker-demo. So mkdir docker-demo, everybody. Are we good? Yes, we are, thank you. Are you following along, please? I am, I am, I'm doing the hands-on. There's a reason why I did not really prepare much: I want you to practice this, and it's not much. Okay. So you can see here we have a directory called docker-demo; everybody, create a directory called docker-demo, mkdir docker-demo. Victor, are we good, or are you still eating? I'm actually doing it on my laptop. Please, some of you, once in a while turn on your cameras; you can keep it only on your face. The facial expressions do help me.
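On that client/daemon point: the -H flag tells the client which daemon to talk to. A hedged sketch, with made-up hostnames, assuming the remote daemon is reachable:

```shell
# Talk to the local daemon (the default -- no -H needed):
docker ps

# Talk to a daemon on another machine over SSH:
docker -H ssh://ubuntu@remote-host ps

# Or over TCP, if that daemon is configured to listen on a socket:
docker -H tcp://remote-host:2375 version
```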
When I see a facial expression, it tells me whether this thing I'm saying needs to be repeated or not; it helps me. Facial expressions do help me, please. Good. So now we have the client. Let's make a demo directory, we call it docker-demo, mkdir docker-demo, and you cd into that docker-demo directory. And I want us to create a Dockerfile, so let's create a file: vi Dockerfile. The D must be a capital D. Where's my... But the directory name doesn't have to be capitalized? Yeah, it doesn't matter, the directory can be any name. Now we want to actually create our sample application. Looks like I need another screen; I want to share a couple of files in the chat. I'm back to the same problem, what is wrong with my setup today? Okay, fixed. There are three files in the chat: app.py, Dockerfile, and requirements.txt. I want those three files inside the directory; let's have those three files inside the directory. Inside the Dockerfile, or inside...? So copy the content of the Dockerfile into the file you created. Say that again, please? Please copy the content of the Dockerfile into your Dockerfile that you created. We create another file called app.py, you copy its content into app.py, and requirements.txt, you copy the content into requirements.txt. Does that make sense? Yeah. One second, let me save the file first and open it up. I'll prepare, before Friday, maybe a short README on GitHub; maybe that will help. Hello, are we good? Yeah, I'm trying to open up the file. Actually I wasn't really referring just to you, I'm talking to everybody in the call. Oh okay, they asked us to create the first one, trying to get it done. Pamela, are you doing the same? Zan? Prof, are we supposed to have three files for the various...? Okay, three files: one file is called Dockerfile, one is requirements.txt, and one is actually the application itself. So we're just copying the content and pasting it into the new files? Yes, just copy the content and paste it into the new files, and only the Dockerfile has the D as a capital. Do we put the RUN command before exposing the... wait, requirements.txt has only one line, right, Prof? I'm asking if requirements... oh sorry, go ahead. I was asking if requirements.txt has just one line of code: flask. Prof, my question was: do we typically EXPOSE the port after the RUN command? I'm not sure I can answer that question confidently; you can check. I guess when we test it we'll find out. We are not testing this; I just want you to build this and push this. This is not an application that we're going to run and see something in the browser. The goal here is to build the image and push it. Once we get to Kubernetes, where we will actually be deploying, then we will build an image that is actually working. So let's just go ahead. Okay, so then the second one, the same thing, right? Copy the second file, this requirements file, under this line? What? No, this is a separate file; close this file. Three different files. Save it. Yeah, three different files. So you save this, and this is our Dockerfile. Just give me a second, okay, my connection... Did you use SSH or are you using Instance Connect? Instance Connect. [Music] Found it.
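The exact files were shared in the Zoom chat and aren't captured in the recording, but based on what's described (a one-line Flask requirement, a small Flask app, and a Dockerfile with a Python base image, a work directory, a pip install, and a CMD) they would look something like this sketch. The Python base-image version, route, and port are assumptions:

```
# requirements.txt -- the single dependency line mentioned in class
flask==2.0

# app.py -- a minimal Flask app (sketch; the real file came from the chat)
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Docker!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=3000)

# Dockerfile -- matches the walkthrough: base image, workdir, deps, code, CMD
FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 3000
CMD ["python", "app.py"]
```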
So basically, can somebody tell me what is happening in this Dockerfile? Pam, please, or Franchesca, are they with us? And Michael? Zan, can you tell me what is happening in our Dockerfile? What do you want me to say? I don't understand your question, Prof. Can you just tell me what you think, what you understand, is happening inside this Dockerfile? Okay, we have FROM, that's Python, that's like the version of Python we start from. Then we have the work directory, WORKDIR, which is the app directory. Then where it's going to copy the requirements from is where we have the COPY, which is requirements.txt. Then the next line is RUN, pip install, which installs what's inside requirements.txt. Then COPY into the app folder. Then CMD, the command line, python, that runs the Python code. Then EXPOSE, that's the port that it's going to expose. Exactly, thank you very much, really appreciate it. Yes, Pam? You called me? Yeah, I was asking you to tell us what you understand; Zan shared it. Great, so let's continue. Please, can you explain the RUN line? I didn't catch it when Mr. Zan was explaining it. So if you want to run a command inside the image build, RUN is the instruction you would use. Pamela, please, can you open Docker in a new browser tab? Just go to the Docker documentation and let's go to the manuals; let's all read it and you'll see what it means. Just type "docker documentation". Faster, a little bit, please. I shared the link already in the chat, you can just click on it. Good, is it this one? Click on Manuals. Yes, go to Manuals, then go to Reference. Reference, top one. Okay, yes, Reference. So there's a reference for Docker Compose, a reference for the CLI, for build checks, and so on. Go to the Dockerfile reference. Yes, exactly. So this is where you see all of it; then you can deep-dive into the different
commands. So can you look for RUN? Here's RUN. So what does RUN say? It executes a command during the build. Yes: once you're building the Dockerfile into an image, RUN will execute whatever command is placed after it. And the CMD is the runtime command: CMD is the command that will be executed once we instantiate the image, that is, once we start a container. Oh okay, good. Let's go ahead, please. Save this, and create the app.py and requirements.txt files. [Music] We should be done in 30 minutes, we are fine. What do I type? vi requirements.txt, space, app.py. Is it uppercase R or no? It's whatever is in the Dockerfile, because the Dockerfile is going to look for that exact filename. Okay, let me check the Dockerfile first. You don't have to. Okay, you can check it, I know. No, you're already root, we don't need sudo. Okay, it's capital D, that's why you can't find it. vi requirements.txt, space, app.py; I think you should be able to edit both of them at the same time. Should I do one at a time? No, you copied something else. Yeah. Who is that? Please mute. requirements.txt; let's be a bit fast: vi requirements.txt, space, app.py. Wait, wait, and we are able to edit both files at the same time? Yes, Prof. Good. Do they have the same content? No, you just copy and paste and you switch between them; finish requirements.txt and go to app.py, please. Okay, so what do I put in here? What goes in requirements.txt, can somebody put it in the chat? It's flask. You can open it, it's in the chat: flask==2.0. It's in the chat. I'm missing it in the chat; I can't see the upper part of my screen. You're not pasting anything. Okay, can you do bang q, what is it, colon q bang, :q!, and leave the file without putting anything in it? Yeah, do it again. If you know how to use echo, you could also echo that line into the file. Just check it, let's see what it is. Do we all have the files? Yes, Prof. Great, then the last one, app.py. Do you already have app.py? This is all Python, where it says from flask import... So this is what you would not concern yourself with as a DevOps engineer; this all comes from the developer. Can you paste that in? Yes, let's see. Okay, good. So basically this is a simple Flask application. Now do ls. So in here you have what we call the Dockerfile, you have the application file, and you have requirements.txt. The requirements here are the dependencies that this application needs, right? Yes. And app.py is our source code. Now I want to build our source code so I can have a deployable, and I need to pass all the instructions on how to build that code via my Dockerfile. So the Dockerfile, as Zan already explained, does everything: because it's a Python application we are using a base image with Python, we are creating a directory for the application, we're copying the requirements and installing those requirements, then we copy our application source code, then start the application. So CMD python app.py just basically starts the application, and we're exposing port 3000. Great. Can you clear your screen please? Are you doing ls? Good, so ls please, so we can see. Something is blocking the top part of my screen, but I'll manage. So, in order to build an image you need to use the command docker build, or docker image build. The old syntax, docker build, where you give it the context, still works, but Docker also changed the CLI syntax so that every object now has its own sub-command: so we have docker container start, docker container create, docker image tag, docker image build, and so on. docker build, the old way of doing it, still works. So if I want to build this image, let me put the command in the chat. Can I just type it? Yes: docker build, lower case. One word or separate words? No, there's a space between docker and build. I don't think the command has finished; the command is not done. So you have docker build, it's in the chat: docker build -t, and you can call it test-image. That's an error, please: not "test name", test-image. docker build -t test-image. Yes. So let's discuss this command, please. docker is lower case. docker build is the command that we are passing to the daemon, and we're giving it a tag with -t, and we are saying that the tag should be called test-image. And if you look at it, there's a dot at the end of it, you see that? Yeah. Good. What this means is what we call the build context. A build context in Docker is basically where you're telling the builder, the Docker daemon, where to find the necessary files. So if I am in the present working directory, and right here I'm sitting in this directory, and here I have my Dockerfile and all the requirements, then the dot means, what does a dot mean in Linux? Right: the present directory. Build it right here. So if all these files, the necessary files needed for building this image, were in a
different directory, then I would put the relative or the absolute path to where my Dockerfile and those files are, and this is known as the build context in Docker. Are we together? Yes, Prof. Are we really together? Yes. Good, so hit Enter. So there's something wrong with your Dockerfile, can you see that? Yes. It's something you did not paste well; can you vi into the Dockerfile? Yeah. Okay, it's the syntax: you have duplications in that file, can you see that? You have two FROM lines, so remove the duplicated lines. I see, I see. So just hit d twice. No, go back to the beginning of the line, stay in escape mode, and hit d twice, dd. Yes, hit it again, go up, remove the first one, from the beginning of the line. Yeah, now save the file and build again, the last command. So you can see that each line in your Dockerfile is being built, can you see that? Yes. So if you scroll up to the beginning of this build of yours, it's going to tell you one of five, two of five, and all that. Are we together? Which part? Where is the one of five? It shows how many of the build steps are complete. Where's my pen... good. Oh, I didn't know you could do it this way. Can you see this, is it...? Yeah, oh okay, yeah, I see it, yes. So as it's building, it's telling you: the first thing it built was the FROM, right, so that was one of five instructions; the second was the working directory, two of five; and these are all creating different layers inside your image. And once it's done, our Docker image is available. So in order for you to see the image which you just built, you do a docker image ls or docker images, based on the new or old command syntax. Is the command one word or two words, Prof? They have a space: docker image ls. So you can see that we have the image you just built, right? Can you see that? It's called test-image. Are we together? Is that what we called it? That's what she called it, test-image.
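A quick recap of the build-context point, assuming the shell is sitting inside the docker-demo directory (the alternate path in the last line is hypothetical):

```shell
# Build from the current directory -- the trailing dot is the build context:
docker build -t test-image .

# The same build with the newer object/sub-command syntax:
docker image build -t test-image .

# If the Dockerfile and sources lived somewhere else, the context would be
# that path instead of the dot:
docker image build -t test-image /home/ubuntu/docker-demo
```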
What did you call yours? Well, I just called it victor-image. Exactly. So this is her image, called test-image, and that's the image that is available there. So you can do docker image history: docker image history, then you give the name of the image. So what is it called? test-image. That's it, okay, Enter. So what does this tell us? You can see that there are layers of this image that were built by somebody, what, six weeks ago, right? We did not build those, did we? No. Good. So that's why we said Docker images are immutable. All these layers are coming from the initial FROM in our Dockerfile, which is taking an image, right? Are we together? Yes sir. So it's taking an image from somewhere, so those are all the layers of that previous image, which are immutable, we can't do anything to them. Then three minutes ago we added a working directory, we copied some requirements, we ran pip install, we copied the app directory, and we added the EXPOSE, and so on. Are we together? So this is how you get the history of an image. Okay. And can you put the command inside the chat, please? Docker image... okay, let me show you something now: just do docker image --help. I want you to be able to know how to find all these different commands once you're faced with them. docker image --help; when you want help in a CLI there's always -h or --help, -h being the short form. Can you see that? Yes. So you can see the docker image command. I want to show you how you find commands, especially with CLI client tools, AWS, whatever you're using. So what are the sub-commands? It can be build, it can be history, it can be import, inspect. What's the next one? load, ls, prune, pull, push, remove, save, tag, and so on. So the first one we did was history, I think. And what does it say here, read it, can somebody read it? "Show the history of an image." It shows the history of an image. So
the history of this image we just created is telling us that three minutes ago we did some stuff, but six weeks ago somebody did something, starting from the base image, that other image we did not have anything to do with. Where does that image come from? From the FROM part, the first line in your Dockerfile; that is the base or the parent image. So that first line said FROM python-something, right? Yeah, because this is a Python-based application, so we want to use an image that already has Python installed and so on; let's get it from where it already exists. Okay. So now we've built our own image; this is your own custom image. If you make this image publicly available, and two weeks from now I come and use this image of yours and put my own layers on top, and somebody does docker image history with the image that I created, they're going to see all this history: that six weeks ago somebody did something, three minutes ago you did something, then whatever I did on top. So that's how you get the history of an image. Are we together? Yes sir. Now let's all go to Docker Hub and create a repository. So just go to Docker Hub and sign up, everybody please, because I'll call names here and I want to see your image in your repository. Do you already have an account? It looks like it, Prof. I'm only following, I'm not with my computer, I'm not home. Okay. I want you to create a repository here, so everybody, create your own repository in Docker Hub. Okay, Prof, is it okay if you share the files on Slack? Because I'm trying to get the Dockerfile... Which files, please? The first three files you shared. Oh my God, we talked about that 30 minutes ago, you didn't tell us that you have a problem. No, no, what I'm saying is to send it on Slack so that I will have it, because I'm trying to pick it up from Zoom and it's not working for me, because I'm using my phone, but I'm going to use my laptop now. Okay, no problem,
thank you. Just, when you're facing an issue, don't hesitate to stop me, all right, so that we can solve your problem immediately rather than taking us 10 or 20 minutes back. Just FYI, where is the attach command on this Slack? I'm going to do that very soon, okay, upload from computer. [Music] You should see it. Do we all have our Docker Hub accounts? Yes, Prof. Good. So now that we have our Docker Hub accounts, we want to be able to push our Docker image to Docker Hub. Let's go back to your terminal. For you to be able to push your image to your Docker Hub, your image needs to be tagged in a way that it knows which account to go to, so you have to tag your Docker image using the username of your account. Now let's try something. Do docker images, docker image ls. You see an image called test-image, right? Yeah. Let's try to push this and see what happens: docker push test-image, and let's see what happens. Denied, because we haven't connected Docker Hub to this server yet, and if you look at it, it is not pushing it to your space; it's trying to push it to Docker's library. It's using docker.io, but remember I said library is a user space, and Docker's official space is called library, you see? Yeah. So what does it say? It says that the requested access to this resource is denied. You're denied because you and I cannot push into Docker's space; for you to be able to push, you need to push into your own space. So we have to tag this image in such a way that it knows which library to go to. So everybody, you need to tag your image: docker tag. docker tag dash-h... I intentionally did not put this command in the chat because I want you guys to know how to find it: docker tag, then help. -h? No, dash-dash help, --help. Yes. So you can see how you tag an image: you tag an image using the source image and tag, then the target image and tag. You're not smelling wings, you're not smelling... okay, so I need to use this command. So you need to use docker tag. What is our source image? Do a docker image ls again and let's look at it. Is it this one, test-image? So what's the tag of the test-image? Look at it, what's the tag? Yeah, look at it; that's why I want you to do docker image ls again, so that you have a clear screen. Okay, let me do docker image ls. You see? Yes. So if you look at it, where's my pen, there's a column here that says REPOSITORY, yes, right? And what is the repo? test-image. But there's no repository called test-image on Docker Hub, right? Yeah. There's a column that says TAG, and a column that says IMAGE ID, when it was created, what the size of the image is, and so on. Okay. For us to be able to push that image to your repository, it needs to be tagged in such a way that it knows the repo to go to. [Music] Are we together? Yeah. Are we together, everybody, please? Yes, Prof. I feel like asking... I got it, I got it. So you saw the command docker image tag: what does it need? It needs the source image and tag, and the target image and a tag; that's the command there. So let's do that: docker image tag, you pass in your source image with its colon tag, and the target image and its tag. Now the target image will carry your username on Docker Hub. I need to get this thing blocking my view out of the way. And then what's the target image? Did you say the target image begins with library? Remember the thing which we saw up here, what was it, where was it, you see this thing? Oh, okay, the library. Yes. So the library part here will be your username, this is the name of the image, and if we want a tag, say something like v1, then we can put :v1; that's the syntax. So what is the library part?
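So the full tagging step, with a hypothetical Docker Hub username standing in for yours, looks like this. The echo lines only print the commands to be run against the daemon, so the sketch itself works even on a machine without Docker:

```shell
# Hypothetical values -- substitute your own Docker Hub username.
DOCKERHUB_USER="emma"
SOURCE="test-image:latest"
TARGET="${DOCKERHUB_USER}/test-image:v1"

# The actual commands you would run against the Docker daemon:
echo "docker image tag ${SOURCE} ${TARGET}"
echo "docker push ${TARGET}"
```

The target is just `<username>/<image>:<tag>`; that username prefix is what routes the push to your repository instead of Docker's library space.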
The library part here would be your username. What's your username on Docker Hub, please? Am I making sense? Hello? Yes sir. Am I making sense? Yes, Prof. So let me send you guys a screenshot of the syntax. Where is it, can I put a picture in the chat, the Zoom chat, is that possible? Yes, Prof, use the file icon under the chat. Where is the chat? Can somebody say hello in that chat again, please? Thank you. So, Prof, the tag here for the target will be the same tag as the source, right? If you don't want it to have the same tag... if you leave it like that, then it's going to push it and it's going to have latest. But now, let's say this was version one of our application: what would you do? You put colon v1, right? Colon v1, yeah. So :v1, and Enter. There's a problem with your command: docker image tag, there's no -t in it. Good. So if you do docker image ls, you should see two images now. Victor, are you with us? He's gone, I don't think he's in the call. Sorry, I had to step away. No issue, no issue, but where did you stop, do you have your image? Anyway, just continue. So you can see that you have two images, right? Yes. So you have an image called m-y-whatever-that-is/test-image, tag v1, and the original image. Okay, now we want to push that image to Docker Hub, and in order for us to push it, you need to log in to Docker. Let's try pushing the image to Docker Hub without logging in and see what happens. Okay, so let's do docker push; you copy that image name and you paste it. The complete image with its tag, or do I just copy the new image name? Yes, copy the new image name without the tag, and let's see what happens. At times they say learning by error is the best way. So docker push, paste the image, Enter. You see? The tag does not exist, because you did not specify a tag and it's using the tag latest. Once you don't specify a tag, Docker always falls back to the default: do you see it's using the default tag latest? But there's no
image with this name that has a tag called latest, right? So you need to put the tag. So now let's update that: add the colon v1, Enter. Now: denied. You don't have access because Docker is not logged in, so for you to be able to push, you need to log in to Docker: docker login -u. Okay, yes, that's to log in. -u stands for user, then you pass in that username of yours, Enter. What's your password? You type your password. Are we all here? Abdullah? Yes sir. And Bo of Greatness, are they here? Yeah, Prof, we're here. Yeah, you've got to log in with yours. So once you log in, we can push. Yes, once you log in, then you'll be able to push. Still saying denied? No, it says Login Succeeded. Success, it's right there, yeah. Hold it right there, please: this means your login was successful, you've successfully logged into Docker. Now you push again, so docker image push or docker push, and now it's actually pushing that image to Docker Hub. So go to your Docker Hub; you should be able to see that image. Oh, look at it, right here. That's how you build an image and that's how you push the image to Docker Hub. So, Prof, once it's in Docker Hub it can be shared with other collaborators? Exactly, this is a public image, this is a public repository, right? Mhm. This name that she has here, m-something/test-image: somebody should do a docker pull on their machine, pull this name, and tell me if they receive it. docker pull, docker image pull, or you can just do docker run. If you do docker run, it's going to look in your local images, see that you don't have that image, and go look for it; and because that image is tagged with the library, the user space where it is located, Docker knows where to go look for it. Very delicious... Victor, mute, let me mute him. Are we together? Oh yes, Prof. Did somebody succeed in pulling her image? Yes. Good, so that's how you share images with people.
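On any other machine with Docker installed, pulling the pushed image would look roughly like this (again with a hypothetical username in place of the real one):

```shell
# Pull the image explicitly from Docker Hub:
docker pull emma/test-image:v1

# Or skip the pull -- docker run pulls automatically when the image
# isn't found locally:
docker run emma/test-image:v1
```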
Since that application of yours is in a public repository, everybody can access it. Prof, so this same image can be pushed to ECR? The same image can be pushed to ECR; you guys should look into pushing images into ECR. So by Friday, when I come on Friday, you will all show me your images in a repository in ECR. You mean Thursday? Can you go back... are we together? Yes, Prof. Pam, Michael, Leslie, Alen, Amand, are we together please? Prof, how can I... I created the account on Docker Hub; they say click on download; I logged in, I signed up, my first time. My question is this: do I download the Docker Hub app? You don't; you can do it browser-based, you don't need to download anything. Ah, I just sign up? Just sign up, yes, and you have an account there. I created it now. So this is a free-tier account with Docker; with free-tier accounts you can create unlimited public repositories, but you're limited to just one private one. Okay, so I don't have to download anything? You don't have to download anything. Okay, Prof, what was that pull command again? That's why we have been trying to look at this CLI: because I want you to be able to find these commands yourself. Go to your terminal and show me how you do it: docker image --help, it's going to show you all the commands. Clear your screen, please. Am I not sharing? Clear, clear, okay, clear. So docker image --help will show you everything. Please play with these commands and see what happens. Can you see that the commands are there? There's build, there's history, there's import, inspect, load, ls, push, pull, remove, save, tag. And in this list, how many commands have we used today? Four, five? What did we use? History, pull, push, and tag. We used build, we used history, we used ls, we've used pull, we've used push, and we've used tag; they're all coming from here. So these are sub-commands. So how do you drill down further? The command was docker image build; you can now do docker image build --help.
If you don't know what to do after build, do docker image build --help again and see what happens. It keeps giving you help based on the command you're at; that's how you find these things, please. It helps, and not just for Docker. I want you all to understand how you get these things. Can you see? So now this is giving you information based only on the pull command. So what options do you get with the pull command? This is it: with the pull command you can pull all tags, you can do whatever you're doing, and so on. Learn how to get this information. I will send you a small text with 5 to 10 commands, and if you understand those 5 to 10 commands, that's fine for you, but I encourage you to know how to do this, all of you. Yes. So when you did the docker image pull, sorry, when that was to pull her image that she uploaded: was everybody able to see that image? It's public; everybody that did it should be able to see it, because it is in a public repository. So pull will download an image from a registry; that's what's written on it. Yes, your question? So when I do docker image pull, do I have to reference that particular image? You have to reference that particular image. Remember that that image is tagged in such a way that Docker knows where to go look for it. If you just say docker image pull test-image, it's going to try to look for test-image in Docker's official user space, which is the library space, and that image might not exist there. So because you want to talk to Emma's image, it needs to be properly referenced. I sent a screenshot in the chat; can you open it, please, in your Zoom chat? Yeah, okay. Can you see that? Yes sir. So what happens is: this is an image reference. This part is the registry; the registry is Docker Hub, and we don't need to pass it there, because Docker already knows that we are talking to Docker Hub. If you were going to try to push this image to ECR,
you will have to tack this image in a different way because it's going to a different registry are we together yes sir so in that case this tag register you need to tag it if you do not tag it by default Docker is going to know that we're talking about docker.io now this library is docker's official user account so if you do not pass a library to the account to the image Docker is going to try to look for it in this user space which is library and this is the image name and because we don't pass a a a a version to the image what would DOA think latest it to give latest exactly it's going to be looking for the latest image okay yes bro good the ECR is what elastic container registry elastic container registry for AWS oh okay any questions now let's clear screen please I do do image LS Docker image LS Docker LS what any of them would work so we see two images there so do do do Docker image help again and let's look at the command there good so there's a command that says Docker image inspect can you do Docker image inspect then the name of any of those images Docker image inspect can you see what what does inspect do inspect will display detailed information detailed information on one or more images so what pick any image and let's inspect it so you have an image called test image let's let's try with test image good so that's the information about our this specific image are we together a lot of stuff so that's a lot of stuff you can see there exactly that's what I'm talking about so you can get information about that image I want I'm looking for the network since we talked about Network today I'm looking for Network pardon oh yeah um network network there should be a section for nwork Expos Port um yes this is all information from the docker file right because it's telling us information about I'm trying to look for the network to to to see the bridge Network what network Docker used to attach this image to can you scroll down there should be a section on network 
or something like that. "Prof, the bridge network is actually a real network?" Yes. Okay, scroll up. Did anybody see a network section? Do a grep, or find, or whatever. Scroll up, scroll up, there are a lot of lines. The network should be a heading, like you have Parent, you have Id. "Okay, I'm grepping, but I'm not seeing any network." No network; RootFS, file system, metadata... Oh, that's an image, sorry. That's not a container, that's an image, that's why. Thank you. You only see a network once you have a container running.

So let's try to run that container. Just do docker run, all lower case, space, then the name of the image, test-image. This might fail, because maybe the image is not properly configured. Yes, it's not properly configured, so it will probably not run. So let me think of an image which will run easily. Do docker run -d nginx. The -d means run it in the background, detached. N-G-I-N-X, yes. You can see that because we don't have nginx on your local machine, it's pulling it from somewhere; it has downloaded it and it should be running now.

So do docker ps. docker ps basically lists the containers on my machine. Now you can see that you have nginx running, do you see that? Yes, Prof. Good. You can also see the nginx port, the port that was exposed in the container, the status, up 12 seconds ago, and so on.

"Like I said, where does the name come from? Because I have a different name." If you don't give the container a name, Docker picks one and gives it to you. If you want to give the container a name yourself, you pass the --name flag, and then it will use your name.

So let's inspect this container. You can either use the name or the container ID. Do a docker container inspect, or docker inspect, whichever, and pass in the container ID or the container name. "Yes, I see the bridge network." Anything after that? That's okay, Enter. So like we said, when you create a container, if you do not specify a network, Docker puts it in the bridge network. So you can see the network information: this container is in the bridge network, and there is the MAC address of the container, the network ID, the endpoint, the IP address of the container, the gateway, and all that.

"How many IPs does it allocate to this particular server?" What do you mean by server? We are talking about a container itself. One. A container cannot have two IPs. But that bridge network has a gateway, because remember, on every network, if hosts want to talk they need a gateway, right? Yeah. So that's the gateway of the bridge network.

"I just wanted to know: you ran the image and it created a container. There's a gap there in my mind. You ran the image and created a container for nginx, how does that work?" What did we say a container is? An instance of an image, an instantiated image. If you want to actually run whatever application is inside the image, then you do a docker run: it creates a container from that image and starts it. When we get to Kubernetes it's going to be kubectl run: you give it the image name and it's going to do the same thing, start a container from that image. "Okay, all right, thanks."

So the run command is multifaceted. What do I mean? Remember we have docker push and docker pull. There was no nginx image on her machine. So Docker looked on the local system and saw that no, nginx is not here, I need to go find nginx. And Docker knows that it is
docker.io/library/nginx:latest. So it went up there, pulled the image, and then started the container. The run command does a lot of things. You could also do it step by step: do a docker pull to pull the image, then docker create to create the container, and so on. But run bundles all of those steps together; run does everything.

"So the library has a bunch of images?" Yes. Let's go to Docker Hub. Just go to Docker Hub and you can look for something like Jenkins, that's something we've already used, right? Search Docker Hub, type in Jenkins there. Interesting, so that's jenkins/jenkins. What does jenkins/jenkins mean? Jenkins has its own user space called jenkins, and the name of the image is jenkins. Click it. Yes, just like yours: your image was under your own user space, and the name of your image was test-image. So Jenkins also has their own user space called jenkins, and the name of that image is jenkins. Once you're seeing these images it should make sense.

"The question is, do companies typically run containers for something like Jenkins, or would they create a server?" That's what we do. We don't use servers where we install things; everything is running in Kubernetes, so they run the container from the image. Now, it depends. You can pull this image to your local environment. If this is the latest image, and something happens on the local system and Docker or Kubernetes doesn't find that image, it's going to go back to Docker Hub and pull the latest image. If you do not want that, then either you use a tag, if the image gives you a specific tag, or you pull this image down, put it in your private registry, and then you point your Kubernetes YAML files at the image in your private registry. That way you have a fixed version, and you only make a change when you want to make the change. Am I making sense? Yes, Prof.

So you can pull it; it even gives you the command there, docker pull jenkins/jenkins. Can you see that? You can look for MySQL, Postgres, Tomcat, all those things, they have images here. So you don't have to go through the hard work of installing all of that if you're using a microservices architecture. Even Ansible, can you look for Ansible? There's an image for Ansible. For most of them you'll see there's an image; everything has an image somewhere. Once you understand how to use microservices, there will be no reason to spin up servers and start installing services on them; everything will be built on the microservices architecture, because we know we have an orchestration platform that can increase the number of Ansible pods when we need them, and reduce them when we don't, and so on. Are we together? Yeah. That's a very powerful, very powerful tool.

What did we use, Tomcat? Can you look for Tomcat, Prof? "So for the Jenkins project that we did, we could have just used containers?" For the Jenkins project we did, we could have used containers, exactly, but I did not want to start going down that road yet, because then I would have to start teaching you how to build the cluster and how to pull those things, and we hadn't done that, right? You would have been like a deer in headlights; in fact, some people here would not still be here. So, Tomcat.

One of the most reliable open-source publishers actually building images that people trust is what they call Bitnami. As you get into microservices, you'll see that most people are using it; Bitnami charts and Bitnami images are reliable. Good. Please play with this: play with docker pull, play with the inspect commands. I've shown you how to
find those commands. Play with them, play with them, play with them, and create your own cheat sheets.

"Is Bitnami a person, or is it a company?" It's a company, and they don't have only Tomcat; they have images and charts everywhere. If you go look for something like MySQL, or MSSQL, or Postgres, yes, you see Bitnami also has one. Different companies do too, so you see MySQL also has theirs there. A lot is going on here. Just hit Enter... wait, wait, before you go there. You can see on Docker Hub what they call Verified Publishers. Docker themselves have verified that, no, these Bitnami guys are doing it right; they have verified them. So you can see the trust level of your images here before you use them. Some images you'll see marked as Official, like an official image.

Let's look for what we just used, nginx. Look for the nginx image. "Too early. I think you spelled it wrong." Oh okay, I spelled it wrong: N-G-I-N-X. Go back, go back, scroll up, scroll up. No, this is what I'm looking for: you can see what we call Trusted Content. Come to the Trusted Content filters: you can see images from Docker Official Images, you can see them from Verified Publishers, and so on. So tick it. "It's almost like Terraform." Exactly, almost like the Terraform registry and such. Those official images are from Docker themselves; they are responsible for the project and for making sure it's available. You also have nginx images from verified publishers and so on, so you can use those.

All right, so Docker Hub. I don't know when I last checked, but there are millions of images here; everybody is pushing. So if you build your application and you think it's very nice, package it and put it up here, and people will start using it, just like somebody pulled Emma's image: they pull it and start using it, and life continues.

"Do people sell images? If this thing is already public, why would I pay you?" Exactly. Most of them are open source. But for some of the images, even though they are there, there are some features that you have to pay a license fee to use. To pull a public image, you don't need any payment; once it is made public, it's public. But I might build an application, make the image public, and make a community version and an enterprise version, and you only get access to some features when you buy a license: you pay me, I send you a small token, and once you put in that token, some new features become available to you.

Good, thank you guys very much. Sorry for the overlap, but I hope the class made sense. Please tell me if it did; if it did not, I'll find a way to make it better. There's no class tomorrow. Thank you. "Yes, we agree that tomorrow maybe we can stop the"
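The image-reference resolution rules discussed in the session (no registry means docker.io, no user space means Docker's official library space, no tag means latest) can be sketched as a small shell function. This is an illustrative approximation of Docker's behavior, not Docker's actual code; the function name and example references are mine.

```shell
#!/bin/sh
# Sketch: expand a short image reference into its canonical form,
#   [registry/][namespace/]name[:tag]
# Defaults: registry docker.io, namespace "library" (Docker's official
# user space), tag "latest".
normalize_image_ref() {
  ref="$1"

  # The tag is a ':' suffix on the *last* path component, so a port in a
  # registry host (e.g. localhost:5000) is not mistaken for a tag.
  last="${ref##*/}"
  case "$last" in
    *:*) tag="${last##*:}"; ref="${ref%:*}" ;;
    *)   tag="latest" ;;
  esac

  first="${ref%%/*}"
  if [ "$first" = "$ref" ]; then
    # Bare name like "nginx": official library space on Docker Hub.
    ref="docker.io/library/$ref"
  else
    case "$first" in
      *.*|*:*|localhost) ;;          # first component looks like a registry host
      *) ref="docker.io/$ref" ;;     # "user/name": a Docker Hub user space
    esac
  fi

  printf '%s:%s\n' "$ref" "$tag"
}

normalize_image_ref nginx                # docker.io/library/nginx:latest
normalize_image_ref jenkins/jenkins:lts  # docker.io/jenkins/jenkins:lts
```

This is why docker image pull test-image fails when the image only exists under a personal user space: the bare name resolves to docker.io/library/test-image:latest, which is Docker's official space, not Emma's.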
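To make the "run is multifaceted" point concrete: a rough sketch of the lower-level commands that docker run bundles, as discussed in the session. The helper only prints the equivalent sequence (a dry run, so it does not need a Docker daemon); the function name is mine, and the placeholder container ID is illustrative.

```shell
#!/bin/sh
# "docker run IMAGE" roughly bundles pull + create + start.
# This helper PRINTS the equivalent explicit sequence for a given image;
# it does not talk to the Docker daemon itself.
explain_docker_run() {
  image="$1"
  cat <<EOF
docker image pull $image        # fetch from the registry if not cached locally
docker container create $image  # instantiate a (stopped) container from the image
docker container start <id>     # start the container that create returned
EOF
}

explain_docker_run nginx
```

This mirrors what happened in the demo: no nginx image was cached locally, so docker run -d nginx first pulled docker.io/library/nginx:latest, then created and started the container in one step.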