Transcript for:
Setting Up and Deploying a Django Application

In this video I'm going to show you how to set up a Django app and deploy it to a server running on AWS using Docker Compose. This is going to be quite a lightweight deployment: all we're doing is setting up a Docker Compose file that will allow us to run the code in a production-type environment. The benefit is that it's quite quick and easy to get up and running, and it has a fairly low overhead, because you don't need to manage lots of different servers or lots of big infrastructure to run your service. If you have a small project and you just want to get it online so that users can view the application and play around with it, then this is the deployment method for you.

I should mention that there are some drawbacks to this deployment method. It's easier to deploy, but it is more difficult to scale. If you're going to be scaling your app to many thousands of requests per minute or so, then you may want to try a different deployment approach, such as App Engine or ECS Fargate, so you can scale up your Docker containers as you need. This, as I say, is just going to be a lightweight deployment that you might use if you have your own project that you want to get up and running, or you want to be able to demo your project to other users on the public internet.

In order to get started, you're going to need Docker installed, a code editor, a GitHub account, and also access to AWS. We're going to be using a server that is available in the AWS free tier, but it's important that you take responsibility for any payments that are accrued on the account. Technically, if you follow the tutorial exactly, you shouldn't be charged for any resources; however, there's no guarantee, so it's always important to check which resources you're creating in AWS and see if there is any cost associated with them outside of the free tier. Before you get started, you're also going to want to make sure you're
familiar with SSH authentication. We're going to be using SSH to connect to GitHub and also to connect to the server that we're going to be using, and then we're going to be using it to pull the code from GitHub to the server. We're not going to cover SSH authentication in detail; I'm assuming that most people probably already know how to use it. If not, don't worry: there's a link in the resources of this video to a GitHub tutorial that explains how it works, so you can set up your GitHub account with SSH.

So let's get started. We're going to start by creating a new project on GitHub. Open up your GitHub page, make sure you're logged in, click on Repositories, and create a new repository specifically for the app that we're going to be deploying. I'm just going to call the repo something like django-docker-compose-deployment. You can of course call it whatever you want, and maybe you already have an existing project in mind, but we are going to be creating a Django project from scratch so that we can demonstrate things like handling media files in the deployment. You don't need to enter a description unless you want to. You can make the repository either public or private; I'm going to be showing you how you can deploy a private project using a deploy key. In most cases you're going to want the code to be private to your organization, but if you're creating an open source project or something like that, then you may want to make it public. I'm going to leave mine public just so it's easier to share with you when I'm done. Check the box to add a .gitignore and choose the Python template, since this is a Python project we're creating. Then check the box to add a README and click Create repository. Once
the repository is created, we can go ahead and clone it by copying the SSH URL. Head over to the terminal (if you're on Windows, you might use PowerShell or Git Bash), change to the location you want to clone the project to (I'm going to put mine in Documents/workspace), and run git clone followed by the pasted URL. This will clone the project to our local machine. Now we can switch into the project directory, and I'm going to type code . to open it up in Visual Studio Code. I like to use Visual Studio Code, but of course if you have a different code editor that you prefer, you're welcome to use whichever editor you like.

Now that we have the project open in Visual Studio Code, the next step is to set up Docker in our project so we can create a Django project that we can use to test with. I'm going to create a new file in the root of the project called requirements.txt. This is going to list the Python requirements that we need to install into the Docker image when we build it in order to run our project. The main requirement we need is Django, so I'm going to type Django>=3.2.3,<3.3. If you head over to PyPI, the Python package repository, you can find the current version of Django that's available. All this syntax does is make sure that we get the latest patch version, so if any security patches are released they should automatically be installed, but we won't install a version that is equal to or higher than 3.3, because that could contain breaking changes. That way we can gracefully upgrade our project and increase the version as and when we wish. As I was saying, if you go to PyPI and type in django, you can see the Django project there, and 3.2.3 is currently the latest version. Now head back to the code and make sure the requirements.txt file is saved.
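As dictated above, requirements.txt is a single line. The exact bounds are the ones pinned in the video (Django 3.2.3 was current at recording time), so adjust them if you're following along with a newer release:

```
Django>=3.2.3,<3.3
```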
Next we're going to create a Dockerfile in the root of the project, with a capital D. I'm going to start by basing it on the Python 3.9 Alpine image. If you go over to Docker Hub, which is hub.docker.com, and search for python, you can see that there are various different tags available, and we're going to be using the Alpine tags. Alpine is a very lightweight base image that's recommended for Docker. There are various version combinations, and I'm going to pin it to the most specific one that is not a patch version of Python, which is 3.9-alpine3.13. So back in the code, we type FROM python:3.9-alpine3.13. I like to pin the versions because that gives us the most stable and reproducible experience: if you use the same version as this, it will ensure that the steps in this tutorial work as similarly as possible even if new versions are released. Sometimes when a new version of Python or Alpine comes out, there may be some changes you need to make to the Dockerfile, so I recommend using the specific version I'm showing on screen, python:3.9-alpine3.13.

Now I'm going to add a LABEL to set the maintainer, which is best practice. I'm just going to set it to our website; you can of course set it to whatever you want. Then I'm going to type ENV PYTHONUNBUFFERED 1. All this does is tell Python, when running our application, to print any output directly to the console rather than buffering it, which can create issues with logging under Docker. So it's recommended, when you're using Python in a Docker container, to always set PYTHONUNBUFFERED to 1.

Now we're going to copy some files: COPY requirements.txt /requirements.txt, and then COPY ./app /app. The first line copies the requirements file that we created into our Docker image at /requirements.txt, and the second copies the app directory, which we're going to create right now. Create a new folder inside the project called app; this is the directory that's going to contain our Django source code. When we build the image, we want it to copy the current version of the code in the app directory into the image, so that it can be used when we run our application in the deployed environment.

Next we type WORKDIR /app, which tells Docker that the working directory of new containers made from this image should be the /app directory. Effectively it's like doing a cd /app every single time, except you don't need to do that, because it's automatically working from that directory when the container starts. This means we can run the Django management commands directly in the container without having to specify the full path. Then we type EXPOSE 8000, which will be the port we use to connect to when we run our Django development server; I'm going to show you how we do that in a minute.

Now we're going to add a RUN instruction. I'm just going to tell you what to type here, and then I'll explain each line after, because that might be the easiest way to understand this block. Type RUN python -m venv /py, then two ampersands and a backslash, then on the next line (indented to the level where you typed python) /py/bin/pip install --upgrade pip, two ampersands and a backslash again, then /py/bin/pip install -r /requirements.txt, two more ampersands and a backslash, and then adduser --disabled-password --no-create-home app. The RUN block
runs commands when we're building our image. It's a list of different commands, and you could specify them each on their own individual RUN line: run this, then run this, then run this. The reason we put it all in one is that Docker creates a new image layer for every single RUN instruction, so if you want to reduce the number of image layers and keep the Docker image as lightweight as possible, you can put all of the commands in one RUN instruction and separate them with the double ampersand and backslash. What this does is say "run this command, and this command, and this command", but it does it all in one layer. When you change something in this block, the layer gets rebuilt, but if you don't change anything, the layer stays the same, and you don't create multiple different layers, one for each line. It just makes the image a bit more lightweight.

As for what each command does: the first one, python -m venv /py, creates a virtual environment inside our Docker image for storing our Python dependencies. Now, some people think you don't need to do this, and in some cases I don't do it either, but I find it useful because it helps separate the Python dependencies from any dependencies that might be on the Alpine image. Technically there shouldn't be any dependencies on the base image, because it's a lightweight image that doesn't come with any Python dependencies pre-installed, but just in case there are, and to avoid any conflicts, it's useful to put them all inside their own virtual environment, which we're creating here at the /py location. Then we call the pip upgrade command, and we call it by specifying the full path to the pip executable inside the /py environment we created. We do that because we haven't added the environment to the system path yet: if we just ran pip install --upgrade pip, that would upgrade the version of pip outside of the virtual environment, and we want to ensure the virtual environment has the latest version, so we specify the full path. Then we run the install command in the same way, specifying the full path inside the virtual environment, to install the requirements file that we copied earlier. Finally, adduser adds a user with a disabled password, which means there's no password login for that user, and no home directory, because this user doesn't need one. The username we give it is just app, because this is the user that's going to be running our app in the container. If you don't add this line, your app will run as the root user, and that's not recommended: if somebody compromises your application, they'll have full access to everything inside that container. If you run it as an unprivileged user such as app, then if an attacker does compromise the application, at least they'll only have access to whatever that app user has access to. It's just a security precaution that's recommended whenever you're creating an application that's actually going to be deployed.

Now, below the RUN block, we're going to type ENV PATH="/py/bin:$PATH". What this does is add our virtual environment at /py to the system path, which means that whenever we run a command that uses Python, it will automatically use the Python inside our virtual environment. This is what we want when we're running our container: when we run our app and manage dependencies, we want to be doing it from our virtual environment, not from the version of Python installed inside the Alpine image.
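Putting the steps above together, here is the complete Dockerfile as built so far, including the USER line that's explained next. The maintainer value is a placeholder, so substitute your own:

```dockerfile
FROM python:3.9-alpine3.13
LABEL maintainer="example.com"

# Print Python output straight to the console instead of buffering it,
# which plays nicer with Docker's logging.
ENV PYTHONUNBUFFERED 1

COPY requirements.txt /requirements.txt
COPY ./app /app
WORKDIR /app
EXPOSE 8000

# A single RUN keeps everything in one image layer: create a virtual
# environment, upgrade pip, install the requirements, and add an
# unprivileged user to run the app.
RUN python -m venv /py && \
    /py/bin/pip install --upgrade pip && \
    /py/bin/pip install -r /requirements.txt && \
    adduser --disabled-password --no-create-home app

# Put the virtualenv first on the PATH, then stop running as root.
ENV PATH="/py/bin:$PATH"
USER app
```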
In other words, if we run any more commands that use Python, we don't need to specify the full path, because /py/bin has already been added to the system path. Next we're going to type USER app. All this does is switch from the root user, the default user we've been operating as, to the app user that we created in the RUN block. Anything that runs after this line will be run as the app user instead of the root user, and again, this is just to make sure that any application we run inside this Docker container is running as app, not as root with full privileges over the container. So save the file, and let's move on to the next step.

Next we're going to create a Docker Compose file for running our development server. This is just going to be for development of the application, and it's good to have on hand, because presumably, whatever application you build, you're going to want to develop it on your local machine, and to do that it's useful to have a development server that's separate from the production setup we're going to be creating later. Go ahead and create a new file in the root of the project and call it docker-compose.yml. Then type version: followed by "3.9" in quotes, and then services:. (I'm just going to change the indentation to two spaces here; if you haven't done that already, I recommend it, just to make the file a bit easier to read.) Underneath services, add a new indented line app:, then build:, then context: . (I'm going to explain what each of these lines means after I've typed it all out.) Then, level with the build line, type ports: with the entry - 8000:8000, and then volumes: with the entry - ./app:/app. Save the file, and now I'll explain what each part does. The top line sets the version of the Docker
Compose syntax that we want to use. It's best to use the latest version that is currently available in the Docker Compose documentation, and this just ensures that if Docker Compose is updated and uses new syntax versions, it knows which version you intended to use with this file. Next we have the services block: the services that are going to make up our development environment. We've only defined one service so far, and that is the app service. Its first two lines are build and the build context, and all this does is tell Docker Compose to build from our current directory. The context with the single dot says "just work from the current directory that we're running Docker Compose from", which means it will by default pick up the Dockerfile that we created in the root directory. Later on I'm going to show you how we use a different context for something else, so it should make more sense when I show you that. Then we have the ports mapping, which says: map port 8000 on the container to port 8000 on the host. Our development machine is the host, and the container runs the application, so we map the port to be able to access the application from our local machine when it's running inside Docker. Next we have volumes, which specifies a volume mapping from our local app directory to the /app directory on the Docker container. When we're running our application in our development server, we want it to automatically receive any updates we make to the code, directly in the container. So if we make a change to our source code, we want it to be reflected in the container immediately, so that the server auto-reloads and we don't have to manually restart it every time we test a change.
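At this point, before the environment variables and database service that come later, the development docker-compose.yml looks like this:

```yaml
version: "3.9"

services:
  app:
    build:
      # Build from the current directory, so the root Dockerfile is used.
      context: .
    ports:
      # host:container, so the dev server is reachable on localhost:8000.
      - 8000:8000
    volumes:
      # Mirror local source into the container so code changes are
      # picked up without rebuilding the image.
      - ./app:/app
```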
This volume mapping is only needed for the development server, because that's when you'll be making changes to the code and want those changes immediately reflected in the container running your application. For the deployment configuration that we're going to be creating later, we won't need this line: there we build our image in one go, and we're only going to rebuild it when we do a deployment, so we don't want the code to update automatically. I'll show you what that means later on in the tutorial.

Next we're going to create a Docker ignore file, so create a new file in the root of the project called .dockerignore. I've provided a link in the resources of this video (in the description) to a sample .dockerignore file, so I'm going to copy its contents, paste them in, and save the file. All this does is exclude certain directories from the Docker build context. Whenever you build your Docker image, it gathers everything that is in the current context and makes it available as part of the build process. However, some of the files, such as the .git directory with all its hidden files, the .gitignore, any other hidden files, and the __pycache__ directories under app, don't need to be inside the image, and adding them to the build context just slows down the process. So we create the .dockerignore file and list all of those files out. It makes our images a little bit faster to build, and it also prevents us from accidentally copying things into the image that shouldn't be there.

Now we're ready to go ahead and actually build our image. Open up a terminal (on Windows it will be either Git Bash or PowerShell) and type docker-
compose build, and then hit enter. This should go ahead and build our Docker image, so we'll see whether we defined everything correctly in our Dockerfile: if there are any typos or errors, they should appear now, because the build would fail. It seems to be working so far, so we'll just wait for it to finish and then continue.

Okay, that appeared to build successfully, so now we can go ahead and use this Docker image we created to create a new Django project. We can do that by typing docker-compose run --rm app sh -c "django-admin startproject app ." (note the single dot before the closing quote). What this command does is create a new container out of the Docker image that we built, run the app service (the service we defined in Docker Compose), and then run the shell command django-admin startproject app . inside it. What that will do is start a new Django template project, call it app, and place it in the current directory. Because we set the working directory to /app, that will be the current directory, and because the app directory is mapped as a volume in Docker Compose, the files should appear in our project. Let's go ahead and test that by hitting enter on the command, and in a second or two you should see that it creates our Django project. I'll just wait for that to finish. Okay, it finished successfully, and now inside our app directory you can see that we have a sample Django project, with its settings and so on.
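For reference, here are the two commands run in this section, both from the root of the project:

```
# Build the image for the app service defined in docker-compose.yml.
docker-compose build

# Start a throwaway container (--rm) from the app service and generate
# a Django project called "app" in the working directory; the volume
# mapping mirrors the generated files back into ./app on the host.
docker-compose run --rm app sh -c "django-admin startproject app ."
```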
So this is just a template Django project that we can use to test our deployment. The next thing we need to do is update our settings.py file so that it pulls certain configuration values from environment variables. One of the best ways to configure an application that's running anywhere, whether on your local machine or the deployed server, is to use environment variables. This lets you customize certain configuration values outside of the source code, so when you run your application on the server, you can specify custom values that are only stored on the server and not committed to your Git project. This is useful because you don't want any passwords or secret keys added to your Git project where everyone can see them; you want them in a restricted file on the server, so that they're secure and not shared with everyone who has access to your code.

Let's go ahead and do this by opening up the settings.py file inside the app directory. The first thing we need to do is import the os module at the top, so type import os; this is the module that allows us to retrieve environment variables. Then we're going to scroll down and change the SECRET_KEY line. You can just delete the secret key that was auto-generated by Django, and we'll type os.environ.get('SECRET_KEY'). What this does is retrieve the SECRET_KEY environment variable and set it as the secret key inside our Django project.

Next we're going to set the DEBUG option. Debug is an option that we should have enabled when we're debugging locally, but as you can see from the comment that was automatically added to the template, you don't want to run with it in production; you want debug turned off there. That's because debug mode gives you more information about the background running of your application, things like secrets and
details of the code: it shows you the code running behind the scenes, and so on, and you don't want that accessible to somebody who is accessing your app on a public server, because it's a security risk. So we disable debug mode in production, but we want it enabled when we run our application locally. The way we can do that is to replace the True here with bool(int(os.environ.get('DEBUG', 0))). The comma and zero set the default value if the DEBUG environment variable isn't set. Working from the inside out: it starts by retrieving an environment variable called DEBUG, and environment variables always arrive as string values if they're set, so even if you put a one or a zero, it's going to be a string. We first convert it to an integer, so it becomes a one or zero, and then we convert that to a boolean using the bool function, and we end up with a value we can assign to the DEBUG option. If we specify a zero it will be False, if we specify a one it will be True, and since we default to zero, we don't need to specify anything and debug will automatically be turned off. I like to do it this way because it means you don't have to remember to disable debug mode in production; you only have to remember to enable it on your local development machine, which is just a bit safer, because you're less likely to accidentally leave it on in production.

Next we're going to update the ALLOWED_HOSTS option. ALLOWED_HOSTS is a security feature of Django that limits access to the application to certain host names; it prevents a certain type of attack that is explained in the Django documentation. I can't remember the actual name of the attack off the top of my head, but if you click on the documentation link in the description, it should explain everything about what it is.
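The settings.py changes so far look like this; it's a sketch of just the relevant lines, not the whole generated file:

```python
import os

# The secret key comes from the environment so it never lives in Git;
# on a server it would be set in a restricted configuration file.
SECRET_KEY = os.environ.get('SECRET_KEY')

# Environment variables always arrive as strings, so "1" or "0" is
# converted to an int and then to a bool. Defaulting to 0 keeps DEBUG
# off unless it is explicitly enabled for local development.
DEBUG = bool(int(os.environ.get('DEBUG', 0)))
```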
Basically, you need to specify a list of host names that are allowed to access the application. If you have debug mode turned on, then you don't need to specify the list of host names; it's only required when you have debug mode turned off. However, we need to be able to specify the host names when we configure our application, and it's best to do this in a configuration file, such as an environment variable configuration file, instead of directly in the code, because the hostname might change on each of the different servers that you're deploying the application to. So we're going to extend the current ALLOWED_HOSTS value, which is just an empty list; it's a list of items, so you can have multiple allowed hosts for any given configuration. However, as I mentioned, environment variables don't support different types, and everything comes in as a string, so what we're going to do is accept a comma-separated list of different host names, split that up, and assign the values to ALLOWED_HOSTS. Let's do that now by typing ALLOWED_HOSTS.extend( and calling the built-in filter function: filter(None, then os.environ.get('ALLOWED_HOSTS', '') with a blank string as the default, and then .split(','). (Make sure it's ALLOWED_HOSTS, plural, unlike my first attempt.) This retrieves the ALLOWED_HOSTS environment variable, which should contain a comma-separated list of all the hosts allowed to connect to the application, and then calls .split(',') to split that string on the commas into a list of individual host names.
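Here is the resulting line in settings.py, shown with the filter(None, ...) call and empty-string default that are explained next:

```python
import os

# ALLOWED_HOSTS arrives as one comma-separated string, for example:
#   ALLOWED_HOSTS=example.com,www.example.com
# Splitting the default empty string yields [''], and filter(None, ...)
# drops those empty entries.
ALLOWED_HOSTS = []
ALLOWED_HOSTS.extend(
    filter(None, os.environ.get('ALLOWED_HOSTS', '').split(','))
)
```

(In the generated settings.py, the ALLOWED_HOSTS = [] line already exists; it's included here just so the snippet is self-contained.)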
Each host name becomes an individual item in the returned list. We then filter the list to remove any empty values, which is what the filter function here does: when you split a string like this, there can be an empty value at the beginning or the end, so you want to make sure you clear those out. We also have a default of an empty string, and that is just so that when we're running with debug mode enabled, we don't need to specify the allowed hosts; otherwise it would give us an error, because .get would return None by default, and you can't split None, only a string. That's why we specify the empty string here.

Now that's done, make sure you save your settings.py file, and then we're going to open up our docker-compose file again, because we're going to add the environment variables there. Environment variables can be added in a number of ways; one of them is to define them inside the Docker Compose file, so that they're passed to the Docker container when it starts, and we can configure our application in a single location. You can add them by adding a new line below volumes called environment:, and under it we're going to add - SECRET_KEY=devsecretkey. It's not that important to specify a real secret key here, because this is just for the development server: it's not going to be accessible to the outside world, it's purely for our local development purposes. Below that we're going to do - DEBUG=1, which, as I explained earlier, sets debug mode to enabled, and because this is our development server, that's what we want. Now save the docker-compose file.

The next step is going to be to add a database that we're going to use for our application. We're going to add it as a new service inside docker-compose, and we need to add a line that is level with
the app service, because it's going to be another service. Type db:, then image: postgres:13-alpine, then environment: with the entries - POSTGRES_DB=devdb, - POSTGRES_USER=devuser, and - POSTGRES_PASSWORD=changeme. What this does is define another service called db that uses the image postgres:13-alpine. This image will be pulled automatically from the public Docker Hub repository, and it allows us to simply run a version of Postgres; we're running version 13 here. Just like our app service, you can configure the database using environment variables, and this is how the documentation for this postgres image on Docker Hub suggests you configure it. So we're creating a database with the database name devdb, the username devuser, and the password changeme. You can put a more secure password if you want, but it's not really necessary, because as I mentioned before, this is just for our local development machine; we're going to be changing all of this when we actually deploy to a real server.

Now what we need to do is add the connection configuration to our app's environment. We've configured our database here, but we also need to tell our Django app how to connect to it, because they're running as separate services, and by default it's not going to know where to connect in order to access the database. We're going to do that using environment variables again, so add a new line below the DEBUG one: - DB_HOST=db (this is the name of the service that we're going to connect to), then - DB_NAME=devdb, then - DB_USER=devuser, and - DB_PASS=changeme. It's important that these values match the values specified in the environment variables for the db service: devdb should match the database name, devuser the user, and
change me should be the password if these don't match then you're going to run into errors because your django app is not going to be using the correct credentials to connect to the database also keep in mind that you don't want to add any spaces before the environment variable here because that will actually add them to the real value that gets passed in you want to make sure you add the equal sign and then put the value straight after the equal sign don't have any spaces or quotes or anything around here because that's just going to mess up the configuration now below the environment block we're going to add another line as depends underscore on colon and then dash db and what this will do is it will set up a dependency from our app container on the db container and this basically says two things one is that the db container should start before the app container and the second is that there should be a network connection set up between the app and the db container so if you need to connect to the db container the service that is running for that container then you can just use the name of the service as the host name and it will know how to connect automatically so this is quite useful because it allows us to easily set up network connections in between different services okay now save the docker compose configuration file and now we need to go ahead and actually add the postgres driver to our django application we can do that by opening up the docker file here so we'll start with the dockerfile and we need to add a couple more dependencies when we set up our container so we need to install some packages that are needed for our postgres driver to add these dependencies we need to make some changes to our docker file so what we'll do is we're going to add a line here to the run block so at the end of line 13 the pip upgrade line we're going to add a new line below that i'm going to type apk add hyphen hyphen update hyphen hyphen no dash cache and then postgres sql 
hyphen client and then we're gonna add double and sign backslash and we'll add apk add dash dash update dash dash no hyphen cache and then dash dash virtual and then dot tmp hyphen d e p s short for temporary dependencies then and and backslash and then i'm going to just indent here because this is kind of an extension to this line actually we don't need the and and here make sure you remove that so just the tmp hyphen deps and then backslash with no and because i'm going to break this onto two different lines here and it's just easier to read that way so we don't want to have a really really long line with all the dependencies i'm going to put the dependencies in an indented block underneath here so i'm going to type build hyphen base then postgres sql hyphen dev and then musl hyphen dev and now we can have the double and sign backslash and then below this line here the requirements install line we're going to leave that where it is and below that we're going to add apk del dot tmp hyphen deps and and and then a backslash now that was quite confusing you might want to pause the video and just make sure that you've typed everything out correctly i'll also put a link in the description of the video to the actual source code for this if you wanted to go and just copy that from the source code although i do recommend typing it out because it helps to learn it better now i'm going to talk through what changes we made so we basically needed to add some dependencies and there's two different types of dependencies that we add one is the dependencies that are needed after the postgres driver is installed and that is the postgres sql client so this installs the client and everything that the postgres sql driver needs in order to connect to the postgres server so we install that here and we're going to leave that installed in the docker image then we have these temp dependencies so these are only needed in order to install the driver so
once we've installed the driver we can then remove these dependencies to keep the image lightweight now this is optional you could just install them and leave them on there however this is not recommended because the best practice when working with docker is to keep the images as lightweight as possible because this means it's a lot easier to clone them move them to different machines and to run them and it means that they're a lot more lightweight when they're running on your server which is good because there's less overhead there's less memory being used and things like that so what we do here is we set up a virtual set of dependencies called tmp deps so apk if i didn't mention it is the alpine package manager and is what you use to install packages on the alpine docker images so the update says to update the package repo for these specified dependencies so that it makes sure it pulls in the latest version of them and no cache means don't save any cache because this is all about making it as lightweight as possible we don't want it to store any cache on the image that is then saved after we finish building the image so here we specify the temporary dependencies which will get installed as part of line 15 and 16 and then we install the requirements.txt file so this is when our driver is actually going to get installed because we're going to define it in a minute in the requirements.txt file after it's installed we then run apk delete or del and we delete those temporary dependencies so this is how it cleans up the temporary dependencies so they're no longer needed on the system or they're no longer stored on the system when we actually deploy our application okay now that we've done that let's save the file open up requirements.txt and we're going to add the driver here for connecting to postgres so we're going to type psycopg2 greater than or equals 2.8.6 comma less than equals 2.9 so save that file and what this is is the recommended driver
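Assembled, the RUN block changes dictated over the last few paragraphs come out roughly like this — a sketch only: the two pip lines are shown just for context (they already exist in the Dockerfile per the video) and the requirements path is an assumption:

```dockerfile
RUN pip install --upgrade pip && \
    apk add --update --no-cache postgresql-client && \
    apk add --update --no-cache --virtual .tmp-deps \
        build-base postgresql-dev musl-dev && \
    pip install -r /requirements.txt && \
    apk del .tmp-deps
```

The `--virtual .tmp-deps` group exists only so the final `apk del .tmp-deps` can strip the build tools back out after the driver has been compiled, keeping the image small.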
to use with django when you're working with a postgres database so it's what django will use to connect to the database now we need to go ahead and modify our settings.py file one more time and we're going to modify it to support our database we want to configure our database to be postgres instead of the default here which is sqlite so we can do that by just replacing the engine here so we're going to change the engine from django.db.backends.sqlite3 to django.db.backends.postgresql then we're going to remove the name here and we're going to add host in all caps and we're going to get the host from os dot environ dot get and we're going to get it from db underscore host and then comma name and then colon os dot environ dot get db underscore name and then comma user is going to be os dot environ dot get db underscore user and now at the end here we're going to have password colon os dot environ dot get and then db underscore pass so just like we did previously when we retrieved the environment variables by their names we're doing the same here for the different values that are needed for django to connect to the server so if we save the file and i'm just going to open this side by side here so you can see in docker compose that i have it on the right each of these different values in the environment variables matches up here so we have host name user and pass and here we have host name user and pass so it's just going to pull the values in from the environment variables and set it up on our database so that django can connect to the database server make sure all of the files are saved and now we can move on to the next step which is to create a model that we can use to test with in django before we can create a model we need to create a new app in our django project to add the model to so if you open up the terminal or the git bash or the powershell window whatever you want to use and you type docker compose run hyphen hyphen rm app sh hyphen c python manage dot py start app core so this will run a
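The resulting DATABASES setting reads roughly as follows — a sketch assembled from the dictation, showing only the keys the video mentions:

```python
import os

# Credentials come from the environment so docker compose can inject
# the values defined alongside the db service.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': os.environ.get('DB_HOST'),
        'NAME': os.environ.get('DB_NAME'),
        'USER': os.environ.get('DB_USER'),
        'PASSWORD': os.environ.get('DB_PASS'),
    }
}
```
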
command inside our container for the app service and it will run python manage dot py start app core so we use the django cli to create a new sample app if you hit enter you should see that it goes ahead and it's going to pull down the postgres server first because we've added that to our docker compose file but once it's done that it's going to create the new core app and we're just going to use the core app to create a simple model that we can use to test our deployment we're not going to go too in-depth about actually creating a django project all we're going to do is create a simple model that has a file field so that we can upload a file using the django admin in order to test our project so we got an error here and that error is because i forgot to do an important step which is to rebuild the container after we updated the docker file so docker does not automatically rebuild the docker image every time you change the docker file you need to do it manually and you can do it manually by typing docker hyphen compose build hit enter and this should go ahead and rebuild our docker image using the latest changes that we added to our docker file and we'll also see if we made any mistakes or errors in the dockerfile when we created it so i should have done this right after i changed the docker file apologies for that if you did it already then you don't need to wait for this because it'll already be done so when you ran the command to create a new app it should have worked automatically so we'll just wait for this to finish and then we'll continue okay so the docker image was rebuilt successfully as it was building it reminded me of something i need to explain inside the docker file that you might not be familiar with so if you open up the docker file here these dependencies i didn't just create them out of nowhere these are the dependencies that you need in order to install the psycopg2 postgres driver so i found them through trial and error and
through looking on lots of stack overflow pages unfortunately there's not a lot of great documentation when it comes to installing this driver on the alpine image but i found out that these are the dependencies you need so in order to install that requirements.txt file for psycopg2 you need to have the build base postgresql dev and then the musl dev packages installed and then once it's installed you don't need them anymore so we can remove them through the temporary dependencies okay let's go back to creating our image so i'm going to go back to the terminal now that the images are built i'm just going to use the up key to run the previous commands i'm going to run the docker compose run rm app sh hyphen c python manage.py start app core which is going to create a new app in our project called core if you hit enter hopefully this time it should work successfully okay so seeing that it worked successfully let's go ahead and open up our file explorer so the core app is where we're going to create our model before we can do that we need to enable the app in our django project by opening up app forward slash app forward slash settings.py and just adding it to the list of installed apps here so if you find around line 39 you should see a list of installed apps below we're just going to add a new line and add the core app this tells django that we want to install this app in our django project so that it can actually be used and this is important for it to pick up the models and the migrations and stuff that we're going to be creating in a moment make sure you save the file now open up the core app and we're going to create some new lines inside models.py so models are database models each model reflects a different table in our postgres database and we're going to create a sample model that we can use to test our deployed application so if you delete this comment that says create your models here and we're
going to do class sample with a capital s and then models dot model colon and we're just going to have a single field called attachment equals model dot or models dot file field open and close the brackets here okay save the file and you might be wondering why this choice of file field because when you are working with deployed django apps the most common issue that people have the most common challenge that people face is with handling user uploaded media files because the django configuration can be a bit tedious when it comes to handling these files so the files i'm talking about are files that are uploaded by users as the app is running so once we start our app we're going to upload an attachment here to the sample model in order to test the behavior of managing these media files that get uploaded and again it's because it's fairly easy to deploy a django app but to get this bit right it's a bit harder and it's often what trips a lot of people up so that's why i'm specifically creating a model that has an attachment so we can test this in the django admin but because we want this to be focused on the deployment i'm not going to be creating any django pages or anything like that we can maybe have a separate tutorial for that if you want to like and subscribe and leave a comment below we can get that to you if we create that but for now we're just going to be focusing on deployment and handling the media uploaded files so in order to be able to manage this through the django admin we need to add a new line to admin.py we need to register the model so we're going to add from core.models import sample which imports the sample model we just created and then we're going to do admin dot site dot register and we're going to register the sample model this just makes it accessible in the django admin so we can browse it and we can actually upload something to it in order to test now save the file and the next thing we're going to do is create our migrations so we open up 
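For reference, the model and admin registration just described would look roughly like this — a sketch of what the video dictates (the two files are marked in comments; this fragment depends on Django, so it is illustrative rather than standalone):

```python
# core/models.py
from django.db import models


class Sample(models.Model):
    # a single file field so media uploads can be exercised via the admin
    attachment = models.FileField()


# core/admin.py
from django.contrib import admin
from core.models import Sample

admin.site.register(Sample)
```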
the terminal and we type docker hyphen compose run dash dash rm app sh hyphen c python manage.py make migrations and what this will do is it will create a migrations file for this new model so we're not going to go too in detail about migrations because we want to keep this focus on deployment but it's basically just instructions that django creates to make changes to the database when you deploy your application so it keeps track of all the fields you've added deleted and all the tables that you've added and deleted and it helps django to automatically do that for you when you deploy your application now that we've done that we should have our migration file created in migrations here so we don't need to do anything with that just yet the next thing we need to do is add something called a wait for db command now there's one issue that often happens when people are using django with a postgres database when you're running it using docker and that problem is that sometimes when you first start your application the application starts before the postgres server is available so the postgres container may have started but it might be initializing some things and setting up the database behind the scenes so it's not quite ready for django to connect but then django tries to connect and then it crashes and it creates a lot of confusing issues that people aren't sure how to fix so the way that you get around this is you create something called a wait for db command and we add this to our django commands so that we can use it to wait for the database to be available before django actually tries to connect to the database and do anything with it so we're going to do that by creating a new file inside core and there's a few files we need to add so the first one is we need to add a file called management and inside management we need to add a underscore underscore init underscore underscore dot py and this is so that python detects this as an actual python module then inside 
management we're going to add commands and inside commands we're also going to add our underscore underscore init underscore underscore dot py and then we're going to add a new file inside commands called wait underscore for underscore db dot py make sure that you have the file structure correct here so it should be core and inside core you have management and inside management you have the init.py and commands subfolder and inside commands you have init.py and then wait for db so here we need to add some custom logic this is some logic that i created in our advanced course on building a rest api and using django and docker with that so if you want to take that course then please check out the link in the description the video teaches you how to build a rest api from start to finish using django and it sets it up for deployment using docker so we're going to add this wait for db command here i'm going to add a comment to the top that just says django command to wait for the database to be available then we're going to do import time and then from psycopg2 import operational error as psycopg2 op error then from django.db.utils import operational error then from django.core.management.base import base command then we'll do class command and we'll base it from base command add a doc string here that says django command to wait for database and then def handle self comma then the asterisk sign args comma double asterisks and then options and then colon at the end here and this is the entry point for the command then self dot std out dot write and we're going to write the message waiting for database dot dot and db underscore up equals false and then while db underscore up is false try colon scroll up here a bit self dot check database equals and then the default database in case we have multiple databases i'm going to explain each line of this after we type it out so don't worry if any of this doesn't make sense i'm going to explain in a minute and we'll do db
underscore up equals true and then level with the try block here we're going to do except psycopg2 op error comma operational error and then self dot std out dot write database unavailable waiting one second and then time dot sleep one now level with the while block here i'm just going to add a final line that says self dot std out dot write self dot style dot success in all caps database ready or database available whatever you want to type okay so save the file and i'll talk you through what this does so basically we're importing some things at the top we're going to need time we're going to need these exceptions that i'm going to explain in a minute and then this exception here which is one from django that i'll also explain in a minute and then the base command so base command is the base class for creating custom django management commands which is what we're doing here we're creating a custom django management command so the convention for creating a command is you define it inside this file structure so this is all listed on the django documentation that i'll link to in the description you basically structure it like this this is the name of the command that we're adding it's in a subfolder called commands which is in the subfolder called management and this will automatically be registered as a command because we've structured it like this and we have a class inside it called command that is based on the base command then we have this def handle method which is the method that django will call when we call the command so when we use the framework for calling the django management command it's going to check to see if there's a handle method and then it's going to pass the command details to that method in order to execute the code so this is kind of the entry point to start the code for our command we're starting by just doing std out which just writes a simple message to the screen that says we're waiting for the database then we have a
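Reconstructed from the dictation, the finished command looks roughly like this (shown with the databases keyword already plural — the fix the video makes a little later; it depends on Django and psycopg2, so treat it as a sketch of the file rather than standalone code):

```python
# core/management/commands/wait_for_db.py
"""Django command to wait for the database to be available."""
import time

from psycopg2 import OperationalError as Psycopg2OpError
from django.db.utils import OperationalError
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    """Django command to wait for database."""

    def handle(self, *args, **options):
        """Entry point for command."""
        self.stdout.write('Waiting for database...')
        db_up = False
        while db_up is False:
            try:
                # raises if the connection isn't fully ready yet
                self.check(databases=['default'])
                db_up = True
            except (Psycopg2OpError, OperationalError):
                self.stdout.write('Database unavailable, waiting 1 second...')
                time.sleep(1)

        self.stdout.write(self.style.SUCCESS('Database available!'))
```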
boolean here that is db up is false so we're going to assume when we first run the command before we've checked we're going to assume the database is not available then we have a while loop so while db up is false so while there's no database we're going to try here and we're going to try and do self dot check and then the database equals default so the self dot check is a method that is available in the base command class that checks to see if the django app is ready so we can use that to check if the database is ready and what i found out through lots of trial and error and stack overflow searching and things like that is that if you call this method here self dot check before the database is fully ready so maybe it can connect to the database but the database isn't fully initialized then it will throw an error which will be either the psycopg2 operational error so the operational error from the driver that we're using or it will throw the error from django.db.utils so this can be a bit confusing but it throws a different error depending on what stage it's at in the database starting up so at a certain stage it might throw the psycopg2 error at another stage it might throw the django.db.utils error so to catch both of these errors we're going to add them both to the except block here so if these errors are caught then it will just write to the screen that the database is not available and it will just wait for one second and then it will retry and then eventually when the database is available db up will be set to true and this while block will stop executing and then this last line here will run and then we can move on to the next command once this is done we are ready to update our docker compose file to actually handle our migrations and to run this command before we start the app so i'm just going to close down some of these tabs here just to keep it nice and clean now i'm going to open up the docker compose file so docker compose.yaml and we're going to add a new line here to the
services app block so the line we're going to add is called command i'd like to add it kind of close to the top here so underneath build i'm going to do command colon and then this greater than symbol and then below that sh hyphen c open quotes python manage dot p y wait underscore for underscore db and then double and and then start a new line here we're gonna do python manage dot p y migrate double and sign again and then python manage dot p y run server 0.0.0.0 colon 8000 so what this does is it overrides the command that we're using to start the docker container so when we run docker compose up to start our docker services it's going to run this command for our app service and the first thing the command runs is wait for db which will run the code we just added which basically says wait for the database to be available then it runs migrate which applies any migrations so any new migrations we have will be applied to the database and then we run the run server command on port 8000 which creates the development server and allows us to connect to it so now we can save this file go back to our terminal or the git bash or the command prompt powershell window and we're going to do docker compose up hit enter and if everything has worked correctly it should start our server and you should see it start here okay so you can see that we got an error here and it says check got an unexpected keyword argument database that is because if we go back to our wait for db command i made an error here apologies for that it should be databases plural so we do databases and then you save the file now if you go back to the command prompt or the terminal window do control c to stop the server now we're going to run it again hopefully this time it should work successfully okay you can see that it waited for the database so it said waiting for database database ready and then it performed the migrations and now it started the development server now because we have
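The command override dictated above assembles to roughly this (the surrounding app-service keys are assumed from earlier; `>` is YAML's folded-scalar syntax, which joins the lines into one shell string):

```yaml
services:
  app:
    # build, ports, volumes and environment unchanged
    command: >
      sh -c "python manage.py wait_for_db &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
```

Because of the `&&` chaining, migrate only runs once wait_for_db has exited successfully, and runserver only starts once the migrations have applied.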
mapped port 8000 we can go ahead and connect to that development server by opening up our browser creating a new tab just heading over to 127.0.0.1 colon 8000 hit enter and you should see this wonderful landing page for the django application so this is just the template landing page for when you haven't created anything in your app yet so we can see that the development server is now working the next step is we're going to configure our application to handle the static and media files that i was talking about earlier so if you remember i was talking about how this always trips developers up when they are first deploying a django application the reason is it's a little bit complex because of the way that django works so django when you deploy it to production it's recommended that you use something called a wsgi service which is a web server gateway interface what that does is it takes requests from the internet like http requests and it parses them and runs them as python code so it's really good at running python code and it does it very effectively however what it doesn't do so effectively is serving static files so static files are things like images javascript css anything that is a static file that isn't run in the python code now it can technically serve these files however it's not recommended that you do that when you deploy an application because it's a very inefficient way of serving these files what's recommended is you use something called a reverse proxy so i'm going to show you a diagram here of how that works basically you put a proxy container in front of the application and this is called a reverse proxy because it accepts requests and then it forwards them to the correct location this proxy typically can run something like apache or nginx i like to use nginx because i like the documentation and i find it works really well with the wsgi server that i use called uwsgi so here we have the internet and we get requests sent to our nginx
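As a preview, the routing rule being described (static prefix served from disk, everything else handed to the uWSGI server) could be sketched as an nginx config like this; every name, port and path here is an assumption, since the proxy itself isn't built until later in the video:

```nginx
server {
    listen 8000;

    # /static/... (which covers /static/media/... too) is served from disk
    location /static {
        alias /vol/static;
    }

    # everything else is forwarded to the uWSGI server running django
    location / {
        uwsgi_pass app:9000;
        include /etc/nginx/uwsgi_params;
    }
}
```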
server and what that will do is it will check the url of the request and if the url starts with static so it's a static file it will serve it directly from the file system which will be a shared file system with the app container and if it doesn't start with static it's going to send the request to the wsgi server that is running our django application this means that all of these static files such as the images javascript any binary file anything like that will be served directly from nginx which it does extremely well and very quickly but any other requests will be sent to the django application and be run as python code so this is the most efficient way of handling a django deployment and it's the one that's explained in the official django documentation so this is what we're going to be setting up and what we need to do is configure our django project to store the static and media files in the correct place and then configure docker to map these volumes and then create an nginx proxy so that's what we're going to be doing next so let's go ahead and configure our application to handle the static and media files head over to the source code and we're going to start by opening up the docker file again so we need to add some changes to the docker file here and the changes that we're going to add is we are going to add a double and backslash here and add some new lines to this run block so add the double and sign or the double ampersand and then backslash and then we're going to type mkdir hyphen p forward slash vol forward slash web forward slash static and then the and and backslash again mkdir hyphen p forward slash vol forward slash web forward slash media and then and and backslash then c h o w n then dash capital r app colon app forward slash vol and then and and backslash chmod then the uppercase r again 755 forward slash vol so what this does is it creates a new directory so on line 20 we're using mkdir to create a new directory at forward slash vol forward
slash web forward slash static this is going to contain our static files so static files are things that we create in our source code project that need to be used for the django application so things like css and javascript would typically be in static files then we create another directory called media so in forward slash vol forward slash web forward slash media we're creating a media directory dash p here just says create any sub directories that need to be created in order to create that full path the media directory is going to be used for any media files so this is any file that is uploaded by a user as the application is running so when the application runs they might upload something like an attachment like we're going to demo here or they might upload a profile picture or something and this basically is a media file so it's something that is uploaded during the runtime of the application so static files are created before we deploy our application in the source code and the media files are added as we run the application next we have the chown command so c h o w n pronounce it however you want basically this changes the ownership of the files so when we create them by default these will be owned by the root user however we need them to be owned by the application user so it has permissions to add and change the contents of these directories so we do that by assigning app and the app group to forward slash vol and the r basically says recursive so any sub directory there assign it to the app user then we set the permissions here and it should be the default permissions but just to make sure we ensure that the owner has access to read write and change anything in those directories next we need to make a change to the docker compose file just make sure you save the file head back to docker compose and this is kind of an optional change but just add the volumes line here do hyphen dot forward slash data forward slash web colon forward slash vol forward slash web what this does is it maps this
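Those additions, as dictated, come out roughly as the following continuation lines inside the existing RUN block (the `app` user and group are assumed to have been created earlier in the Dockerfile):

```dockerfile
# new continuation lines appended inside the existing RUN block,
# where each preceding line already ends with "&& \":
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media && \
    chown -R app:app /vol && \
    chmod -R 755 /vol
```

The matching dev-only compose mapping described just above is `- ./data/web:/vol/web` under the app service's volumes, so uploaded files show up in the project's data directory while developing.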
web volume that we just created in our docker file to the data slash web directory inside our project so we should see it appear here and the reason i do this for the development server is just so we can see the files being changed in the directory as we're running the code so i do this just for testing to make sure i know where the media files are being stored and that they're being stored in the correct place when we deploy to production we're not going to be doing this we're going to be creating a different docker compose file that is set up for handling these static files in a more efficient way now you want to make sure inside git ignore if it isn't already there you want to add data to the end so it just adds this data directory so add forward slash data and what this makes sure is that this data volume that we map here doesn't get added to our git project so if we upload test images we don't want it being added to our git repository we want it to be separate and excluded from the git repository because those don't belong in the source code project now we can go ahead and update settings.py to configure the locations that we just created for our static and media files if we open up settings.py scroll to the bottom and you see the static url here we're going to change this to forward slash static forward slash static and then we're going to add a new one media underscore url equals forward slash static forward slash media forward slash then we're going to add media underscore root equals forward slash vol forward slash web forward slash media and then static underscore root equals forward slash vol forward slash web forward slash static okay so i'm going to explain what these settings do just save the file first to make sure that the changes are saved and the first two are the url prefixes that are going to be used when the django app generates urls for the static and media files so static files are anything that's generated for static files such as an image or a javascript file or css and things like that any of those that we
use in our django templates will always be prefixed by forward slash static forward slash static and any of the media files that are uploaded by the user will be prefixed by forward slash static forward slash media and what this does is it sets up a url structure that we can then use to configure our proxy to catch all of these different urls and what that will do is it will allow us to capture those urls and forward them to the location where those files are and then send the rest of the requests to the django application so we're going to assume if the url starts with forward slash static it's going to be a static file that gets served from the nginx proxy otherwise it's going to be a url route that needs to be sent to the django application then we have these two other lines here which are media root and static root and what these are is it sets the root directories in the django app that we want to store these files so this is where these files are actually going to be stored on the file system it's nothing to do with the urls that get served this is where they get stored on the file system so media root says if we upload any media files to the django application store them in forward slash vol forward slash web forward slash media and when we run our collect static command which is a command that collects all of the static files that we need for our application it's going to place them in forward slash vol forward slash web forward slash static so we can take this location and we can map it to the proxy image which can then access the files and then serve them directly from the proxy without sending them to the app and i'm going to show you how you do that in a moment the catch is that django doesn't serve the media files by default in the development server so there's a small change we need to make to the application so that it serves the media files when we're running the django development server for development purposes so open up the file explorer here and we want to find app forward slash app forward
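For reference, the four settings described above come out as follows — paths as dictated, with the shared /static prefix on both URLs being deliberate so a single proxy rule can catch them later:

```python
# URL prefixes used when django generates links to files
STATIC_URL = '/static/static/'
MEDIA_URL = '/static/media/'

# on-disk locations: where uploads land and where collectstatic copies to
MEDIA_ROOT = '/vol/web/media'
STATIC_ROOT = '/vol/web/static'
```
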
slash urls.py then we're going to add some lines to the top here so i'm going to add from django.com.urls.static import static and then from django.conf import settings then we're going to add some logic here so we're going to add if settings.debug colon url patterns plus equals static settings dot media underscore url comma document underscore root equals settings dot media underscore root comma okay so you can save that file and what this does is it appends to our url pans the url mapping for the media files and it basically means that we can access the media files when we're running our development server for local development we put it in an if statement to say if settings.debug because we only want this to happen when we're running in debug mode we don't need it to happen when we're running in production because the nginx proxy is going to handle managing those urls we don't need to manage them in our django app now that we've set up our django app we can go ahead and test our local development server and ensure that we can actually upload image or upload files through the django admin we'll go ahead and open up the terminal and we need to first create a super user that we can use to connect to the django admin so again we're not going to go into too much detail about this but basically the django admin allows you to connect if you have a super user account which is kind of like a highly privileged account in the django database so we can do that by closing down our django development server type docker hyphen compose run dash dash rm app sh hyphen c python manage dot py create super user then hit enter and this should go ahead and create a new super user for us that we can use and we'll just give it a minute and let it run here you go so now it's asking for a username you can put any username i'm going to give it admin and the email address i'm going to do admin at example.com you can specify your own email address if you want but it's not going to be used 
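As a reference, the fragments of app/app/settings.py and app/app/urls.py described above might look like this (the exact paths and prefixes are the ones used in this tutorial's setup; the rest of both files is unchanged):

```python
# app/app/settings.py (fragment) -- URL prefixes and storage roots
STATIC_URL = '/static/static/'
MEDIA_URL = '/static/media/'
MEDIA_ROOT = '/vol/web/media'
STATIC_ROOT = '/vol/web/static'

# app/app/urls.py (fragment) -- serve media files in debug mode only
from django.conf.urls.static import static
from django.conf import settings

urlpatterns = [
    # ... the project's existing URL patterns go here ...
]

if settings.DEBUG:
    urlpatterns += static(
        settings.MEDIA_URL,
        document_root=settings.MEDIA_ROOT,
    )
```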
for this demo. Then a password: I'm going to put a password in here and re-enter it; obviously make sure you remember it. Then run docker-compose up and wait for the server to start. Once it has started, open up your browser. I'm going to download this image, because it's what I'll use to test with; you need an image, or really any file, to test with, so I'll save it to the desktop as sample image. Now in the browser go to 127.0.0.1:8000, which takes you to the landing page, then go to /admin and hit enter to reach the admin login. We'll use the same username and password we created for the superuser: admin, and then the super secure secret password. Now you can see inside the page we have the core Samples section; this is the sample model we created to test with. Click on it, then click Add to create a new instance of the sample model. The attachment field should give you the option to choose a file, so click Choose File and pick any file, ideally an image if possible, but any file should be supported. I'll go to the desktop and upload my sample image, then click Save, and you can see it has created a new object successfully. If you click on the sample object, you can see it says currently sample image.png. If I open that in a new tab, with a middle click here, you should see the image loads correctly, and the URL for the image starts with /static/media, which matches up with the MEDIA_URL in our settings.py file; any media file starts with /static/media. So that appears to be working correctly. If you open up the project, because we mapped the volume, if you look at the data directory you should see the image exists there. This is what I was talking about when we set up our volume mapping in the docker-compose file: we map the volume to our project directory just so we can check it's working correctly. You can see the image has been uploaded, and this will be the case for anything we upload; any sample model we create with an image should be placed in this directory. Now that this is working on our local development server, let's configure our project with the deployment configuration. We'll open up the code, and actually I'm going to commit these changes to git right now: git add . and git commit -am "added django project". That gives us a nice clean slate to work from, so we can see the green highlights for new files and so on. Now close these files out, and create a new directory in the root of the project called proxy; this will store the Docker configuration for the reverse proxy we're going to create with nginx. As I mentioned before, the reverse proxy will handle all the static and media file requests and forward the rest of the requests to Django. In order to set this up, we need a configuration for the proxy, and we'll start by adding what's called the uwsgi_params file. This is a predefined list of parameters, and I'll link to it in the description of this video; it comes from the official documentation for uWSGI, the application we'll use to run our Django app. All it is is a list that maps different headers onto the request that's sent to the WSGI server, and this is useful when you're forwarding requests, because if you ever need to access
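You should copy the file verbatim from the official uWSGI docs rather than retyping it; for orientation, the standard uwsgi_params file looks roughly like this:

```nginx
# proxy/uwsgi_params -- standard parameter list from the uWSGI docs;
# maps nginx request variables onto headers forwarded to the WSGI server
uwsgi_param QUERY_STRING $query_string;
uwsgi_param REQUEST_METHOD $request_method;
uwsgi_param CONTENT_TYPE $content_type;
uwsgi_param CONTENT_LENGTH $content_length;
uwsgi_param REQUEST_URI $request_uri;
uwsgi_param PATH_INFO $document_uri;
uwsgi_param DOCUMENT_ROOT $document_root;
uwsgi_param SERVER_PROTOCOL $server_protocol;
uwsgi_param REMOTE_ADDR $remote_addr;
uwsgi_param REMOTE_PORT $remote_port;
uwsgi_param SERVER_ADDR $server_addr;
uwsgi_param SERVER_PORT $server_port;
uwsgi_param SERVER_NAME $server_name;
```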
any of the request headers in Django, you want the headers from the actual request the client made to the proxy, not the request the proxy made to the app. For example, if you tried to get the remote address, which identifies the computer connecting to your Django app, without this file you'd get the address of the proxy and not the address of the user. So we define this list so those header values get forwarded on to the actual WSGI service. Copy the contents, go back to Visual Studio Code or whatever editor you're using, and inside the proxy directory add the file as uwsgi_params, paste the contents in, and save it. Then add a new file called default.conf.tpl inside proxy. This is the nginx configuration template we're going to set up so nginx knows how to handle our requests. Type server and then the braces, then listen ${LISTEN_PORT}. This syntax lets us pull in values from environment variables; we'll run a little script that does the substitution when we start our proxy, and I'll show you that in a minute, but basically this says listen on whatever port we specify, which will be something like 8000 or 8080. Then we have location /static, and we set that to alias /vol/static. What we do here is set up a location block to catch any URL that starts with /static and serve it from /vol/static. When we run our proxy we can map this volume to the same volume on our app container, so all of the static and media files are shared and accessible between the proxy and the app, and any request that starts with /static is served from this directory. Inside it we'll have a static subdirectory and a media subdirectory; the rest of the URL gets trimmed off and appended to the alias. That's why, when we updated our settings, which I'll just open up right now, we changed the base of both URLs to /static: it's /static/media and /static/static. This means that inside our nginx configuration we can define one location block that catches all of the media and static files, and because static or media appears at the end of the URL, nginx can retrieve the file from the correct subdirectory of /vol/static. Now we need another location block: just location /, which catches everything that hasn't been caught by the first block. With nginx, location blocks are checked in order: when a request comes in, if it matches the first block it's served from the /vol/static alias; if it doesn't match the static URL, it's passed on to the next block, which catches everything else. This is the block we want to forward to our uWSGI service, and we do that by typing uwsgi_pass and then the environment variable syntax again, ${APP_HOST}:${APP_PORT}, and don't forget the semicolons at the end, otherwise it won't work. Then include /etc/nginx/uwsgi_params, and client_max_body_size 10M. I like to line these up so they're all easy to read. Save the file, and I'll explain what these lines do. The uwsgi_pass line passes the request to the uWSGI service, connecting to the app host and app port we're going to specify in the configuration. The host is the hostname where the app is running, so the
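Putting those pieces together, the template dictated above looks like this (${...} placeholders are filled in from environment variables when the container starts):

```nginx
# proxy/default.conf.tpl -- nginx configuration template
server {
    listen ${LISTEN_PORT};

    # serve static and media files directly from the shared volume
    location /static {
        alias /vol/static;
    }

    # forward everything else to the uWSGI service
    location / {
        uwsgi_pass              ${APP_HOST}:${APP_PORT};
        include                 /etc/nginx/uwsgi_params;
        client_max_body_size    10M;
    }
}
```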
container that's running the app, and the port is the port number the service is running on. Then we include the uwsgi_params file we created previously. Don't worry about the path here, because when we create our Dockerfile we'll be copying that file to this location, which is where this path comes from. Next we have client_max_body_size, set to 10 megabytes, which sets the maximum size of requests sent to the proxy to 10 megabytes. Depending on the size of the files your server will be receiving from clients, you might want to tweak this: if you need to upload files bigger than 10 megabytes, you'll need to increase it, because it puts a cap on the maximum file size that can be uploaded to the nginx proxy and forwarded to the application. Next we create the script that will be used to run our proxy server. Inside proxy, create a new file called run.sh, and start with what's called a shebang: the hash symbol, an exclamation point, and /bin/sh. This tells whatever runs the script that we just want a standard shell script. Don't try to do anything clever like use bash here, because it won't work: the Alpine image we're using is a very stripped-back, lightweight image that doesn't even contain bash, so #!/bin/bash, which lots of people seem to like to use, will fail. It needs to be #!/bin/sh. If you're using a base image that does contain bash, you're welcome to use bash if you want. Next add set -e, then envsubst < /etc/nginx/default.conf.tpl > /etc/nginx/conf.d/default.conf. What this line does is run a little command called envsubst, for environment substitute. It takes a file and replaces that ${...} syntax, where whatever's inside the braces is a variable name, with the value of the matching environment variable. So if we have an environment variable called LISTEN_PORT, it replaces ${LISTEN_PORT} with whatever that variable's value is. It's a handy way to pull in configuration values at runtime, and it comes down to the twelve-factor app model: one of its principles is that there should be one single place where your application is configured, and lots of people make that single place the environment variables, because they're easy to set when running instances of the app. So this line accepts the template and outputs the actual file, the same configuration but with the placeholders populated with real values. Finally, we start the nginx server with nginx -g 'daemon off;'. This starts the nginx service, and daemon off means: don't run it as a background daemon (or demon, however you pronounce that word), run it in the foreground of the Docker container. That's recommended with Docker, because each container should ideally run one application at a time, in the foreground, so all of the logs the application outputs get sent straight to the Docker logs, where you can view them and use them to debug issues. Okay, save the
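The finished script is only three lines; a sketch of proxy/run.sh as dictated above:

```shell
#!/bin/sh
# proxy/run.sh -- entrypoint script for the proxy container

set -e  # exit immediately if any command fails

# render the nginx config template, replacing ${...} placeholders
# with the values of the matching environment variables
envsubst < /etc/nginx/default.conf.tpl > /etc/nginx/conf.d/default.conf

# run nginx in the foreground so its logs go to the docker logs
nginx -g 'daemon off;'
```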
run.sh file. Now we need to create another Dockerfile inside our proxy; this is different from the Dockerfile we created for the app, it's just the Dockerfile for our proxy. Create a new file called Dockerfile, and we're going to base the image on nginxinc/nginx-unprivileged:1-alpine. This uses the nginx-unprivileged image from the nginxinc organisation on Docker Hub to build our image. The reason I use this one, and not the standard nginx image that would be a lot easier to type, is that it runs as an unprivileged user. If you remember when we were creating the app Dockerfile, I explained that you don't really want to run your main application in Docker as the root user, because root is the most powerful user, with permission to do anything in the container, so if your application gets compromised, the attacker can access anything root can access. However, if you use an unprivileged user, a user without root privileges, then whoever compromises your application can only access what that user can access, which is generally speaking a lot less. It's a bit of damage control if your application ever does get hacked. I found that nginxinc actually publishes a specific image just for this, called nginx-unprivileged. Next add LABEL maintainer="londonappdeveloper.com"; of course, feel free to put your own website or email address if you want. Then COPY ./default.conf.tpl /etc/nginx/default.conf.tpl, then COPY ./uwsgi_params /etc/nginx/uwsgi_params, then copy the run.sh script to /run.sh. What this does is copy the files we created into the Docker image at the locations specified; these are the files used inside our run script. Now we're going to define some default environment variables. For our template to actually work, it needs values set for all of those environment variables, and to save time later we can set defaults in the Dockerfile. With default values set they become optional: if you need to customise them, you can specify them when you run your container, but by default they're already set. So we type ENV LISTEN_PORT=8000, then ENV APP_HOST=app, then ENV APP_PORT=9000. We're going to listen on port 8000, which is the port the nginx service listens on; the hostname app is the name of the service that will run the container with our Django application; and the app port is 9000, so when we configure our Django app to run under uWSGI, we'll set it up to use 9000. Because these are just environment variables, we can always customise them when we run the application. Now we need to switch to the root user, because we need to perform some actions on the image that require root access, so type USER root. Next we run some commands using RUN: mkdir -p /vol/static, and we use this double-ampersand-and-backslash syntax; it's a bit messy, I know, but it's worth it to save those extra layers in the Docker image. Then chmod 755 /vol/static (make sure you spell static right, otherwise it won't work), then touch /etc/nginx/conf.d/default.conf, then chown nginx:nginx /etc/nginx/conf.d/default.conf, then chmod +x /run.sh. I'll explain what this does. First we create the static directory; this is the static file directory we're going to map as a volume. Then we use chmod to change the permissions of that directory so the owner can read, write, and make changes to it. Then we touch this default.conf file. Because we'll be running as the nginx user, the unprivileged user that runs the nginx application, by default it won't have the permissions required to create this file; when the envsubst command in our run script runs, it would say there are no permissions to access /etc/nginx/conf.d/default.conf. So we use touch, which creates an empty file so the file exists, and then change the ownership of that empty file to the nginx user, which means that user can overwrite the file's contents and the envsubst line can run successfully. It's just a little quirk we need in order to follow the best practice of not running our application as root. Finally, chmod +x gives executable permissions to our run script, which means we can execute it as if it were a binary on the machine. Next we add VOLUME /vol/static. This is kind of optional, you don't need to add it, but it's useful if you decide at some point to deploy your application to a different service like ECS Fargate. We actually have a course that teaches that in depth: if you're interested in learning how to do a full production-grade deployment using AWS ECS, please click the link in the description to take our DevOps course. It's around a 14-hour course, which gives way more information than this video, which is only an hour or so long. Next we add USER nginx, which switches back from the root user to the nginx user. Docker uses whichever user was last set when building the image, so without this line it would still be root, and we'd have gone through all that effort for nothing, because we'd be running the application as root anyway, which is the security risk I explained earlier. Finally, add CMD ["/run.sh"]. This says the default command for new containers of this image is to run the /run.sh script; it means we don't need to specify it in docker-compose, we simply run the image and this is the default script that runs. We can override it if necessary, but we won't need to, because we're just running this script to start the application. Now that we've set up the reverse proxy for our application, we can configure our Django app to run as a uWSGI service so we can run it in production. There are a few changes we need to make. One is to create a script, just like the run.sh script for our proxy; we need the same kind of script for our Django app. I'm just going to close these files, and in the root of the project, not in the proxy directory or the app directory, create a new folder called scripts, and inside it add run.sh. It's the same name as the proxy script, just located at scripts/run.sh. Just as we did for the proxy run script, start with the shebang, the hash or gate or pound symbol, whatever you call it, then !/bin/sh, then set -e. Then we run the commands necessary to start the server, and we're going to be doing
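Before moving on, here's the complete proxy Dockerfile as described above, pulled together in one place:

```dockerfile
# proxy/Dockerfile
FROM nginxinc/nginx-unprivileged:1-alpine
LABEL maintainer="londonappdeveloper.com"

COPY ./default.conf.tpl /etc/nginx/default.conf.tpl
COPY ./uwsgi_params /etc/nginx/uwsgi_params
COPY ./run.sh /run.sh

# defaults for the template variables; override at runtime if needed
ENV LISTEN_PORT=8000
ENV APP_HOST=app
ENV APP_PORT=9000

USER root

RUN mkdir -p /vol/static && \
    chmod 755 /vol/static && \
    touch /etc/nginx/conf.d/default.conf && \
    chown nginx:nginx /etc/nginx/conf.d/default.conf && \
    chmod +x /run.sh

VOLUME /vol/static

# switch back so the container does not run as root
USER nginx

CMD ["/run.sh"]
```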
something a bit different from the commands we use inside our docker-compose file for the development server; we're going to run it in a kind of production mode, using a uWSGI server instead of the Django development server. But first we need to run the same commands as before, starting with wait_for_db, so python manage.py wait_for_db, and because we're writing a script, we don't need those annoying ampersands and backslashes. Then python manage.py collectstatic --noinput. What this does is collect all of the static files added for each app in the Django project. The typical convention is to create a folder called static in each of your apps, so you might have, say, five or six apps in your Django project, each with its own static directory, and you want to collect all of those directories and put them in one place. We want Django to do that for us, and we can with the collectstatic command, which comes built in to Django. We pass --noinput because otherwise it might ask "are you sure you want to do this?", and since this script runs as part of the deployment, we won't actually be there to say yes, so --noinput makes it just go ahead without asking for any input. The location it stores the static files in is the one we defined in settings.py: STATIC_ROOT (not MEDIA_ROOT) is the destination for the static file collection, so when you run collectstatic it gathers the static files from every app you have installed and places them all inside that directory, which can then be served directly by the proxy. Next, python manage.py migrate, which runs any migrations that have been added to the project; if no migrations have been added, it just checks and does nothing, but if there are any, it runs them to make sure the database is updated to the latest version. Next is the command to run the uWSGI service: uwsgi --socket :9000 --workers 4 --master --enable-threads --module app.wsgi. So what does this do? We run the uwsgi command, which is the command for running uWSGI. Then --socket :9000 says run it on a socket on port 9000; the socket is the type of connection nginx makes to uWSGI so it can actually serve the application, so this is how nginx connects to our application on port 9000. Then we specify workers: a uWSGI server can be split into multiple workers, which are basically concurrent workers that accept requests, and you can run multiple workers in any uWSGI server. Depending on the type of application and how long your requests take to run, you might want to tweak this; in certain deployment setups you might want one worker per container and scale by running more containers, but more likely you'll want multiple workers per container, and you just want to make sure you don't have so many that the container crashes. It all depends how much resource you give your containers; I think four is a reasonable amount for this type of deployment, so I'm going to leave it at four. Then --master: similar to daemon off in the nginx script, this makes sure the command runs in the foreground, so it's the foreground
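Assembled, the app's scripts/run.sh looks like this (wait_for_db is the custom management command built earlier in this series):

```shell
#!/bin/sh
# scripts/run.sh -- entrypoint for the app container in production

set -e

python manage.py wait_for_db                # block until the database is ready
python manage.py collectstatic --noinput    # gather static files into STATIC_ROOT
python manage.py migrate                    # apply any pending migrations

# serve the app with uWSGI on socket :9000 for nginx to connect to
uwsgi --socket :9000 --workers 4 --master --enable-threads --module app.wsgi
```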
command that is running in Docker, so all of the logs are output to the Docker logs and we can view them there. Then we have --enable-threads, which enables multi-threading in the application, and then --module app.wsgi, which says run the WSGI module that Django generated when it created the project. Remember we're working from the app directory, so we specify app.wsgi; this file is automatically generated by Django and allows us to run our Django app as a WSGI service. You don't specify the .py, because it's a Python module: you always trim off the .py and just give the name of the file. Now that we've created this script, save it, then open requirements.txt and add uWSGI as a requirement: uwsgi>=2.0.19.1,<2.1. You can find this on the Python package index; it's the uWSGI application used for running Python applications in production. Now we need to make some minor modifications to our Dockerfile, and again, this is the app's Dockerfile, the one in the root directory, not the one inside proxy. We need to customise it to copy in our scripts, add another dependency required for installing uWSGI, and add our scripts directory to the path. Start by copying the scripts in: underneath the existing COPY lines, add COPY ./scripts /scripts. Then inside the big RUN block, find the line with the temporary build dependencies, the ones needed only for installing the Python packages, not for the long-term running of the application; find musl-dev and add linux-headers after it. Through much digging and trial and error, I found that installing uWSGI with pip requires these Linux headers for it to be successful, so we add them to the temporary dependencies. Then scroll down, and at the end add an ampersand-and-backslash and then chmod -R +x /scripts. This makes any script we copy in via our scripts directory executable; the -R is for recursive, so everything inside the directory is made executable. The reason I do it like this is that, as you work on your application, the chances are you'll want to add many different scripts, so with a scripts directory you can just drop them in and they're automatically added to your Docker image; you don't need to manually copy each script in, and it can hold other helper scripts your application needs. Now we add the scripts to the path by modifying the ENV PATH line. It's important that the addition goes at the beginning, otherwise it would come after the rest of the path; it might still work, but to be consistent it's best to put it at the beginning: /scripts followed by a colon. Then /scripts is on the path, and our /py/bin directory is also on the path; we want both of these at the beginning, so when we need to run a script, we don't need to specify its full path, we can just call the name of the script and it will work. At the end of the file, add CMD ["run.sh"]. This runs the script as the default command for containers made from this image. Okay, save the Dockerfile. Next we need to set up a
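For orientation, the changes to the app's Dockerfile might look roughly like this; only the changed lines are shown, and the exact package list and PATH value depend on how the Dockerfile was built earlier in the series:

```dockerfile
# Dockerfile (app) -- sketch of the modified lines only
COPY ./scripts /scripts

# inside the existing RUN block, linux-headers joins the temporary
# build dependencies (alongside musl-dev), and the block ends with:
#     ... && \
#     chmod -R +x /scripts

# /scripts goes at the front of the existing PATH line
ENV PATH="/scripts:/py/bin:$PATH"

CMD ["run.sh"]
```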
docker compose configuration for the deployment so the way that i like to work when i'm deploying applications like this is i'll have a docker compose that is the default docker compose specifically for development purposes but for deployment i'm going to have a specific deployment docker compose that has some slightly different configurations that make it ideal for a good production deployment so we can do that by creating a new file inside this directory i'm going to call it docker hyphen compose hyphen deploy dot yml and some people might like to change this name a bit and maybe make it docker compose prod dot yml or docker compose dev for the development environment and prod for the production environment and maybe staging for the staging environment however where you want to do it you can create multiple different docker compose files depending on the type of deployment you want to create but i like to just create a simple docker compose deploy that is used for all deployments so the first thing we'll add is the version colon 3.9 so we use version 3.9 syntax then services and then again i'm just going to change my spaces here to two spaces this is totally optional um with yammer i just like to use two space instead of four because i find it's a bit easier to read but if you like four then stick with four then i'm going to add app colon build colon and context colon dot and this is the same as our other docker compose file so basically just use the current directory as the build context but then we're going to add restart colon always so this is recommended when you are using docker compose for deployment and all it does is as you might expect is it means that the application will always automatically restart if it crashes so our app crashes the service will automatically restart without us having to log onto the server and do it manually and this obviously helps with the stability and the reliability of the deployed application next we have volumes colon and 
this is going to be a bit different from the volumes we created before we're not going to create a volume to our directory because we don't need real-time code updating inside our container what we want to do is build the container each time so this makes it easier to kind of roll back to the previous version if we need to because we can find the previous version of the code rebuild it and then run the container again but what we are going to do is we are going to specify a volume but we're going to use a named volume for the static files named volumes i'm going to show you in a minute but you basically define volumes in docker compose with a specific name and then instead of mapping it to a specific file for you it will handle the mapping of that file behind the scenes and it will store it somewhere on the system in an efficient way so it's a more efficient way of using volumes with docker compose so we'll just type static hyphen data colon forward slash vol web and then environment colon and then dash db underscore host equals db then dash db underscore name and now we're going to do is use that syntax again for pulling in environment variables so the one that we use in our script with the end substring we're going to also use in our docker file it's also supported by docker compose and this is how we can create a configuration file which is kept out of git source control that we can use to configure the application in production so we do it by doing the dollar sign and then these squiggly brackets here braces db underscore name dash db underscore user equals db underscore user and then db underscore pass equals db underscore pass and then hyphen secret underscore key equals secret underscore key and then hyphen allowed underscore hosts equals allowed underscore hosts now we're going to add depends underscore on colon dash db so these are all the configuration items that we're going to have for our app in production and we're going to retrieve them from an 
environment variables file that we're going to create in a moment and we're going to depend on the db service which we're going to create right now so below the app service now db colon and we're going to define the postgres service again so image colon postgres colon 13-alpine then restart always and volumes colon dash postgres-data colon /var/lib/postgresql/data then environment colon dash postgres_db equals and then the db_name the other benefit to this way of managing them is that we can define the db name once in a file and it's used for both postgres and the app which means we only have to define the value in one place we'll do dash postgres_user equals db_user and then dash postgres_password equals db_pass so this defines a db service and we're using the postgres 13 alpine image we're setting restart always just like we did with the app and we set a volume here and it's another named volume postgres-data and we're mapping it to this path here so this is different from the one we had for our development server the reason we do this is because on our production site if we close the containers down or we delete the containers we don't want to lose the data from our database we want it to be persisted in a volume and the way that you can do that according to the documentation for the postgres docker image is you map this path on the container to a named volume so what this means is that this volume data will always be stored on the server that we're running and it's going to map to the location on the postgres container that holds the data so basically it means that we can have a consistent database even if we destroy and recreate our service so that is good because you don't want to lose all of your user data just because you typed docker compose down then we specify the environment variables again and we take them from the environment variables file using the dollar sign and braces syntax again
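putting the pieces described so far together, the top of docker-compose-deploy.yml might look something like this — a sketch, where the variable names and the /vol/web path assume the project set up earlier in the video:

```yaml
version: "3.9"

services:
  app:
    build:
      context: .
    restart: always            # restart the app automatically if it crashes
    volumes:
      - static-data:/vol/web   # named volume for static and media files
    environment:
      # these values are substituted from the .env file on the server
      - DB_HOST=db
      - DB_NAME=${DB_NAME}
      - DB_USER=${DB_USER}
      - DB_PASS=${DB_PASS}
      - SECRET_KEY=${SECRET_KEY}
      - ALLOWED_HOSTS=${ALLOWED_HOSTS}
    depends_on:
      - db

  db:
    image: postgres:13-alpine
    restart: always
    volumes:
      - postgres-data:/var/lib/postgresql/data   # persist database data across restarts
    environment:
      # the same values configure postgres itself, so each is defined only once
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
```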
now we need to define our proxy service so below the db service we're going to type proxy colon and then build colon and then context and this time instead of the context being just dot i'm going to set it to ./proxy which sets the context to our proxy directory here which is used for building the proxy then we're going to do restart colon always and then depends_on colon we're going to depend on app because the proxy needs the app and the app needs the db and it all works in a nice symbiotic relationship then we're going to do ports colon and we're going to map port 80 to 8000 then we're going to do volumes colon dash static-data colon /vol/static so what we do here is we build from the proxy context so that's the directory where the dockerfile is for the proxy image restart always because we want it to restart if it crashes depends on app because the proxy needs to be able to access the app via the network so that it can forward the requests to it then we forward the port to port 80 so 80 is the default http port in most cases you want to forward http applications to port 80 and we're going to forward it to port 8000 in the container so we can still run our nginx server on port 8000 but accept requests on port 80 when they come into the proxy then we map the volume again so this name here static-data should match the name up here this defines a named volume and what this does is it says we're going to have a shared volume that both the app and the proxy can access and this is how the proxy is able to serve the static files without bothering the application python code so without sending the request to the django application it can serve directly from the volume which is shared between the app and the proxy there's one last thing we need to do to this file and that is define the named volumes that we created
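the proxy service and the named volumes section described above might look like this — again a sketch, where the ./proxy directory and the /vol/static path assume the proxy image built earlier in the video:

```yaml
  proxy:
    build:
      context: ./proxy           # build from the dockerfile in the proxy directory
    restart: always
    depends_on:
      - app
    ports:
      - 80:8000                  # accept http on port 80, forward to nginx on 8000
    volumes:
      - static-data:/vol/static  # shared with the app so nginx can serve static files

volumes:
  postgres-data:
  static-data:
```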
so we type volumes colon and you could add this section first if you wanted but basically postgres-data colon and then below that static-data colon and they should both be at the same indentation level and this just defines the named volumes and it allows you to configure them in one place if you want to configure them so the volume here static-data should match the name static-data which should be the same for the app and the proxy and the name postgres-data should match the named volume passed to the db service here okay so now we can save that and we can deal with the configuration file so the way that configuration works is when we deploy the application to the server we're going to create a file called .env if you use the python gitignore file you should see that the env file is excluded from it so if we open the gitignore here inside this file you should see .env is excluded what this is is an environment variables file so it allows us to define a list of configuration values that can be pulled into these variables when we run our application this means we can keep the configuration outside of the git repository so things like the secrets and the passwords are not committed to git they're only stored on the server and wherever else you safely and securely back them up to the common practice is to create a sample file that is used when new people want to deploy the application so this way you know the list of all of the different variables that need to be set when you deploy the application to the server so you can do that by creating a new file called .env.sample and this should be added to git but with some dummy values so with some testing values that aren't actually real passwords so first we need to define db_name equals dbname or whatever you want as a test db_user equals rootuser and then db_pass equals changeme and then secret_key equals changeme and allowed_hosts
equals 127.0.0.1 now these values could be whatever you want as i mentioned they're just a template so we're not going to use this file directly we're just going to use it to copy a new file so we can then change all of these values on the server that we deploy to so save the file and we can actually test our deployment locally now i know we're running a lot of different docker things locally because we have our development server and stuff but i like to test the actual deployment process locally before i actually push it to the server and this means that i can debug and fix any issues on my local machine before i actually push them to the server and it just helps save a lot of time and this is the beauty of using docker it's a consistent environment everywhere you want to deploy your application all of the configuration and everything is stored inside the project code and all you need to do is clone the project to wherever you want to deploy the application to and have docker installed and you can go ahead and run the application wherever you are so we're going to do that now and we're going to start by creating a new file called .env you should see that it is grayed out because it's excluded from git we'll just paste the values in ah that's the wrong values let's go copy the values here and paste them into the .env file save the file and i'm just going to leave it as the default values because i'm just running it locally it doesn't matter i'm going to destroy the environment after anyway so we only need to modify these when we are actually running on a real server so let's go ahead and test that now we're going to open up the terminal or the git bash or the command prompt or powershell whatever you use on windows and we're going to run docker-compose -f and then we're going to specify the name of the file that we want to use because we're not using the default docker compose file anymore we need to actually specify the name of it which is docker-compose-deploy.yml
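for reference, the .env.sample file we just created contains only dummy placeholder values along these lines:

```
DB_NAME=dbname
DB_USER=rootuser
DB_PASS=changeme
SECRET_KEY=changeme
ALLOWED_HOSTS=127.0.0.1
```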
then we're going to do down and then --volumes and the reason i do this is because we might have some conflicting volumes from the docker compose file that we created earlier and i want to make sure we clear all of that out so we don't run into any issues and this won't normally be an issue because it's very rare that you would actually be testing the deployment docker files on your local machine usually this is maybe something you would do once before you actually deploy the application but we're going to do this now just to make sure there are no conflicts so doing docker compose down with --volumes just makes sure it clears everything including the volumes so if you omit --volumes then the volumes are maintained which is what you want in most cases if you do --volumes it is going to remove those volumes that you created to store the database and the static files which you probably don't want to do in production because you're going to wipe your database but for our local machine just for testing this is what we're going to do and then we're going to run the same command so docker-compose -f docker-compose-deploy.yml and then we're going to type build so it's going to go ahead and build our docker images this will be both our proxy image and also our app image with the latest changes now there might be some errors in the code we're going to find out now whether there were any typos or issues in the docker files hopefully there are not and then the images should build successfully if there is an issue then we should see an error on the screen or something that says that something failed to install so we'll just wait for that to finish and then we'll continue okay so the images were built successfully now we can move on and run the next command which is the up command so docker-compose -f docker-compose-deploy.yml and then instead of build we're going to run up
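the three local test commands in full look like this (assuming the docker-compose-deploy.yml filename used above):

```shell
# clear any volumes left over from the development compose file
docker-compose -f docker-compose-deploy.yml down --volumes

# build the proxy and app images with the latest changes
docker-compose -f docker-compose-deploy.yml build

# start the services using the deployment configuration
docker-compose -f docker-compose-deploy.yml up
```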
what this is going to do is it's actually going to start our docker services in deployment mode using the deployment yaml file so this is kind of a simulation of what's going to be running on the server that we actually deploy our application to you can see that it started by running the application it applied the migrations you can see them being applied here and then it spawned the new wsgi workers so if you scroll up you can also see some output from the db and we haven't got any output from the proxy yet but that's usually good because if there are no errors then it doesn't output anything until you access it let's go ahead and open up the browser and let's navigate to 127.0.0.1 now you're not going to use port 8000 here because we mapped the app to port 80 which is the default port so if you hit enter you should see a not found because we haven't actually mapped any urls and this is how it should look if debug mode is disabled so if debug is false then you're going to see this page instead of the standard 404 not found page that you see when running the django development server with debug mode enabled now if you got an error at this point when you ran docker compose up saying that the port is not available this is because port 80 might be in use by a different application on your machine so if you can locate that application and turn it off that's the best way otherwise what you can do is go to your visual studio code and just temporarily change this to a different port here from 80 to something like 8001 and that will allow you to test it locally but then just remember to change it back before we continue and deploy to another server i just say that because sometimes applications can occupy port 80 on your machine and you might not easily be able to locate them unless you're familiar with the network tools on your machine that allow you to find out where the application is running okay
so now we're going to continue and we're going to do a test to ensure that we can upload images in production mode so i'm going to create a new tab inside my terminal or create a new instance of your terminal or your git bash or your powershell window then type docker-compose -f and we're going to use the same deploy file then run --rm app sh -c python manage.py createsuperuser because we wiped the database and we're using a new database on a different volume we need to create a new superuser in order to test the django admin so we're going to do that with createsuperuser and then we are going to just call this one admin the email address admin at example.com and then a super secure password and then that password again and now it's been created we can open up the browser we can add /admin to the end here and then we log in with the details we just provided you should see samples here and we're going to create a new sample so add sample i'm going to upload the same file i did earlier click save and then open up the sample object click on it to test and you should see that it works so we're serving these static files correctly now you will notice that the file does not get added to this directory this one is from before but now the file is not being added to this directory if we actually delete this directory so we delete the file and then refresh it's still there and that's because instead of the directory that we mapped with our original docker compose file we're actually mapping it to a volume which is kind of hidden on the system and is not inside our directory and that's how it's going to run in the production environment and you can clear that by just doing docker compose then specify the file for deploy then down and then if you add --volumes it will clear those volumes as well so i'm happy with the deployment it all seems to be working as expected now let's go ahead and actually deploy this to an aws server
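the local test and teardown commands in full:

```shell
# create a superuser in a one-off app container to test the django admin
docker-compose -f docker-compose-deploy.yml run --rm app \
  sh -c "python manage.py createsuperuser"

# when finished testing locally, stop everything and clear the test volumes
docker-compose -f docker-compose-deploy.yml down --volumes
```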
the first thing you're going to want to do is head over to the aws free tier page and sign up for an account if you don't already have an aws account so i'm going to assume from now on that you do have an account if you don't have an account please go over to the aws free tier site and sign up for one of the free tier accounts which is kind of like a 12 month free trial that gives you access to some of the aws resources for free once you have that i recommend that you set up an iam user and log in to the console using that iam user i'm not going to cover that in this tutorial because it's quite a long-winded process and if you want me to create another tutorial on how to do that please leave a comment in the comments below and i will do that so let's go ahead and head over to the console so it's actually console.aws.amazon.com once you sign up to aws you should have all of this information given to you via the emails that you registered with once you're logged into the aws console you should be able to choose services and ec2 ec2 is a service that allows you to create virtual machines that you can use to run code like the one we're going to be deploying in this project on the left hand side you should be able to see key pairs under the network and security option so if you click that then we're going to click on actions import key pair and i'm going to give it a name which is the name of the user and the machine that my key is from so demo mbp which means demo macbook pro and this is the ssh key that we're going to use to authenticate with the server that we create now we're not going to be covering the details of ssh authentication in this video i assume most of you will probably be familiar with it already if not then there's a great tutorial on github that explains how to use ssh authentication that you can use to learn it and then you can just come back to this video if you do want me to create a
specific video about that then please let me know in the comments what we need to do here is paste the contents of our public key so i'm going to retrieve that by opening up my terminal and doing cat and then the tilde for the home directory then /.ssh/id_rsa.pub so this is the public key that we can share with the internet to allow us to connect to the server with our private key we should already have these generated on our machine if not as i said there are lots of guides on the internet that show you how to create ssh keys so we're going to paste that in here i'm going to do import key pair and this is going to import the key pair into our aws account so this will put the public key in our aws account so we can use it to create virtual machines that we can then connect to with ssh now we're actually going to create a virtual machine so we'll head over to the ec2 dashboard and we're going to click on launch instance and launch instance and we're taken to the page where we can choose an ami so an ami is basically an operating system image that our virtual machine is going to be based on i like to use the amazon linux 2 ami because it is optimized to run on aws ec2 servers it's also eligible for the free tier so we're going to choose that one by clicking select and then we're going to choose the instance type now this is where you can get charged a lot of money for creating different instance types so you can see here this t2 micro is the one that's eligible for the free tier but it's probably not going to be powerful enough if you have a real application with lots of users when i say lots maybe if you have 100 users a day or so you might be okay with the t2 micro depending on what your application does so if it's very process intensive then what you basically need to know is that the more that your application does and the more users that you have the larger the instance you're going to need but for this tutorial we're just going to use t2 micro
because it's a free instance generally the larger the instance the more you're going to be charged per month so it's important to look into the costs and figure out the cost on the aws cost calculator before you choose an instance because some of these can cost quite a lot of money like hundreds or thousands per month so we're going to use the free tier eligible one and i'm going to click on configure instance details on this page we can just leave everything as default and we can click add storage now this allows you to increase the amount of storage assigned to the virtual machine just like with the instance size you are charged more for the more storage that you use so i believe if you leave it at eight gigabytes then you won't be charged in the free tier although that might not be correct so please verify that before committing to this eight gigabytes is what i'm going to use because it's the default value that is set up here but if you have an application that's going to need more data then you might need to increase this to something higher like 20 or 100 gigabytes because eight gigabytes can be used up quite quickly especially if people are uploading files to your application i'm going to click on next add tags and then next configure security group and this allows us to configure access to the machine so you can see there's already a rule here that allows access on port 22.
what this does is it allows ssh access to the machine so this is so we can connect to it and administer the machine in order to install and run our application i'm going to click add rule and i'm going to add a new rule to allow http that's on port 80 and then leave everything else as it is and click review and launch and then we are going to scroll down here and click on launch it's going to ask us which key pair we want to use so make sure you choose the key pair we added earlier in the drop down here so this is important because this is the only way that you can connect to the machine once it's been created if you get this wrong or you don't specify the right key then you won't be able to connect to the machine and you're going to have to destroy the machine and create a new one with the correct key pair so once you've selected that you need to click this box to acknowledge that you have access to that key and then you can click launch instance this is going to go ahead and create a new instance in aws so you can see it says your instances are now launching if you click view instances it will take you to the page with a list of instances and you can see our instance is still pending so we're going to wait a couple of minutes until that instance is started and then we're going to continue once your instance is running you should see the instance state running here if you click on the checkbox here and then you drag this little panel up here you can see all the details for the instance that's running so the instance is like a real server but it's a virtual server that's running on aws so it has a public ip address that you can use to connect to it and it also has a dns address that you can use to connect to the instance we're going to copy the dns address and then we're going to open up our terminal or the git bash or putty if you're on windows you might want to use putty to connect to it and we're going to connect to the server by typing ssh ec2-user which is
the default user added to the amazon linux 2 images then the at symbol and then we're going to paste the hostname so that is the ipv4 dns address here then we're going to hit enter and it might ask you to confirm the fingerprint and you can type yes it only asks that the first time you connect and now we're connected to the server so we can actually perform actions on the server to set up the dependencies that we need to deploy our application and the dependencies that we need are git and docker basically because once you have those two you can run the application so we're going to start by installing git we'll type sudo yum install git -y the -y just says if it asks any questions automatically say yes so you can hit enter and it will go ahead and download git and install it on the machine and we're going to use this to deploy our code from github to the machine then we're going to type sudo amazon-linux-extras install docker -y so this is going to add docker to the machine now that docker is installed we need to enable it so that it starts when the machine starts so we can do that by typing sudo systemctl enable docker.service this enables the docker service so it starts automatically when we reboot the machine then we can type sudo systemctl start docker.service which just starts the service so that we can get it started without having to reboot now we need to add our user to the docker group so that our user has the permissions to run applications using docker so we'll do that by typing sudo usermod -a and then capital g which appends the user to a group and then docker ec2-user so what we're doing here is adding the ec2-user to the docker group which will give it the permissions it needs in order to run docker containers so you hit enter and then what we need to do is install docker compose which can be a bit interesting to install
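the server setup steps above, together with the docker compose install from the official docs that comes next, can be summarised like this (the compose download url is an example pinned to one release — check the docker docs for the current version and command):

```shell
# install git and docker on the amazon linux 2 server
sudo yum install git -y
sudo amazon-linux-extras install docker -y

# start docker now and enable it on every reboot
sudo systemctl enable docker.service
sudo systemctl start docker.service

# let ec2-user run docker without sudo (takes effect after logging out and back in)
sudo usermod -aG docker ec2-user

# install docker compose per the official docs (1.29.2 is an example version)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```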
if you head over to the installing docker compose page which is docs.docker.com/compose/install and you scroll down and click on linux it actually has a command here that is used to install it on linux so this is the command we're going to be using we need to copy the contents of this the full line make sure you get it from start to finish go back to the terminal or the git bash or the powershell paste it in and hit enter and it will go ahead and download the docker compose executable to the machine so it basically downloads it off the internet and stores it somewhere on the server now we need to make it executable by following step 2 here copying this command pasting it there and then it makes it executable so now we can run docker compose first we just need to log out of the machine and log back in so that the group that we added our user to gets applied so it doesn't get applied until you log out and log back in type exit and then i'm going to push the up key to just use the same ssh command to connect to the server again and now we are connected to the server and it should have all the dependencies that we need in order to run our project the next thing we need to do is make sure that our project code is updated and pushed to github because we're going to be deploying from github we need to make sure github has the latest version of our code i'm just going to go back to the running service here and i'm just going to push control c to close the service down and once that's done i'm going to do git add dot to make sure all the files are added then git commit -am and then finish project and then git push origin and this is going to push all the latest code up to github then we need to head over to our github page so you want to make sure you're logged into github and you want to click on the project that you are deploying from and this step is optional if you are using a public repository
a public repository means that the repository is publicly available so everyone on the internet including our server can access it in order to retrieve the code however in most cases you'll probably have a private repository because if you're creating an application that isn't open source then you're going to want to protect that code and not make it publicly available so in that case we need to set up something called a deploy key so we can do that by heading over to settings on the project and then we have deploy keys so here we can add a deploy key so what we need to do is go back to the server and generate a deploy key so use the ssh terminal that's logged into the server and we can generate it by typing ssh-keygen -t ed25519 -b 4096 what this will do is generate an ssh key on the actual server so it can authenticate with github so there are two different keys at play here one is our key on our local machine that allows us to connect to the server now we're setting up another key that allows the server to connect to github and it's best practice to not use the same key for both of these things because you should only really have your own personal key on your own machine your server should have its own key so then if you need to disable access for that server you can do that easily through the github console so once you type that hit enter and you can just leave it in the default location now you can type a passphrase this is an optional passphrase that you would need to use every time you deploy updates to the server the chances are if somebody already has access to the server they can access the key and therefore they can access the code that's already on the server so i don't think it's that necessary to add a passphrase for this particular key but somebody might disagree
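generating and printing the deploy key on the server looks like this (note that ssh-keygen ignores the -b flag for ed25519 keys, but it's harmless to include it as in the video):

```shell
# generate a dedicated deploy key; accept the default location and
# (optionally) an empty passphrase
ssh-keygen -t ed25519 -b 4096

# print the public key so it can be pasted into github's deploy keys page
cat ~/.ssh/id_ed25519.pub
```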
if you do disagree then explain why in the comments i'll be interested to learn from you why it is an added layer of security if you are working on some really secure software or something like that for convenience i'm just going to leave this blank so i don't need to type a deployment password every time i deploy now that the key has been generated i can output the public key by typing cat then the tilde here /.ssh/id_ed25519.pub so this will give us the public key so we can copy that now go back to our github page here click add deploy key and give it a title like ec2 or aws deployment or something that makes sense to you and then paste the contents and what this does is it adds read-only access for this particular key unless of course you check this box to allow write access but for a deploy key typically you would never need to give write access the server doesn't need to add code to the git repository it just needs to retrieve the code so it can run it on the server so we're going to leave this unchecked and click add key and then i'm going to go ahead and add the password for my github account so i just need to remember what that is paste that and click confirm password okay and now the key is added now that the deploy key is added we can go ahead and actually clone and run the service so let's go back to the home page of the project and we're going to click clone and we're going to use the ssh url now if you are cloning a public project and you didn't add the deploy key because you just want to have a publicly available project then you would use the https url but we're going to be using the ssh url it's important that you choose the right one because if you use ssh but you haven't set up authentication then it's not going to work and if you use https you can't authenticate with the same ssh key so that isn't going to work either so if you want to deploy using the deploy key that we just created use the ssh url head back to the server i'm going to type git clone and then the name of
the url we'll type yes to accept the fingerprint and then the project will be cloned so now if we type ls you should see the project here on the server you type cd django and then tab it should autocomplete and you can switch to that directory now what we're going to do is add the configuration so i'm going to type cp and then .env.sample and copy it to .env then i'm going to use vi and this is the editor that i'm using so if you're familiar with vi then use vi if you're familiar with nano or a different text editor then use that use whichever text editor you're familiar with because you want to open up the file with a text editor and make the changes to the file so i like vi so i'm going to do vi .env db name and i'm going to add the db name here so let's just call it app root user let's call this one approotuser the db password will be something like supersecurepassword123 and then the secret key is usually a string of random characters so you can actually generate a django secret key and there are a bunch of generators online it's just a random string of characters that's used by django now allowed hosts you need to change this to the hostname of the server that you're using so if you have your own hostname that you're going to point to the server then you would use that in our case we are going to just use the hostname that was given to us when we created the server so you need to copy that again head back to the terminal and paste it in you can add a comma separated list of hostnames so hostname two hostname three and each one will give access to the application on that hostname so if you have multiple hostnames you can specify them all here we're just using one which is this one that was given to us by aws so we're going to save the file and you do that in vi by typing escape then colon and then wq for write quit and that's all the configuration we need to do we're now actually finally ready to launch our application
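on the server the clone and configuration steps look like this (the repository url and directory name are placeholders for your own project):

```shell
# clone the project over ssh using the deploy key
git clone git@github.com:your-user/your-django-project.git
cd your-django-project

# create the real config file from the committed sample, then edit in the
# real database name, user, password, secret key and allowed hosts
cp .env.sample .env
vi .env
```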
so we can type docker-compose -f docker-compose-deploy.yml it's important to specify the deployment yaml file then up -d this runs the application in the background so it'll be running on the server in the background hit enter docker will then pull down the dependencies that are needed and it should run our application so the first time you run it it's going to take a minute because it needs to download and build the containers once that's done it should be a lot faster in the future once it's done you should be able to access the application on the url that we get for the public dns so if you copy the url here on aws open a new tab and paste it in you should see the not found because we haven't actually defined any views or templates but if you do /admin you should see it takes us to the admin page and it has all the css and everything loaded because we can see the styling and everything so that means the static files are working now let's go ahead and create a superuser for us to test with so i'm going to use the up key to run a similar command as we did before but this time we are going to change it to do run --rm app sh -c python manage.py createsuperuser and we need to run this again because we are on a new server with a new database so we have to create another superuser i'm going to call it admin we'll just use admin at example.com and the password so now i'm going to log in with admin and then the password you can see that we logged in here i can go ahead and create a new sample model which contains some kind of image save that click on the sample model and you can see that it is serving the static files correctly if you need to inspect the logs on the server you can do that by typing a command that starts with docker-compose -f and docker-compose-deploy.yml and then type logs so what that will do is it will share all of the latest logs that have been output to the screen
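the deployment commands on the server in full:

```shell
# start all services in the background
docker-compose -f docker-compose-deploy.yml up -d

# create a superuser on the new server's fresh database
docker-compose -f docker-compose-deploy.yml run --rm app \
  sh -c "python manage.py createsuperuser"

# inspect the most recent output from all services
docker-compose -f docker-compose-deploy.yml logs
```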
If you need to update the code, what you do is make the changes on your local machine, commit them to Git, and push them to the Git repository. Then on the server you run `git pull origin`, which pulls down any changes; of course we haven't got any changes here, so it says "already up to date". Once you've pulled the latest code, you run `docker compose -f docker-compose-deploy.yaml build app`, which rebuilds the container for the app service with the latest version. Then, instead of build, you run `docker compose -f docker-compose-deploy.yaml up --no-deps -d app`. This replaces the current app container with a new version, but it won't affect any of the dependencies; it won't shut down the database or the nginx proxy, it just updates the app in place.

So that's how you deploy a Django app to AWS EC2. It's a very quick and dirty way of doing it; even though the video took a long time, it's a lot shorter than our 14-hour course, which teaches how to do it in a production-grade environment where you define all the infrastructure as code and set up automated workflows, so that when you push your code to Git or to your GitLab instance, it automatically builds it and deploys it to an environment. This approach is a lot quicker than that. It still takes some time, but once you've done it a few times it should be pretty fast, and most of this video was spent creating the actual application for us to deploy; I wanted to do that to show you how to create and deploy an application from start to finish.

I hope you found this useful. If you have any feedback or comments, please leave them in the comments below. If you have a better way of doing this, or you think something could have been done differently, then please leave a comment, because that way
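The update workflow above can be sketched as the following commands. The branch name `main`, the compose file name `docker-compose-deploy.yaml`, and the service name `app` are assumptions based on this tutorial's setup:

```shell
# On your local machine: commit and push your changes
git add .
git commit -m "Update application"
git push origin main

# On the server: pull the latest code from the repository
git pull origin

# Rebuild only the app service's image with the new code
docker compose -f docker-compose-deploy.yaml build app

# Replace the running app container in place;
# --no-deps leaves the database and nginx proxy containers untouched
docker compose -f docker-compose-deploy.yaml up --no-deps -d app
```

The design choice here is that `--no-deps` avoids restarting the whole stack on every deploy, so the database and proxy keep running while only the application container is swapped out.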
we can all learn from each other. Thanks so much for watching, and I'll see you in the next lesson.