Hi everyone, and welcome to this 8-hour DevSecOps course, which is for both beginners and advanced learners. Here you will learn core DevSecOps tools such as Kubernetes and Docker, and I will show you how to secure them. As you can imagine, we are going to cover plenty of topics. I will show you how to use Docker Content Trust to protect your Docker daemon and hosts, how to apply AppArmor and seccomp security profiles, and how to use Docker Bench for Security to audit your Docker host. After that we will analyze vulnerabilities in Docker images, using scanners such as Clair and Anchore together with the CVE databases. We will talk about how you can manage Docker secrets, networks and port mapping. At that point you will be more advanced in DevSecOps, so we will also explore tools such as cAdvisor, Dive and Falco, and administration tools such as Portainer, Rancher and OpenShift. Finally, we will focus on Kubernetes security practices: I will show you how to use kube-bench and the Kubernetes dashboard to enhance your Kubernetes security, and I will teach you how to use very popular industry tools such as Prometheus and Grafana to monitor and observe Kubernetes clusters and their vulnerabilities. That's everything, so let's jump straight into the course. I hope you enjoy it; let me know in the comments if you would like me to create courses on other topics you are interested in, and if you want to see more courses like this one, subscribe to keep up with my content.

In this section we are going to talk about the DevOps architecture for the enterprise. Before learning DevSecOps, it is quite beneficial to first learn the DevOps architecture, which is what you will do in this section, before moving on to everything related to DevSecOps. We will cover a brief introduction to DevOps in IT delivery, creating a reference architecture, introducing DevOps to companies, and working with the VOICE model. Let's start with the introduction. Since this course focuses on implementing DevOps in large-scale companies, we cover all the cases, and before we get to the specific challenges of DevOps and DevSecOps we first need a common understanding of what DevOps is. Basically, DevOps means the development and operations stages working as one team: this team builds a product and then runs it. It is that simple. DevOps has gained a lot of momentum over the past decade, especially in enterprises, but actually implementing it is the difficult part. The reason is that enterprises are not organized in a structure that works well for DevOps: in the last century most enterprises outsourced a lot of their IT, and DevOps becomes much more difficult when development is done in-house while operations are outsourced. By bringing the teams together into one development and operations environment, the enterprise can speed up delivery and release new products and services, because fewer handovers are needed between development and operations. The quality of the product also improves, since DevOps already includes quality assurance, testing and security. Here are some of the key benefits of DevOps. First, DevOps brings the business, development and operations together into one properly working system. Second, enterprises that have adopted DevOps can respond faster to demands from the market.
They can do this because they absorb continuous feedback from their work; in the same way, products are continuously improved and upgraded with new features instead of planning major releases and wasting a ton of time creating a completely new project with outsourcers. Finally, through automation and DevOps pipelines, enterprises can really reduce costs, both in development and in operations, and at the same time improve the quality of their products.

When building DevOps, the first and main starting point is the enterprise architecture. This is where the business goals are set and where we define how those goals are met, so IT delivery is key to meeting those goals. In large companies the architecture also defines the IT delivery process, and we will look at IT delivery and its processes in more detail. As I already mentioned, enterprises typically have an operating model based on outsourcing, which makes implementing DevOps more complicated. Normally the enterprise architect has to have a very clear view of the connections between the different processes and of who is responsible for fulfilling them: basically who is responsible for what, when and why. The next question is how this can be converted to DevOps, but before that we first need to understand the main processes in IT delivery, so I bring all of those processes here in front of you and we will talk a little bit about each of them.

First, the business demand. This covers what the business needs and an understanding of the requirements for the product that is being developed. Those requirements are normally set by the people who are going to use the product, the clients, so customers will demand a product that meets specific functionality and quality, and the architecture must focus on delivering an end-to-end product that satisfies the requirements defined initially. In DevOps, an assigned product owner makes sure the product meets those requirements; the product owner has to work closely with the software designers. Then we move to the next process, business planning. Once the demand is clear, the product needs to be scoped. In DevOps, product teams typically start with the minimum viable product: the first iteration of the product that meets the customers' requirements is called the minimum viable product. When designing this product, the processes need to be able to support both the development and the operation of the product, so business planning involves quality management and testing, two of the major components of IT delivery. Then comes the development process. In DevOps the product team works with user stories: the team must break down the product into components that can be defined as deliverables, and to do this we need a clear definition of the user story. A user story always has the same format, and every user story needs acceptance criteria, or a Definition of Done (DoD). Once the product is developed, we can talk about deployment. At this stage the code is tested, validated and matched against the user stories, and now it can be deployed to production. The testing and releasing of the product is normally automated in the pipeline, so once it is set up it should be easy. Before the code is actually pushed to production, it also needs to be merged with the configuration.
You also need to think about the security packages that have to be applied to components running in production. In the test and quality process, the full package, consisting of application code and infrastructure components, needs to be validated and made ready for production. The result after the deployment stage should be a live product that is ready to use, and if bugs or security violations are discovered during testing, the product needs to be set back to an earlier stage, back to development. Finally, once the product is deployed, we move to the last process: operations. After deployment the live product of course needs to be operated, and to do this enterprises work according to IT service management principles. The operators are in the same team as the developers, but that doesn't mean the IT service management principles are no longer valid; for example, when incidents occur, the incident management process should be triggered immediately. During operations the team will be looking after requirements fulfillment, incident management, problem management and so on.

On top of all those IT delivery components, DevOps adds continuous integration and continuous delivery, or CI/CD, which you may have heard about; this is one of the most important concepts when talking about DevOps and pipelines, so let's talk about it. First, what is continuous integration? Continuous integration, or CI, is built on the principle of a shared repository where the code is frequently updated and shared across all the teams that work in a cloud environment. Continuous integration allows developers to work together on the same code at the same time, so changes to that code are directly integrated and ready to be fully tested in different test environments. Continuous delivery, on the other side, is the automated transfer of software to test environments; the ultimate goal of continuous delivery is to bring software to production in a fully automated way, with all the various test cases performed automatically. After development, developers should immediately receive feedback on the functionality of their code, which means CI/CD requires a feedback loop: for something to be continuous it needs feedback about the delivered products and services, and that feedback is looped back to the developers, who plan new iterations to improve the product or the service. This works quite well if a single enterprise controls the full cycle and no processes are outsourced to other organizations; for many big enterprises, however, this is impossible, so they have to outsource. Some activities are not perceived as core activities, and some companies decide to be more cost-efficient and delegate them to somebody else. In the past decade, though, all those organizations have been through a massive change: IT departments have become more and more important, and in some cases software development has become a core activity. A good example is the banks: all banks are really IT companies these days, and the output of their IT delivery is financial products. Due to customer demand, releases of those products with new features have become more frequent, sometimes several releases a day.
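To make the CI/CD loop a little more concrete, here is a minimal sketch of the kind of automated stage sequence a pipeline runs on every change; the repository URL, image name and test command are placeholders, and it assumes a Node.js application image, so treat this as an illustration rather than a ready-made pipeline.

#!/bin/sh
set -e                                                             # stop at the first failing stage
git clone https://git.example.com/acme/webshop.git && cd webshop   # pull the shared code
docker build -t registry.example.com/webshop:candidate .           # build stage
docker run --rm registry.example.com/webshop:candidate npm test    # automated test stage
docker push registry.example.com/webshop:candidate                 # deliver the build for deployment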
So let's talk a little bit about delivery in a sourcing model, which you can see in the figure in front of you. Sourcing can be quite complicated, but if we learn to think in terms of the sourcing tiers shown here, it becomes much more comprehensible. Using this model we can break IT delivery into three main tiers. The first is the strategic tier, or strategic level: this is the tier for enterprise governance, where the enterprise defines strategic business goals that are translated into the enterprise architecture. The overall architecture principles are the outcome of the enterprise architecture and drive all IT architecture, including DevOps. Tier two, the tactical level, is where the IT architecture is developed, including DevOps; here the service level agreements and the key performance indicators are defined to measure the outcomes of IT delivery. Finally, tier three is the operational, or services, level. At this tier the components of the architecture are detailed, including the interfaces to the various suppliers and service providers, and the agreements defined at tier two should be adopted here so that all development and operations work is done in the same way. In practice, service providers normally act at tier two if they are involved in larger programs spanning multiple products, and at that tier the provider is responsible for aligning the different streams in the lower, operational tier. So this is what the sourcing models look like. This is the end of this video, and in the next one we will talk about creating the reference architecture. Thanks for watching.

Hi everyone, and welcome back to the course. In this video we will talk about how to create a reference architecture integrated with DevOps. In the previous video we looked at the different processes of IT delivery and how they integrate with DevOps, so any DevOps architecture will have to address planning, development, integration, deployment and operations. But we have to keep in mind why we are doing DevOps: to achieve the business goals in a faster, more agile way and to continuously improve products. The DevOps architecture doesn't stand on its own; it has to be linked to the enterprise architecture. To do that we normally use The Open Group Architecture Framework (TOGAF) with its Architecture Development Method (ADM) to draft the architecture, and that is what you can see in the figure in front of you. Just like DevOps, the Architecture Development Method is a cycle, except for the very first phase, where the need for architecture is formed; that has to be done at tier one in the sourcing model we discussed in the previous video. The core of the Architecture Development Method is a sequence of architecture design that is business-first, setting the principles and the requirements for the actual solution. Those principles drive the architecture for data usage, the applications and finally the technology that will be used. This is important, because architecture is not about technology in the first place: technology is purely the means to achieve business goals at the enterprise level. The Architecture Development Method also assumes that the architecture is not static; it changes as soon as the business requirements change, so if business demands change there will likely be a need to adapt the architecture in future solutions.
As we discussed in the previous video, there are three tiers: tier one is where the goals are set for the entire enterprise, and at the next level, tier two, the DevOps teams translate those goals into product features and start developing the product. DevOps teams have to work according to a set of principles, so let's take a closer look at them. There are six DevOps agile skills principles. The first is customer-centric action: develop the application with the customer in mind, what they need and what they expect in terms of functionality; this is also the goal of another concept, domain-driven design, which contains good practices and ideas for this. Second, when you build a product you always need to have the end result in mind: how will the product look when it is completely finished? With the end-to-end product in mind comes the third principle, end-to-end responsibility: teams need to be motivated and enabled to take responsibility from the start to the finish of the product life cycle, and principles such as "you build it, you run it" and "you break it, you fix it" are key to taking that responsibility. Fourth, we need cross-functional autonomous teams that are able to make decisions themselves during the development process. Fifth, as we discussed in the previous video, continuous improvement is quite important for every product, and we need to keep improving it. And finally, for delivery to be efficient, the goal of the teams should be to automate as much as possible: the only way to gain speed in delivery is by automating as much as possible, and automation also limits the occurrence of failures and bugs in your code.

These principles lead to some architecture statements that are the core of DevOps. The first is automation, which is basically about automating everything: with automation, the amount of time between testing and deployment is significantly reduced, enabling a faster release process, and automation also means far less manual interaction and therefore fewer errors. The next architecture statement is collaboration. Remember that two of the six principles are cross-functional autonomous teams and end-to-end responsibility, and this can only be achieved through very good collaboration: development and operations have to work closely together to speed up delivery and releases. Although this is also a cultural change, collaboration requires a common toolset that supports it. Automation also requires a clear portfolio that contains building blocks that can be automated easily. These building blocks are called artifacts in the ADM cycle, and they represent packages of functionality that can be used to fulfill requirements; one of the best-known tools for storing artifacts is, for example, Artifactory. A building block is reusable and replaceable, and therefore it must be clearly specified and defined: the configuration of the building blocks needs to be well documented and brought under the control of configuration management. If this is done well, the building blocks will also have clear interfaces so that they can be fully automated. Finally, let's talk about integration, which is where development and operations come together: in DevOps we integrate the business demand and the IT delivery through user stories, and code is normally integrated with new functionality that comes out of the business demand.
Demand is changing faster these days, so development needs to keep continuous integration in mind; this will lead to changes that adopt new developments at a good speed. The final step in merging the DevOps principles into one model for our reference architecture is to build the reference architecture model itself. The model contains two main cycles: as you can see in the figure, there is an outer cycle and an inner cycle. The outer cycle is the product cycle, while the inner cycle represents the operational activities, so as a logical consequence the outer cycle is governed by the enterprise itself, while the inner cycle is about actually delivering the product using DevOps. As you can see, there are interfaces between the outer and the inner cycles: collaboration, automation, integration, configuration management and the interface layer. In the outer cycle the business goals are translated into architecture; from the architecture a portfolio is created with building blocks to create products and services; products are released to the market and adopted, but due to changing demands there will be requests for changes, and those changes drive new enterprise planning and changes to the business goals. In that way the business continuously absorbs the change in demand. The plans and the actual builds are executed in the inner circle: here the product is broken down into product backlog items that will be developed and eventually operated by the DevOps teams. These teams do not operate on their own; they are triggered from the outer cycle, and this is why we need the interface layer, the interface between the business and the execution teams doing the IT delivery. There is collaboration between architecture and development, and as we mentioned before, releases should be automated as much as possible; requests and changes should be integrated with the planning on the backlog of the DevOps teams, and the builds that are pushed to production need to be monitored and brought under the control of configuration management.

Let's see how this would actually work in practice with the next figure. Here you have some people placed within the model from the previous slide; this is basically the DevOps workflow for enterprises. You can see that from the enterprise we get the business plan, which is passed to the business unit, and from there it goes to the product owner, who distributes the work to the different teams. On the other side, the enterprise architect works with the security officer to build the enterprise architecture, which translates into a portfolio that builds a roadmap. From there we can trigger the DevOps cycle, where we create a product architecture and backlog and run everything continuously in a pipeline in order to implement every new change and have continuous development and integration. I hope this gave you a good introduction to how DevOps is implemented in IT delivery. That's it, thank you very much for watching, and in the next video we will talk about the DevOps components.

Hi everyone, and welcome back to the course. So far we have learned how to start defining the architecture, looking at the architectural principles of DevOps and drafting the reference architecture model. The next step is to look at the different components within DevOps.
In this video we will learn which components must be included in the DevOps architecture. This is tier three from our tiers figure of the target enterprise model, the level where all activities are executed. Here in front of you you can see the agile infrastructure, and the reason it looks like an infinite loop is that feedback from the live product, which is managed by operations, is continuously looped back to develop and improve the product; this is where the name DevOps comes from, Dev from development and Ops from operations. As you can see, the different components are planning, creating, testing or verification, then preparing, releasing, configuring and finally monitoring, and if the monitoring shows that we need a certain improvement, we jump back to the planning stage. Remember that at large enterprises you will likely work with several service providers fulfilling parts of the IT delivery process, so when we want all of these components to work together in a DevOps way, we need to make sure the processes and the tools are aligned. The next thing is to have a common understanding of the various activities that are executed as part of these principles: all DevOps components must be defined and implemented in a consistent way, and every developer and operator should work according to the same definitions and within the same components.

From here you might ask at what stage Ops should get involved, and the answer is: as early as possible. Operations play a key role in defining new products and bringing them to life; they should set the requirements and the acceptance criteria before the product goes live, because if developers build something that cannot be managed by Ops, the product will fail and the business demands will not be met. Enterprises typically contract SLAs and KPIs to fulfill their processes so that these are aligned with the business goals; if one of the processes fails, the product is impacted and, as an ultimate consequence, the business will not achieve its goals. Understanding SLAs and KPIs is therefore important for any architect, which is why they are included in the sourcing model we discussed. The service level agreements are positioned between the tactical process of DevOps and the strategic level of the enterprise, where the goals are set, so the SLAs and the KPIs should support those goals and guide the DevOps process.

Here in front of you are six of the most important metrics that should be included in an SLA for DevOps. The first is deployment frequency. DevOps teams usually work in sprints, short periods of time in which the team works on a number of backlog items as part of the next release of a product, and this KPI measures how often new features are launched on a regular basis; keep in mind that new features can be scheduled on a monthly, weekly or even daily basis. Then there is deployment time, which is usually the time it takes for code to be released after the test phase. The deployment failure rate, in turn, refers to the rate of failures that occur after deployment; ideally this number would be zero, but that is not realistic, since deployments will fail every now and then, especially when the change rate is high, so we should aim to keep this number as low as possible. Then we have the metric called deployment failure detection time.
This KPI strongly relies on the previous one: failures will occur, but the question is how fast those failures are detected and how soon we are able to fix them. This type of KPI is also referred to as mean time to recovery, and it is usually the most important KPI in the DevOps cycle. Then we have the change lead time, which is the time between the last release and the next change being delivered; it is measured in terms of how long the team needs to address a change, and shorter lead times indicate that the team works efficiently. Finally, we have the full cycle time. I know there are a lot of "times" here, but as you can see, the definition of time in DevOps is quite important. The full cycle time is the total time between the different iterations of each development. That is the main list I wanted to share with you; it is of course not exhaustive, and enterprises can think of a lot of different metrics and KPIs they want to implement, but I would always advise you to keep things simple: every metric included in a contract needs to be monitored and reported, and that can become quite expensive and time-consuming.

Let's also talk a little bit about the value the team delivers on the project, which is usually captured with the VOICE model. The main goal of DevOps teams is to deliver value to the end customer, and the VOICE model is specifically defined to address this. As you can see from the graph, VOICE stands for Value, Objectives, Indicators, Confidence and Experience. The idea behind the model is that any IT delivery should deliver value to someone, typically the end customer of the business; the value sets the objectives of the IT delivery, and those objectives are measured using indicators, which tell us whether the pursued value is achieved. Confidence is about whether the indicators contain relevant information to confirm that the IT delivery actually results in the targeted value, and lastly, experience tells us whether the delivered system fulfills the business demands and which improvements will lead to more business value. Then the cycle starts again from value. Since the VOICE model also involves looping feedback back to the beginning of the cycle with the aim of improving products and adding more value to the business, the model can be used for any DevOps project. Thank you for watching this video and this section; I hope you learned a lot about DevOps, and in the next section we will continue our introduction with DevSecOps, adding the security layer. Thanks for watching, and I will see you in the next section.

Hi everyone, and welcome to this section, in which we are going to introduce ourselves to the DevSecOps ecosystem. Like everything in the IT world, DevSecOps requires an architectural foundation, so in this section we will learn how to compose a reference architecture for DevSecOps and look at some practices for designing pipelines with DevSecOps. We will also discuss the best DevSecOps practices for the major public cloud providers, such as AWS, Azure and Google Cloud. In the previous section we discussed the principles of DevOps, and we concluded that security should be at the heart of every step of the development life cycle, from the moment the code is pulled from the repository to the moment it is committed and pushed to production.
DevSecOps consists of three main layers. The first is culture. This is not a technical layer, but it is quite often forgotten that DevOps is much more than applying tools and creating CI/CD pipelines, and the same applies to DevSecOps: within DevSecOps every team member feels responsible for security and acts accordingly, taking ownership of it. This doesn't mean that security specialists have become obsolete, though; it is good practice to have a security engineer or professional in your team, sometimes referred to as a security champion. This person should lead the processes in the team for applying security standards and policies to ensure compliance. The next layer is security by design: security is embedded at every layer of the system. This typically means the enterprise has a detailed architecture that covers every aspect of security and enforces security postures in the systems, such as authentication, authorization, confidentiality, data integrity and more, so that software developers do not need to reinvent security every time they design and build new applications or features. The final layer is automation, because in DevOps we want to automate as much as possible, and the same goes for DevSecOps, since that includes security. By automating security we can prevent human error, and we can create automation tools to scan for possible vulnerabilities or non-compliant components such as unlicensed code. Automation also implies automated audits and collection of evidence in case of an attack, and the automation process makes sure that security metrics are collected and fed back in the DevSecOps practice; for example, when you scan the code for a vulnerability, evidence from that scan is collected and sent back as feedback.

To manage those three layers, DevSecOps relies on four main components: assigning responsibilities, application security, cloud platform security, and vulnerability assessment and testing. DevSecOps should not be confused with security as a service. Security as a service can be compliant with the DevSecOps practice, but the concept is mainly about shifting security responsibility to a service provider; it is a sourcing model that allows enterprises to get cybersecurity delivered by a provider on a subscription basis. There are good reasons for implementing security as a service, one of them being that the provider is responsible for all security updates based on the latest insights, and enterprises can define service level agreements for incident response times and the timely application of security practices.

Before we discuss the reference architecture of DevSecOps, we need to understand the role of DevOps and how security fits into it. DevOps is about the software development life cycle, and an important note is that developers increasingly use open-source tools. This makes sense, since open source provides great flexibility when developing new code: it is community-driven, so developers can contribute to each other's code and speed up the process. Projects can be shared on open-source platforms such as Git and GitHub repositories, but they can also be shared internally within the enterprise; InnerSource-type projects are a good example of projects that are kept internal, and InnerSource uses open-source best practices for software development within the boundaries of the organization.
source best practices for software development within the boundaries of the organization in fact we need security from the start of the devops process in practice this means that we start scanning for security issues from the moment the C is put from the repositories the repositories are part of the software development life cycle as well so they should be protected from unauthorized assess and so this calls for role based assess and identity and assess management are quite important for the repositories they're called rbac and I am so you should keep in mind that we can create reference AR architecture for defc Ops within those four components the first one is a repository assess within the RO based assess control the static application security testing which will detect errors in the source code the software composition analysis which will detect dependencies in the source code and finally the dynamic security testing which will dynamically scan the code every single time those those components are embedded into the def secops Pipeline and are quite important so that said thank you very much for watching guys this video and in the next one we will talk about how we can compose the def SE cops pipeline thanks for watching hi everyone and welcome back to the course in this video we'll talk about the concept for building def SE cops pipeline so right here in front of you you can see two pictures and here on the top you can see the devops pipeline first and then in the bottom you can see the dev SEC Ops pipeline so let's first talk about this graph here on the top so you can see that here we're having four steps which are pull code from the repository then build that code then test it and finally one once the code is tested then we need to deploy it so you might think what else do we actually need to add to this pipeline it looks perfect well the main thing that this pipeline is missing is security so this is the main thing that we are adding to the def SEC Ops pipeline using def secops we are actually integrating the security standards and principles as part of the pipeline so security is a layer that's applied to every step in the pipeline but it does actually include several steps and you can see them right here on the figure below so you can see that pretty much most of the steps are the same and we have the pull Cod from the repository build it tested but here we're adding this security scan test before the deployment and you can actually see that the security scan is composing of five more items so let's talk first about the dependency check so the first vulnerability that exposes the code to a risk is the fact that the code actually relies on different pieces of code in order to run and there are differences in the code dependencies so developers can have controlled and uncontrolled dependencies as a common practice we don't want dependencies in our code the risk is that if the code that the dependency relies on is breached the entire application might fail alternatively malware that's injected in certain piece of cod but has a dependency with other code will infect the entire code stack for that reason the main rule here is to eliminate the dependencies which is one of the basic principles for zero trust so you might know the most common tools to manage your packages such as the pipe in Python and the npm in the nodejs so commands such as pipe EnV check and npm audit will actually show you the dependencies and we're going to try and test each of those commments into the next sections 
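As a quick preview, these are the kinds of commands meant here; run them from the root of the project, and treat the flags as a minimal sketch rather than a complete audit setup.

# Node.js project: list dependencies with known vulnerabilities
npm audit
# Only fail the build on high or critical findings
npm audit --audit-level=high

# Python project managed with Pipenv: check the locked dependencies for known issues
pipenv check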
Then there is the static analysis, which checks for bad coding practices such as bad configurations; there are quite a few open-source tools for this, for almost every coding language. Next is the scanning step. Developers make continuous use of containers and container images to build and package their applications, and these images need to be scanned for vulnerabilities in the libraries they use. The scanning is done against lists of known vulnerabilities: you are basically checking for known vulnerabilities and executing commands to see whether they exist in your code. These lists can come from industry sources, such as the NIST cybersecurity framework, or from other software providers. Finally, the dynamic analysis tools are basically automated web application scanners that continuously check for broken handlers or missing tokens in your application; those tokens prevent unauthorized commands that might come from an untrusted user, or simply from a function on a different website. Following the diagram, we now have a security-embedded CI/CD pipeline that automatically covers the most common vulnerabilities found in code.

There is one specific item we haven't touched on yet, and that is the use of containers and how they can secure our applications, so let's talk about them next. Most developers use containers to wrap and deploy their code, typically Docker containers; Docker is a very popular application used for exactly this reason. There are some best practices when it comes to keeping containers consistent and secure: they should be scanned regularly, even when the application has reached a steady state, updates are done less frequently, or active development has stopped. If the application still runs with its underlying containers hosting the different application components, those containers must be scanned, since there is always a possibility that a dependency creates a vulnerability. Applications consisting of containers are normally defined by Dockerfiles, and you can use linting, which is analyzing code for errors or bad syntax, on those Dockerfiles to make sure they remain secure. A popular linting tool for this purpose is hadolint; it is available as a Docker image and can easily be executed with the command you see in front of you, docker run --rm -i hadolint/hadolint, and we will explore these options once we start installing and organizing our Docker images in the next sections; there is also a short example just below.

Docker itself also recommends best practices for keeping containers secure. Docker already takes care of namespaces and network stacks to provide isolation, so containers cannot obtain privileged access to other containers unless this is specifically enabled in the configuration. One specific thing to consider is that Docker always uses the Docker daemon, and this daemon requires root access, which implies security risks: only trusted users should be allowed to control the daemon, and you also need to limit the attack surface of the daemon by setting access rights on the Docker host and the guest containers, especially when working with containers exposed through an API or web server. If you follow Docker's recommendations, you end up with trusted, immutable images that you can use for deployment.
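Here is the linting step from above as a runnable sketch, followed by one way to reduce the attack surface when running the image; the image name is a placeholder, and the exact set of restrictions depends on what the application actually needs.

# Lint a Dockerfile with hadolint; the Dockerfile is piped to the container on stdin
docker run --rm -i hadolint/hadolint < Dockerfile

# Run the image with a reduced attack surface: read-only filesystem, all Linux
# capabilities dropped, no privilege escalation, and a non-root user
docker run --rm --read-only --cap-drop ALL \
  --security-opt no-new-privileges --user 1000:1000 \
  registry.example.com/webshop:candidate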
We can deploy those containers on Kubernetes, for instance, and Kubernetes, which we will cover in detail in later sections, will use the trusted image registry and take care of provisioning, scaling and load balancing for the different containers. One of the security features of Kubernetes is its support for rolling updates: if an image repository is updated with patches and enhancements, Kubernetes will deploy the new versions and destroy the previous ones, so developers can always be sure that only the latest versions of the images are used.

Database credentials, API keys, certificates and access tokens must be stored in a safe place at all times; the use of CI/CD and containers doesn't change that, and it is strongly recommended to use a vault outside the repositories that the pipelines access for CI/CD. The best practices for secrets management, which you can see in front of you, come down to three. The first is encryption with AES-256 encryption keys. The second is that secrets such as keys must never be stored in Git or GitHub repositories; they should be kept in a separate, protected place. It is also advised that secrets are injected into the application as a secure string through an environment variable: for example, if you have a project, it is not recommended to type a key as a string directly into your code; instead you create an environment variable, assign the key to that variable and simply pass it to your code. HashiCorp, for example, offers Vault as an open-source solution for securely accessing secrets; the service allows you to easily rotate, manage and retrieve database credentials, API keys and other secrets throughout their life cycle. An even more robust solution is provided by a service called CyberArk, a platform-independent secrets management solution specifically created for securing containers and microservices. These security tools can be integrated with Azure and AWS, and you can also use, for example, Azure Key Vault or AWS Secrets Manager.
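To make the environment-variable advice concrete, here is a minimal sketch; the Vault path and the variable name are hypothetical, and it assumes the Vault CLI is already authenticated against your vault.

# Fetch the secret from a vault at run time instead of hard-coding it in the source
export DB_PASSWORD="$(vault kv get -field=password secret/webshop/db)"

# Hand the secret to the container as an environment variable, not baked into the image
docker run --rm -e DB_PASSWORD registry.example.com/webshop:candidate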
Those cloud key-management services are actually something we will look at in the next video, where I will explain how you can secure your AWS, Azure and Google Cloud platforms using DevSecOps. Thanks for watching.

Hi everyone, and welcome back to the course. As promised, in this video we will talk about how to use DevSecOps with AWS, Azure and Google Cloud, so let's get started. In the previous videos we discussed the DevSecOps principles and how the pipeline is built with embedded security; in this video we will look at the best practices for applying DevSecOps on the major public cloud platforms. Let's first talk about how we can implement DevSecOps with AWS CodePipeline. Before explaining DevSecOps in AWS, we need to understand that deployments in AWS should be based on the principles of the Cloud Adoption Framework. The framework covers specific security tasks and responsibilities, grouped into four categories of controls: preventive, detective, corrective and directive. As you can see in the figure in front of you, AWS offers native solutions to provide controls for managing the security posture of CI/CD pipelines; on the pipeline you can see the different steps organized. An important artifact here are the security groups: they can be seen as bins where the security posture of all components developed and deployed in the pipeline is defined, containing the templates and policies that have to be applied to those components.

We can see that the code goes from the Git repository to an S3 bucket, which is basically something like a drive in which we can place any type of data, for example images and other files. Then we pull the code into CodePipeline, and from there it goes through the different security groups. Static code analysis is performed on the code that is pulled from the S3 bucket, and once we pull the code we create a stack of containers inside a virtual private cloud in AWS to run tests. Those tests run the code on the stack and validate the build; AWS calls this stack validation, and Lambda functions validate the stack against each and every security group, so everything that is defined in those security groups has to be met. That is the second stage, the test stage, and after that we move to production: once the stack is validated successfully, a Lambda function is triggered to prepare the stack for production using CloudFormation templates. Examples of items that are checked against the security groups are IAM access and permissions, access controls on the S3 bucket, and the policies for creating instances using, for example, EC2 compute resources.

Microsoft Azure uses a different approach from AWS when implementing DevSecOps. This solution leverages the scanning possibilities of GitHub, and it also leverages the features of the Azure Kubernetes Service next to Azure Pipelines, as you can see right here. In this diagram you see a high-level architecture for a security-embedded CI/CD pipeline using GitHub and Azure services, and you can also see numbers in the diagram, one, two, three and four; those numbers represent the order in which each step is taken when working in the cloud. As soon as the containers are pushed to the Azure container registry, they are scanned against the policies stored in Azure Policy, so first we push the containers and then, in the next step, we validate them against the Azure policies. After that the appropriate security keys are fetched to authenticate the containers to the Azure Kubernetes Service, and only when all checks have passed successfully is the code pushed to the Application Gateway. Looking at it in a bit more detail, we can see that the Azure cloud is quite secure: the solution starts with code analysis in GitHub, which uses CodeQL to detect vulnerabilities in the source code and its dependencies. Once the code has been validated, it is packaged into a Docker container and deployed to a test environment using Azure Dev Spaces, through Azure Pipelines; Azure Dev Spaces builds an isolated test environment for the Azure Kubernetes Service. When we scan our code, the containers are stored in the Azure Container Registry, where they are scanned against the security protocols; for this, Azure uses the Azure Security Center, a huge library that holds all the security policies for the environments and roles in Azure. Finally, when we reach the production stage, the scanned containers are pushed to a Kubernetes cluster using the Azure Kubernetes Service, and Azure Policies are used to validate the compliance of the provisioned clusters and containers. So, just like AWS, Azure uses several different solutions to provide an end-to-end setup that embeds security roles, policies and postures throughout the whole CI/CD process.
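As a very rough sketch of the build-and-deploy part of that Azure flow, reduced to CLI calls with placeholder resource names; the policy checks, Dev Spaces test environment and Application Gateway steps described above are not shown here.

# Build the image and push it to Azure Container Registry
az acr build --registry webshopacr --image webshop:candidate .

# Fetch credentials for the AKS cluster and deploy the scanned image
az aks get-credentials --resource-group webshop-rg --name webshop-aks
kubectl apply -f k8s/deployment.yaml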
However, all of those solutions start with a repository where the security guidelines and guidance are stored and managed; the security groups that AWS manages through the AWS Security Hub have their counterpart in the Azure Security Center. Now let's see how DevSecOps is actually implemented on Google Cloud: this is normally done using Anthos and the JFrog tooling. Google Cloud promotes an interesting best-practice solution for implementing DevSecOps pipelines using JFrog; it provides not only a cloud-native pipeline but also a solution for developing and deploying hybrid environments, using Google Cloud together with on-premises systems. This is quite interesting for enterprises, since a lot of them will not move their IT systems completely to public clouds: most enterprises are expected to move more and more systems to the cloud, but some of their systems will remain on private stacks. CI/CD pipelines that are suitable for both cloud and on-premises solutions are therefore quite favorable, and with Kubernetes they are quite easy to set up. You can see the architecture in front of you, using JFrog and Kubernetes. There are two major components Google Cloud advocates: JFrog Artifactory and JFrog Xray. JFrog Artifactory takes care of storing the artifacts used when building applications; we already saw how the pipeline starts by pulling code from the source repositories, and developers need to be able to rely on a toolchain that stores artifacts and code building blocks in a comprehensive and safe way, so that software delivery through the pipelines can be automated. I have actually been using Artifactory, and still do, on my current projects at the company I work for. JFrog Xray scans the artifacts and code building blocks held in Artifactory against known vulnerabilities and license compliance; Xray applies the shift-left mentality by already scanning the source artifacts, and you can see the complete diagram in front of you. JFrog basically uses Xray as the main security solution embedded in the pipeline; builds are taken and then pushed to production using Kubernetes on the Google Cloud platform and a platform called Anthos. Anthos ensures a consistent layer for deploying and managing Kubernetes clusters across the native Google Kubernetes Engine cloud and on-premises environments; this solution is native to Google Cloud, but it can also be used on VMware stacks as well as on AWS. So this is how DevSecOps can be implemented on all the major cloud service providers. Thanks for watching this video, and in the next one we will talk about how you can plan your deployment and about the main cybersecurity frameworks.

Hi everyone, and welcome back to the course. In this video we will plan our deployment of DevSecOps and also talk about the industry security frameworks, so let's get started. So far we have discussed the reference architecture of DevSecOps pipelines and the best practices for AWS, Azure and Google Cloud; once we have the architecture, the next step is planning and deploying DevSecOps and the pipelines in our enterprise. There are three major steps most companies need in order to implement DevSecOps. The first is to assess the enterprise security: enterprises will likely already have adopted security policies and taken measures to protect their systems, and they will also need to adhere to security standards and frameworks because of governmental and industry regulations.
Security specialists will have to carry out risk assessments and analyze the possible threats, and they should understand and manage the security controls. This is by default the starting point, and it involves the practice of managing security within the DevOps practice: DevSecOps should never start without security policies and standards for developing and deploying new code, or at least a pilot project. The next step is to embed security into DevOps: security policies and standards are integrated into the development process, and the DevOps workflows are matched against the security guidelines, which includes vulnerability testing and code scanning. Without these processes and tools in place, DevOps can't start developing new code; the risk of increasing the attack surface and causing permanent damage to the enterprise is simply too big, and companies both big and small are under constant threat from hackers and other security threats. This brings us to the final step: DevOps and DevSecOps are not only about technology, they are a way of working and even thinking, basically a culture, and people need to be trained in adopting that culture. Staff, developers and operators need to be trained consistently to do their tasks; developers, operators and security engineers need to be fully committed to applying the security controls in their work, and that implies they should always be aware of the risks the enterprise is facing in terms of security.

Of course, proper tooling is essential to achieve this, at least at a minimum level. Tools for testing are highly recommended; testing is one of the critical elements of DevSecOps, and the market provides a massive number of tools for performing tests. The next type of tools we need are alerting tools: when security threats are detected, alerts need to be raised and sent out. Then we need tools for automated remediation, tools such as StackStorm that help provide remediation as soon as security issues are detected. And finally it is always good to have visualization tools, because developers and operators need to be able to see what is going on in the systems. You can use plenty of tools to achieve all those goals, mainly third-party tools that can be integrated into the DevOps tooling of AWS, Azure or Google Cloud, and of course the public cloud platforms themselves offer extensive security tooling.

The benefits of DevSecOps should be clear by now: with DevSecOps we achieve better collaboration between developers, operators and security engineers, and with that we ensure that security threats and vulnerabilities are detected at an early stage of development, so the risks to the enterprise are minimized. Because IT has become more and more complex over the years, IT security is of key importance. The level of security that is required will of course differ per industry: financial institutions will want to make sure that bank accounts cannot be compromised and that money is not being illegally transferred, while healthcare institutions need to protect their patients' personal and health data. To achieve that you need common rules that everybody follows, and an organization is certified if it complies with those rules.
Before learning how the security frameworks impact CI/CD and DevOps, we need to understand those frameworks. Basically, a framework is a set of policies and documentation guidelines on implementing and managing best practices for different types of organizations. I actually have dedicated courses on the NIST cybersecurity framework and the risk management framework that you can take a look at, because they cover all the security practices that can be applied to DevOps and DevSecOps. Some of the most famous frameworks you should be aware of are, first, ISO/IEC 27001, an international standard for information security controls that emphasizes controls detecting threats with a severe impact on the availability and integrity of systems. Then there is the NIST cybersecurity framework, which doesn't specify controls but does provide five functions to enhance security, in order: identify, protect, detect, respond and recover from a risk. Those functions allow organizations to set controls to manage data breach risks, and the controls should include access controls, measures to protect data and awareness among staff. Next is the CIS framework, from the Center for Internet Security, which offers an extensive framework with specific controls for platforms, operating systems and databases, even containers; some of the CIS benchmarks are embedded into platforms such as Azure and AWS and can be accessed easily through the Azure Security Center and the AWS Security Hub. Finally there is the COBIT framework, published by ISACA, the Information Systems Audit and Control Association; COBIT was originally about identifying and mitigating technical risks in IT systems, but it also covers the business risks related to IT. These control frameworks can have additions that cover specific industry requirements, but typically industries have to incorporate their own standards or be fully compliant with the existing ones, which is important when they are audited. That's it, thank you very much for watching this video; this is the end of the introductory section on DevSecOps. I hope it was helpful, and in the next sections we will start with more practical exercises using Docker and Kubernetes. Thanks for watching.
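To give you a first taste of how a CIS-style benchmark is applied to containers, the Docker Bench for Security mentioned at the start of the course runs the CIS Docker Benchmark checks against a host and its daemon; a minimal way to try it on a test machine looks roughly like this.

# Fetch and run Docker's CIS benchmark script against the local host and daemon
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh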
Hi everyone, and welcome back to the course. In this video I will show you how to install Docker from the command line on your Linux machine. I am using a basic Linux distribution, Ubuntu, but this installation will work on pretty much any Ubuntu version. First, make sure you are logged in as the root user in your terminal; you can check this by typing whoami, and if you are not root, simply run sudo su to get root privileges. After that, make sure you have removed any old Docker packages from your machine if you previously installed them, with sudo apt-get remove docker docker-engine docker.io containerd runc; it is okay if apt-get reports that none of the packages are installed, which is completely normal if you never installed Docker before. Then go to docs.docker.com, Engine, Install, Ubuntu, and copy the rest of the lines from there to save some time: copy the sudo apt-get install line for the certificates and prerequisites, go back to the terminal and simply paste it; that will update apt and install the certificates. Then make a new directory called keyrings and copy the next line to add Docker's GPG key; this is pretty simple, you are just copying and pasting what you get from docs.docker.com (select yes where prompted). Then set up the repository with the command below, copy and paste it as well, and finally install the Docker Engine by copying the install command, which installs the command-line tool for Docker. As you can see, Docker is now successfully installed, and if you type docker --version you can see that Docker is installed at version 20.10.8. That is how you install Docker on Linux. If you run docker images you can see that I already have a few images installed; for you this will be empty, but once we start pulling images you will see something here, and you will also be able to create containers from the command line.
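For reference, the lines copied from docs.docker.com in this video looked roughly like the following at the time of recording; always check the current install page, because the exact commands may have changed.

# Prerequisites and Docker's GPG key
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Set up the repository and install the Docker Engine
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
docker --version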
In audit.rules you can see a few example entries — these are the rules that enable you to audit certain directories, and you can copy and paste your own rules right here; I will attach the ones I use in the description of this section so you can reuse them. Save the file and then run sudo service auditd restart. From now on, logs will be generated: if we go to /var/log/audit, you'll see a file called audit.log, and if you check it with more audit.log you can see a huge log of the auditing that has been performed on your Docker host. You don't need to know everything inside this log, and I'm not going to walk through every parameter here; the whole idea of this lesson is that you know how to generate a log for a particular file or directory by adding rules, and where to find the log that is produced. So this is how you can audit files using the auditctl tool. Thanks for watching this video — in the next one we'll talk about AppArmor, which is a quite useful tool for assigning each running process on your system to a security profile, where we can restrict things like file system access, network capabilities, and execution rules. That's it, thank you very much for watching, and I will see you in the next video.

Hi everyone and welcome back to the course. In this section I'll review the main container platforms that provide infrastructure for both development and operations teams; the main ones we will cover are Docker and Kubernetes, and I will also review some alternatives like Podman and other platforms. Containers are very important because they help streamline the process of moving applications through development, testing, production, and so on — Docker and Kubernetes are really helping reinvent the way applications are built and deployed, as collections of microservices rather than the previous monolithic approach. DevOps aims to improve the quality of new software versions and accelerate development, delivery, and deployment through effective cooperation and continuous automation; automated DevOps tasks include automated build processes, static and dynamic code analysis, and performance testing. The core spine of DevOps is still continuous integration and continuous delivery with automated deployment of applications, so of course Docker has to offer options for integrating with CI/CD tools like Jenkins, and it allows you to automatically pull your images from the Docker Hub repository or from version control repositories like GitHub or Bitbucket. This is how container platforms form the base of DevOps workflows: developers can create new components for an application and run them in any testing environment. So let's talk a little more about Docker. Docker is a container platform for quickly developing, deploying, and managing applications; its main task is to package software into standardized units called containers that include everything necessary for the software to run — libraries, tools, and code. With Docker you can deploy and quickly scale applications in any environment and be sure that your code will run the same.
You can do that in a production environment, in the cloud, or on a local machine. You can always access hub.docker.com, which is a repository where Docker users share the images they have created with others, and for Linux, Mac, or even Windows it is quite easy to download and install Docker and those containerized applications. So let's try to draw the difference between containers and virtual machines. You can see in front of you a diagram, defined by Docker, in which applications run inside completely independent containers; we'll compare Docker's application containers with another type of environment, the virtual machine. Containers sit at a higher level of abstraction: virtual machines abstract the physical hardware, with a hypervisor allowing multiple virtual machines to run on a single computer, and each virtual machine includes a full copy of the operating system, the applications, and the required libraries. With containers, however, resources can be isolated and services can be restricted — processes get an almost completely private view of the operating system with their own process space. As you can see, multiple containers share the same kernel, but each container can be restricted to a defined amount of resources such as CPU, memory, or input/output. Because of this structure, Docker takes better advantage of the hardware and only needs a minimal set of system files for its services to work; it doesn't need multiple copies of the same operating system installed the way virtual machines do. Containers in Docker are self-contained, so they don't need anything more than an image for their services to work: a Docker image can be understood as an operating system with the dependencies needed to support the installed applications, the container is created from a particular image, and it always works the same way. Docker images are also portable between different platforms, with the only requirement being that Docker is installed and the service is running on the host system. You can also use a repository of images — similar to GitHub or Bitbucket for code — and that service is the Docker Hub registry.

So let's talk about the Docker architecture. You can see on the graph in front of you the complete Docker architecture. Because of this architecture, Docker offers great portability: all containers are portable, so we can take them to any other Docker host without having to reconfigure anything, and Docker allows you to run your applications locally on any operating system and on any cloud server such as Google Cloud or AWS. You also get great performance, because the containers are based on Linux containers, which run directly on the kernel of the host machine, avoiding the traditional virtualization layer that is common on other platforms. And the best thing is that Docker takes care of everything: each container has everything necessary for the application to work, as you'll see in the next videos where I show you how to install Docker and run containers. As you can see in the figure, Docker uses a client–server architecture where the client communicates with the server — the daemon — which is in charge of building, executing, and distributing containers. The client and server can run on the same host or on different machines, since the communication between them is performed using a RESTful API.
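To make that client/daemon split concrete, the daemon normally listens on a local Unix socket, so you can even talk to its REST API directly with curl — a quick sketch (the /v1.41/ path assumes your daemon speaks at least that API version; check what docker version reports on your machine):

```bash
# Ask the daemon for its version over the REST API (the same socket the CLI uses)
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers through the API — equivalent to `docker ps`
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json

# The same information through the client
docker version
```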
So let's talk about the different components in this diagram. Firstly, the Docker engine, or daemon, which is the main core of Docker: it is a process that runs on the Linux host and exposes an external API for managing images and containers. This process is responsible for creating images, uploading them to and downloading them from a Docker registry, and executing and managing containers. Then we have the Docker client, which allows you to manage the Docker engine and can be configured to work with a local or a remote engine, so it lets us manage both a local development environment and a production environment. The client works with the Docker image, which is simply the template used to create containers for the application we want to deploy, and finally we have the container itself — the package with everything necessary for the application to be executed. Docker offers the ability to package and run an application in this isolated environment called a container. To use Docker, you use the command line interface, which communicates with the server through the REST API; the Docker daemon process is responsible for creating and managing objects like images, containers, networks, and volumes. So the components that make up the basic architecture of the Docker engine are the daemon, the REST API, and the command line interface — the client. The term "daemon process" means that the server works in the background of the host system and allows central control of the Docker engine; the REST API specifies a series of interfaces that allow other applications to interact with the daemon; and the command line interface is what you write in your terminal as the client, interacting with the daemon through the REST API. It's pretty simple.

Hi everyone and welcome back to this course. In this video we have a hands-on tutorial on how to install Docker and run containers, so let's get started. Here I am in Google; simply search for "docker download" and the first result will take you to the page where you can download Docker for your platform. Depending on your operating system you can download Docker from there; since I'm using macOS I will download Docker for Mac, and depending on whether you have an Intel chip or an Apple chip you choose the corresponding download — I'm on an Apple chip, so I'll pick Mac with Apple chip, and you can see the Docker installer starts downloading to my device. Once downloaded, click on it, and in the window that appears drag the icon to your Applications folder; after a short installation you'll have Docker installed and you'll be able to run all the Docker commands. Now that the installation is complete, I'll close the window and search for Docker: the application starts on your device, you see a graphical interface, and after you click OK the Docker application is up and running. Because I've never run anything in Docker here so far, there are no containers whatsoever, so let's open the terminal and run some basic containers — just search for Terminal.
I'll increase the size of the terminal so you can see better. First, write docker just to make sure you have it installed — if you see the list of commands, Docker has been installed successfully on your device. Then run docker --version to see the version of Docker and make sure it is the latest one. Now let's create a basic container: run the hello-world container with docker run hello-world. You can see it ran properly, something happened in the background, and here it is — our new container called hello-world. Let's create another container: docker run -d -p 80:80 docker/getting-started. Run that command and it will create another container, called getting-started, and this one will actually keep running — you can see it is active while the other one has exited. You can play with the containers from the graphical interface, stopping and starting them from there, or you can copy the container name, go to the terminal, and write docker start followed by the name; once I run that, the container is running again, and if you use docker stop instead, you can see the container has stopped. You can also see the port mapping; the hello-world container has none, because we didn't define a port when creating it, which is normal. Of course, if you have many containers you wouldn't want to manage them only from the graphical user interface — I prefer the terminal. As we said, the Docker client uses the remote API of the Docker engine and can be configured to talk to a remote engine, which lets us manage both local development environments and production servers. To check all the commands available in Docker, simply type docker and you will see all of them, but the most common ones are, for example: docker images, which shows all the images available on your device; docker info, which gives you information about the current Docker client and server, such as CPU cores, memory, and so on; docker pull and docker push to pull images from and push them to a registry; docker rmi to delete an image; docker run to run images; and docker ps to check the running containers. There are a lot of commands, so I really advise you to explore them — and throughout this course we're going to cover plenty of them, because Docker is a key part of DevSecOps. Now, just for fun, let's try the pull command, which downloads an image from an external repository that is not yet on your device: docker pull nginx will download and install the nginx image. You can see the image has been downloaded, and if we check with docker images, we have nginx. Once we have it, we can actually run it.
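As a quick reference, here are the basic lifecycle commands from this video gathered in one place — a sketch; the angle-bracket placeholders stand for whatever container name or ID docker ps shows you:

```bash
docker run hello-world                        # create and run a throwaway container
docker run -d -p 80:80 docker/getting-started # detached container with a published port
docker ps                                     # running containers
docker ps -a                                  # all containers, including exited ones
docker stop <container-name-or-id>            # stop a running container
docker start <container-name-or-id>           # start it again
docker rm <container-name-or-id>              # remove it once stopped
docker images                                 # images available locally
docker pull nginx                             # fetch an image without running it
```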
So let's do docker run -i -t nginx /bin/bash. Run that and you can see another container appears — the nginx container we just downloaded, now running on the device. It has been running for about 12 seconds; let's stop it so it doesn't keep running in the background, and then start it again with docker run -i -t nginx /bin/bash. The reason we add /bin/bash is so we can access the console of the nginx container: when you pass /bin/bash, imagine the container as another device — if you want to run commands inside it, /bin/bash drops you into the console of that particular container. For example, if I run ls -l here, you will see the whole file system of the container, and if you want to leave, simply type exit and you're out of the container console — you can see it now shows as exited. Let's close both of those runs, and that will be everything for this video. Thanks for watching; in the next one we will talk about how to manage your containers, and I will show you the Podman tool. Thanks for watching.

Hi everyone and welcome back to the course. In this video we'll talk about Podman and how to manage your containers. Before moving to the practical exercises, let's first talk about what Podman is. Podman is a native, open-source tool that doesn't use daemons or background processes; it is designed to facilitate finding, running, building, sharing, and deploying applications using the Open Container Initiative (OCI) format. The key feature of Podman is that it doesn't need a daemon process to control the instances of each container, which makes it possible to run containerized workloads without root privileges. Podman is a container engine that lets us run containers in a similar way to Docker, but with some key differences that you can see in front of you. The first, as I mentioned, is that it is rootless: it doesn't need root privileges for us to execute containers. Thanks to Podman's modular architecture it is not necessary to run our containers as root, which is quite a big advantage, as we can execute containers as different users with different privileges, without the risk that someone with access to a container could execute it as the root user. The second main advantage is that it is daemonless: it doesn't need to raise a single daemon for all of its services to work; Podman is closer to a microservices architecture and starts only the services needed for each container. Then you have pods — a term that also comes from Kubernetes — so we can group one or more containers into pods and manage them together; as you know, pods are the smallest deployable units of computing that you can create and manage, usually a group of one or more containers with shared storage and network resources. And finally, Podman has a command line tool quite similar to Docker's, so there aren't many differences there — if you started on Docker and want to move to Podman, you don't need to learn many new things. In addition to not using daemons, one of the key features of Podman is pods: pods allow us to group multiple containers in a common Linux namespace that shares specific resources.
You can also apply a wide variety of virtualization features in the same way, and in that way you can run multiple containers as a regular user without root privileges. The main idea when using Podman pods is to have a main container with one or more sidecar containers running in the same pod; in this way, containers within the same pod cooperate with each other to provide a particular service. Here are some of the most interesting characteristics Podman has. First, it has a syntax that is quite similar to Docker's, so you don't need to learn a new set of instructions to manage your images and containers. And yes, Podman manages the entire container ecosystem — pods, containers, images, and volumes — using the libpod library. If you look at this GitHub page, you can see there is also podman-compose, which you can install and use much like Docker Compose — but let's leave that page for now and check out the Podman commands. I'll open a terminal and zoom in a bit so you can see. First of all, if you want to check all the Podman commands, you can go to docs.podman.io, where you can see the different commands related to Podman; as I said, they are quite similar to Docker's, so the learning curve shouldn't be steep. To install Podman on your device, go to podman.io, Getting Started, Installation: for macOS you simply run brew install podman, for Windows you can follow the separate guide, and for Linux you pick your distribution and run the appropriate command. Since I'm using macOS, I just need brew install podman. Homebrew is another tool — if you don't have it yet, make sure you install it by going to brew.sh.
Copy the install command from there and you can install Homebrew, which is the installer tool. I already have it, but I'll paste the command and run it anyway — it needs root privileges, so type your password, and Homebrew will start installing on your device if it isn't there yet. Once that is done you can install Podman by simply running brew install podman; since I already have it installed, I get the message that Podman is already installed and up to date, but for you the installation will take around five to ten minutes. If you want to check all the commands available for Podman, run podman --help and you get the full list — as I said, quite similar to Docker's. To use Podman, you first need to run podman machine init; once you run this command, a Linux virtual machine image starts downloading to your device, because on macOS Podman uses a Linux VM to operate. You can then start the machine with podman machine start — and now the Podman virtual machine has started successfully in the background; if I check the processes, you can see there are some related processes running in the background as well. Now let's pull an image with podman pull docker.io/library/httpd — you can see we're pulling the image from Docker Hub into Podman. Once that completes, run podman images and you see the list of all the images you currently have; obviously we only have the httpd image we just downloaded. Now let's run it, the same way we ran containers in Docker — it's the same run command: podman run -dt -p 8080:80/tcp docker.io/library/httpd, where we specify the ports, the protocol, and finally the image. Run that, and now you have a process running in the background. How do you find it? If you do podman ps you can see one process currently running in the background, and that is our httpd container. Now if you do curl http://localhost:8080 — the same port we published for the container — you can see we get something back, which means it works: this is the output page. Let's inspect the container we just created: run podman ps to get the name or ID of the container, then podman inspect followed by the ID, and you can see the configuration of the containers on the device — here is the ID, and we only have this one container, so if you need anything related to its configuration you can check it here. Another cool feature of Podman is that you can search for images: podman search python shows all the images related to Python that you can download, using the pull command in the same way as before. Now let's stop our container and exit: podman stop followed by the container ID stops it, and you can also do podman rm followed by the ID of the container.
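Since pods are Podman's signature feature and we only ran a standalone container above, here's a small sketch of the pod workflow as well — the pod name and the 8080:80 mapping are just illustrative choices:

```bash
# Create a pod with a published port; containers added to it share its network namespace
podman pod create --name web-pod -p 8080:80

# Run a container inside the pod (rootless, no daemon involved)
podman run -d --pod web-pod docker.io/library/nginx

# See the pod and the containers it groups
podman pod ps
podman ps --pod

# Tear everything down: removes the pod together with its containers
podman pod rm -f web-pod
```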
Once the ID is printed back, the container is removed: if I write podman ps you'll see there are no processes running, and if I run curl again there is absolutely nothing — we fail to connect, because port 8080 is no longer bound to a container. I know Podman might not feel as intuitive as Docker, since it doesn't have a graphical user interface, but trust me, it is quite a powerful tool that you can definitely use. You should know that these tools for managing containers have completely changed the way people think about software development, deployment, and maintenance. Containers are so light and flexible that they've given rise to new application architectures: the new approach consists of packaging the different services that make up an application into separate containers, and then deploying those containers across a cluster of physical or virtual machines. When you're developing a simple piece of code, administering it doesn't require large resources, but as the project grows, the need for container orchestration appears. Container orchestration is tooling that automates the deployment, administration, scaling, networking, and availability of applications based on this technology. Nowadays applications are quite complex and tend to move toward microservices, meaning we have at least one container for the front end, one or more for the service interfaces, and another for the database — all of which creates the need for container orchestration: a tool or system that automates the deployment, management, scaling, interconnection, and availability of container-based applications. Container orchestration is basically responsible for deploying container-based services automatically, for auto-scaling and load balancing the system, and for monitoring the health of each container. To do that with Docker, you can use Docker Compose. As you can see, we're on the Docker Compose front page: it basically allows you to connect several containers and run them with a single command. It was implemented in Python and uses a configuration file written in YAML. Docker Compose lets you define a series of containers and the relationships between them in a YAML file that is very intuitive in its format. As you can see, this can be done in a few steps: creating a directory for the project — for example a Flask app — then creating a Dockerfile and a Compose file, which is our YAML file that includes all the dependencies and their interrelations. This allows you to manage multi-container applications, and the techniques you need don't really differ from the standard techniques we use with a single container: with the docker compose command and its subcommands you manage the entire life cycle of the application. We won't go into too much detail right now — I just wanted to show you the front page so you're aware of it — but we will definitely explore this further and apply it to our own application: we'll have projects where we create a Dockerfile and a YAML file, and the YAML file holds the libraries, volumes, ports, and other important information for each of the containers we have.
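Just so you can picture it, a Compose file in the spirit of that Flask example looks roughly like this — a sketch, with illustrative service names and ports, assuming a Dockerfile for the web service sits in the same directory:

```bash
# Write a minimal docker-compose.yml describing two services: a web app and a Redis backend
cat > docker-compose.yml <<'EOF'
services:
  web:
    build: .              # build the image from the Dockerfile in this directory
    ports:
      - "5000:5000"
  redis:
    image: redis:alpine
EOF

# One command manages the whole multi-container application
docker compose up -d      # (older installs use the separate docker-compose binary)
docker compose ps
docker compose down
```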
So that's it — thank you very much for watching this video, and in the next one we'll talk about Kubernetes so I can get you started there; then in the following sections we will explore how Docker and Kubernetes are key components for building DevSecOps. Thanks for watching.

Hi everyone and welcome back to the course. In this video we'll talk about Kubernetes. The advantage of using containers to run a group of software applications is already established with Docker; however, in a production environment, managing and running containers in a way that minimizes downtime is critical, and this is where Kubernetes comes in — Kubernetes can automatically start a new container if one fails. Kubernetes follows a master/worker architecture and is also known as k8s; it is the most popular container orchestration engine on the market, an open-source system for applications running in software containers that automates the deployment, scaling, and management of distributed applications. Kubernetes groups containers into logical units called pods, which are the basic unit the orchestrator can distribute across the cluster; one pod can hold a set of containers that share the same storage and a single IP address. As I mentioned, Kubernetes has a master/worker architecture: the master, as you can see in the figure, controls and schedules all the activity, while the workers are the nodes where the containers are actually executed, and you can have multiple nodes for one Kubernetes master. The ability to have multiple nodes of orchestrated containers makes Kubernetes a perfect fit for microservices-based applications. The master acts as the central control plane of the cluster and is composed of four basic elements that allow coordination within the cluster and distribution of tasks. The first is the API server, which handles all operations against the cluster through RESTful API services and acts as the central management point. Then we have etcd, an open-source key-value store that can be considered the Kubernetes cluster's memory — it was developed especially for distributed systems to store configuration data. Then you have the scheduler, which distributes the pods in the cluster: it works out how many resources a pod needs and fits it within the resources available on each node. And finally we have the controller manager, a service of the Kubernetes master that manages the state of the cluster and executes the routine tasks that drive the orchestration — its main function is to ensure that the cluster state matches the state that was previously declared as the objective. While the master is responsible for the orchestration, the pods are distributed across different nodes, called workers; to do this, each node needs to run a container engine such as Docker or Podman. You can picture it as Kubernetes running one master plus several nodes, each with a container engine like Docker or Podman that in turn runs multiple containers. So let's talk about what each Kubernetes node includes. The first piece is the kubelet, which directs and manages the node: this process maintains communication and ensures that instructions reach the worker node.
The kubelet agent receives the requests and supervises their execution on each node. Then you have the kube-proxy, a proxy service that runs on each Kubernetes node to serve the requests that arrive at the worker nodes and provide the containerized services to users — as you can see, these proxies go directly to the users, and this is how the services are exposed. To understand Kubernetes better, let's talk about some of its key capabilities and terms. The advantages of running a group of software applications in containers are already considerable, and Kubernetes extends them by letting us run many container workloads made up of many containers. Here are some of the key capabilities Kubernetes provides. The first is service discovery and load balancing — quite an important point: Kubernetes can expose a container using its own DNS name and IP address, and it can balance and distribute traffic so that the deployment stays stable. The next key point is storage orchestration: it allows you to automatically mount a storage system of your choice, such as local storage, public cloud storage, or other providers. It also lets you describe the desired state of your deployed containers and move the current state toward that desired state: for example, you can automate Kubernetes to create new containers for a deployment, remove existing containers, and adopt all of their resources into the new containers. Kubernetes is also self-healing: it restarts containers that fail and replaces containers that stop responding within the cluster. Then there is resource management — as you can see, Kubernetes is all about management — which lets you specify how much CPU and memory (RAM) each container needs; when containers have declared resource requests, Kubernetes can often make better placement decisions than the developers would. It also provides secret and configuration management: it lets us store and manage configuration for the containers as well as sensitive information like passwords, keys, and tokens, and both the sensitive data and the configuration parameters can be updated without rebuilding the container images and without exposing the secrets. There are also a few additional terms you need in order to understand Kubernetes properly. Firstly, the cluster, which is the set of physical or virtual compute and storage resources used by Kubernetes, where pods are deployed, managed, and replicated. Then the pods, the smallest unit, which can include one or more containers; in many cases a pod consists of a single container, but its ability to bundle several containers that run very close to each other is quite a powerful feature. Then the replication controller, which is the Kubernetes mechanism that ensures a pod is running a certain number of replicas: it raises more replicas if we need more, kills them if we need fewer, and starts new replicas to keep the defined number if any of them fails or dies. Then services, which define how to access a group of pods and give access to the containers through a single DNS name and IP address. And finally labels, which are used to organize and select groups of objects using key/value pairs.
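To tie those terms together, here's what a minimal pod definition looks like in practice — a sketch that assumes you already have a cluster and kubectl configured (we'll set that up properly in the Kubernetes sections later); the names, label, and resource numbers are illustrative:

```bash
cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: web            # a label that a Service selector could match on
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
      resources:
        requests:        # resource requests help the scheduler place the pod
          cpu: "250m"
          memory: "64Mi"
EOF

kubectl apply -f nginx-pod.yaml   # the API server accepts it and the scheduler picks a node
kubectl get pods -o wide          # shows the node and the pod IP
kubectl delete -f nginx-pod.yaml  # clean up
```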
So a pod can contain one or more containers running at the same time, and it is basically the unit that Kubernetes manages; managing containers as pods brings several advantages. Before starting with the Kubernetes configuration, you should understand four key concepts. The first is the Kubernetes master, which acts as the node from which the pods, replication controllers, and services of the Kubernetes environment are deployed and managed — later on I will show you how to create one. Then the Kubernetes nodes, which provide the environment in which the containers are executed; each node runs agent processes such as the kubelet and kube-proxy so the containers on that node can be started and reached. Then we have the kubectl command, which is used to manage Kubernetes through the master node: with kubectl we can create, get, describe, or delete any of the resources Kubernetes manages, such as pods, replication controllers, and services. And finally the resource files, in YAML or JSON, which are used to define and create pods. You can install and deploy Kubernetes from kubernetes.io, but we will do that in the later sections that deal specifically with Kubernetes. That will be everything for now — thank you very much for watching, and in the next section I will teach you how to manage your containers and Docker images.

Hi everyone and welcome back to the course. In this section we will talk about how you can manage containers and Docker images. Docker images are simply read-only templates that are used as the basis for launching containers; this means that everything we do inside a container only has effect inside it — we don't make any modifications to the image the container was created from. So if we want a customized base for our future containers, we should create a custom image. As you know — and I'll open a terminal here — we can get Docker images with docker pull, which downloads the image and saves it to our device. For example, if I want to pull Ubuntu, which is a Linux image, I simply run docker pull ubuntu; when I hit enter you can see the image is already up to date, since I have it on my device, but if you don't, it will of course start downloading. You can easily save an image by writing docker save ubuntu -o ubuntu.tar: that packages the image and generates a tar file with the name you chose. If you also want to create a backup of the image you can use shell redirection: docker save ubuntu > backup_ubuntu.tar creates a backup file in the same way. If I now check with docker images, you'll see several different images, and one of them is ubuntu. All of these images are sets of directories and files with a specific structure: each folder refers to one of the layers of the image, and within each layer there are files that reference that layer plus a compressed file containing the piece of the file system that forms the image. When the image is extracted and assembled so it can be used, the content of each layer is unpacked in order to form the final image on top of the base image; this generates a file system whose content is built up and modified layer by layer. So you can look at an image as a permanently stored template for containers.
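As a quick reference, here are the save-and-restore commands from this part in one place — a sketch; docker load isn't shown in the video, but it's the standard counterpart for bringing a saved tar archive back in (for example on another host):

```bash
docker pull ubuntu                      # make sure the image is present locally
docker save ubuntu -o ubuntu.tar        # package the image (all layers + metadata) into a tar archive
docker save ubuntu > backup_ubuntu.tar  # same thing using shell redirection
tar tvf ubuntu.tar                      # list the layer directories inside the archive
docker load -i ubuntu.tar               # restore the image from the archive on this or another host
```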
If the image is the class, then the container is the object — and you can create multiple containers from a single image. If you want to create and run a new container from an image, you'll see how easy it is: just write docker create followed by an image — let's take one of the images we have, say this one — then run it, and you'll see we have a new container; you can run it, stop it, and basically do whatever you like with it. You can also look inside the tar archive we created earlier: if you remember, we saved Ubuntu to a tar file, so run tar tvf ubuntu.tar and it will list everything packed inside the archive — you can see the full contents. If you want to check the containers that are currently running, simply run docker ps; right now it shows only one working container, the one we just created from the nginx image. So from one image you can create multiple containers, and I'll create one more just to prove it: docker create nginx creates another one, and then another. Right now docker ps still shows only one running, but if I start the other instances we just created and run docker ps again, you'll see three containers, all created from the same image. You can see how easy it is to create instances of any type of application and build containers from it using Docker. Of course, I'll stop the containers we just created so we have a clean start for the next commands. Back in the terminal, let's try a few more things. Docker layers are very much like Git commits: they store the difference between the previous and the current version of an image. The layers use space, and the more layers you have, the larger the final image will be; Git repositories are similar, in that Git stores all the changes between commits and the size of your repository grows with them. When you request an image from a repository, only the layers you don't already have locally are downloaded — that is exactly why Docker images use this structure. We can see the layers of an image by writing docker image history followed by the image name: for example, docker image history ubuntu:latest shows all the layers of that image, and you can do the same for your other images — if I run it for nginx, you can see how many layers that particular image has. Another thing I want to mention here are image tags: they allow you to identify versions of an image, because the different versions are associated with their tags. On this page, for example, you can see the tags available for Ubuntu, and you can download a specific tagged image using the docker pull command: if you want the 18.04 version, go back to the terminal and write docker image pull ubuntu:18.04 to download specifically that version. And if I now run docker images, you'll see two Ubuntu images: one of them is latest, while the other is the specific 18.04 version we wanted.
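Here are those layer- and tag-related commands in one place, plus one extra inspect call that prints the layer digests straight from the image metadata — a small sketch; the image names are just the ones we've been using:

```bash
docker image history ubuntu:latest       # one row per layer, with the instruction that created it
docker image history nginx

# The layer digests are also visible in the image metadata
docker image inspect --format '{{json .RootFS.Layers}}' ubuntu:latest

docker image pull ubuntu:18.04           # pull a specific tagged version instead of :latest
docker images                            # both tags now show up as separate entries
```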
This is why I keep repeating that images are essentially layers mounted on top of the previous layers, with latest being the highest layer. All layers in an image are read-only; when a new container is created from an image, a writable copy is added on top. This layered organization and the copy-on-write strategy promote two major best practices for creating Docker images. The first is Docker's aim of minimalist images: smaller images benefit stability, security, and loading time, and you can always install extra tools in a container if you need to debug a development problem. On the other side, a base image can contain many layers and add many capabilities — you can find official images for many distributions, programming languages, and databases on the Docker Hub repository; simply go to hub.docker.com. That will be everything for this video — thanks for watching, and in the next one we'll cover a quite important topic: how to build and manage the Dockerfile. Thanks for watching.

Hi everyone and welcome back to the course. In this video we'll talk about the Dockerfile: what it is, how to build it, how to manage it, and how to write the instructions that go into it. One of the great things about containers built automatically is that Docker Hub will show you the Dockerfile used to build them, which gives you some level of transparency over what you're downloading — for example, you can see the Dockerfile that uses the Ubuntu base image right here at this address. Images are created using a series of commands called instructions. The instructions are placed in the Dockerfile, which is basically a text file containing a collection of changes to the root file system and the corresponding execution parameters for the container; the result of that file is the final image. Each instruction creates a new layer in the image, which then becomes the parent layer for the next step of the build. So the Dockerfile is a text document with all the commands we would otherwise run on the command line to build the image; the image is built with the docker build command, which follows all the instructions in the file and is executed by the Docker engine. A layer is created for each instruction, and layers can be reused when they are cached, which significantly speeds up the build: for example, an instruction that needs an image from a registry in the cloud would be a heavy workload if it had to be downloaded on every run, so cached data is used instead and the already-downloaded image is reused; every time cached data is used, a message is printed to the console so the user knows about it. As you can see in the figure in front of you, which shows the process of using a Dockerfile, the docker build command builds an image by following the instructions of the Dockerfile — this square represents the Dockerfile building the current image. It's important to note that docker build sends the entire build context, so it's a good practice to put the Dockerfile in a clean directory and add only the files that are actually needed.
If you go to the terminal, you can simply write docker build --help and you'll see all the options for building an image from a Dockerfile. Normally you write docker build, then the options, then the path — you need both to build an image. The most commonly used options you see in front of you are: -t, to create an image with a specific name; --no-cache, to avoid using the cache while building (by default Docker checks whether cached layers exist and builds from them); and finally --pull, which tells Docker to always attempt to download a newer version of the base image — we can use it to force a fresh download. The build command can be executed from the directory where the Dockerfile is located, or, if the Dockerfile lives somewhere else, you pass its path, as you can see in the description. We can also assign tags to images so they are easy to find, using the -t option, and you can see an example in front of you: this is the final syntax for building an image from a Dockerfile path, and it ensures the image created from the Dockerfile is built in its own context and gets a specific name in repository:tag form. On the Docker web page you can see that a Dockerfile always starts with instructions written in capital letters. For example, you would write FROM and then select the image, which establishes the base image and initializes the construction of the new image; you might have a COPY instruction, which copies files and directories into the container's file system; and somewhere near the end you would have a RUN instruction, which executes the command you want in the context of the image. One RUN instruction means one layer; several of them mean several layers, which is why you will often see multiple commands chained inside a single RUN instruction just to save layers. In the following example, firstly we use FROM ubuntu to set the base image for the instructions — the image can be any local or public image, and if it's not found locally, docker build will try to download it from a public registry. Then the RUN instruction executes its command in a new layer on top of the current image and commits the result; the generated image is then used for the next instruction in the Dockerfile, and you'll see that in practice here as well. The RUN instruction is only interpreted when docker build is used to create an image, and its purpose is to execute commands that modify the image. So let's see how we can create a completely new image from a Dockerfile. I'll create a new folder on my desktop, call it docker-one, and inside it create an empty text file named Dockerfile — make sure the file is not a .txt but is simply called Dockerfile, with no extension. In this file, let's write some instructions.
We'll create a local image the way we like, using Ubuntu as a base. Write FROM in capitals, then ubuntu, and you can specify the Ubuntu version you'd like, say 18.04. Then write RUN apt-get update and a backslash, then apt-get install -y redis-server and another backslash, and finally apt-get clean. Next write EXPOSE 6379, which is going to be our port. Then write CMD — the command instruction — which lets us establish the command the container will run: in square brackets and quotes, "redis-server", "--protected-mode", "no". Those are the commands our container will understand. Add a little spacing and save the Dockerfile. If you go back to the terminal and into the docker-one directory we just created — cd docker-one, then ls — you can see our Dockerfile is there, so we're in the right place. The next thing you need to do to create your new image is run docker build -t my-redis . — my-redis is the name of our image, and the dot means use everything in the current directory as the build context. Hit return and your new image is created; for me it took about a second, because the layers were already cached from when I ran this command before. Now that you have this image, you can easily create containers from it: if I run docker images, you can see my-redis, and also a my-redis-one image I created earlier — both were new images built from my Dockerfile. In that way you can easily create images, run them like any other image, combine applications as you wish, and then create containers from them. This was everything I wanted to share in this video — thank you very much for watching, and in the next one we'll talk about how you can manage your Docker containers. Thanks for watching.

Hi everyone and welcome back to the course. In this video I will teach you how to manage your Docker containers, and we'll elaborate on some of the key Docker commands that help you do that. Here I have a terminal open, and Docker with a few nginx containers from the previous videos — you can create those by simply typing docker create nginx. We already learned that Docker Hub can host and deliver images for you, and you can configure Docker Hub repositories in two ways: regular repositories, which allow you to upload and update images yourself, and automated builds, where you link a GitHub or Bitbucket account so that any change to that repository triggers a rebuild of the image. When getting packages, though, you usually want to keep things simple. For example, if you'd like to create a new container from an image you don't have — say a container that runs the Python engine — you can simply write docker search python (let me make the terminal a bit bigger) and you'll see all the images that are related to Python. Another way to search for images is on Docker Hub itself: type Python there and you'll find the Python images, and from there you can simply download the Python image and run it.
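Before we go further — here is the Redis Dockerfile from the previous exercise written out cleanly as a reference, a sketch of what we typed (the optional docker run line at the end, with the 6379 port mapping and container name, is my addition rather than something from the video):

```bash
mkdir -p docker-one && cd docker-one

cat > Dockerfile <<'EOF'
FROM ubuntu:18.04

# One RUN instruction = one layer: update, install Redis, and clean up together
RUN apt-get update && \
    apt-get install -y redis-server && \
    apt-get clean

EXPOSE 6379

CMD ["redis-server", "--protected-mode", "no"]
EOF

docker build -t my-redis .

# Optional: run it and publish the Redis port
docker run -d -p 6379:6379 --name my-redis-container my-redis
```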
There is actually another very simple way to get the images you like, and you don't necessarily need to download them first. You can simply type docker run -t -i — where -t allocates a terminal and -i makes the session interactive — then the image you want to run, python, and then /bin/bash to get a shell inside the Python container. Obviously, as I said, we don't currently have the Python image on the device, so running it would normally make no sense — however, when you execute this command, Docker automatically notices that you don't have the Python image locally, downloads it for you, and runs the container. Let's do that: you can see the message that Python can't be found locally, and now all the layers of the Python image are being downloaded. Most of them don't take long, but some are a bit larger — this one is about 189 MB — so it takes a little while. Once it's done, I'll open the Docker graphical user interface and you can see the container there: we're now running Python, so with one command we not only downloaded the Python image but also ran an instance of it as a container. I'll leave the Docker Desktop view open and exit the container from the terminal; when we type exit, you can see it now shows as exited. There's also another way to create a container, using the -d, or detach, option, which runs the container in background mode — the detach option indicates that it runs in the background like a daemon process. You can write docker run --detach, or combine the flags: docker run -d -ti --name python2 python:latest /bin/bash — we name this one python2, define the image as python:latest, and ask for the shell. Let's run that and see what happens: we created a container, but we didn't drop into the shell, because the process now runs in the background — we simply got the container ID printed here. So now we have one container running; I'll also start the other Python container we have, and one of the nginx containers, so now we have three microservice applications running. If I go back to the terminal and want to inspect the images or containers that are running, you can use docker inspect: to get the ID of an image you run docker images, and then docker inspect on our Python image shows all of its configuration and dependencies. From the previous video you should already know how to use the inspect option — you can inspect containers and images by ID, as you just saw, and it's a quite useful command. We can also check the packages installed inside a Docker container, for example with the dpkg command. We first need the ID of a running container: do docker exec, then the ID or name of the container — let's copy the ID of the python2 container and paste it here — and then dpkg -1... actually, we need to write -l (a lowercase L) instead of -1.
Let's run that — and yes, we got it: what you see here are all the packages involved in building our Python container. If you ever need to check which packages make up a particular container — and for Python there are quite a lot — you can definitely use this command to list all the libraries and dependencies. That is the easiest way to manage your packages and check the information related to them. Thanks for watching, and in the next video we'll talk about how you can optimize your Docker images.

Hi everyone and welcome back, to this video where we'll talk about how you can optimize your Docker images. Optimizing space and reducing container size takes only a small set of commands and some planning, but it is essential for creating efficient container environments: considering that Docker is designed to run large numbers of containers, both space and speed are key factors in development and production. One way to optimize images is to create as few layers as possible. For example, in the set of instructions in front of you — a typical Dockerfile — you can see four or even five RUN layers. During a build, Docker tries to reuse layers from a previous build whenever possible, skipping steps that could be quite costly. To take advantage of that, firstly place the Dockerfile instructions that are most likely to change toward the end of the file, so Docker can reuse the earlier layers. You can also group instructions into the same layer, as in the last instruction here: we can group similar commands together — for example the apt-get commands, which usually require updating the package repositories first — so the whole thing is executed in one RUN instruction and generates a single layer instead of several. Building a Docker image from a Dockerfile can be quite an expensive process, since it can involve installing a large number of libraries, and at the same time it's repetitive, because successive builds of the same Dockerfile are very similar to each other. This is why Docker introduces caching to optimize the image build: each time an image is rebuilt from a Dockerfile, Docker checks whether the instructions have already been executed and whether their results are available in the cache; if they are, Docker uses the cached data by default and reuses it in the new build. Starting from a base image that is already cached, each instruction is compared against the images derived from that base to see whether one of them was created using exactly the same instruction — and in that way the cache is validated or invalidated. For ADD and COPY instructions, the content of the files going into the image is examined and a checksum is calculated for each file; during each cache lookup, the checksum is compared against the checksums of the images already built, so the cache is invalidated if anything changes. Here are some key aspects of the Docker cache to bear in mind as a reference. The Docker cache is always local, which means all the Dockerfile instructions are executed in full the first time you build a particular Dockerfile on a machine, even if the image has already been built and pushed to a Docker registry.
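To make the layer-grouping point concrete, here's a small before/after sketch — the packages are just illustrative, and the rm -rf of the apt lists is an extra cleanup step commonly paired with apt-get clean, not something from the video:

```bash
# Before: every RUN creates its own layer, and the apt cache gets baked into an earlier layer
cat > Dockerfile.multilayer <<'EOF'
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y vim
RUN apt-get clean
EOF

# After: one RUN instruction, one layer, and the cleanup actually shrinks the image
cat > Dockerfile.grouped <<'EOF'
FROM ubuntu:18.04
RUN apt-get update && \
    apt-get install -y curl vim && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
EOF

docker build -f Dockerfile.grouped -t grouped-demo .
```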
So here are some key aspects of the Docker cache that you need to bear in mind, so you have them as a reference. The Docker cache is always local: all the Dockerfile instructions will be completely executed the first time you build a particular Dockerfile on a machine, even if the image has already been built and pushed to a Docker registry. The cache is also invalidated when an instruction changes; in that case that instruction, and everything after it, is executed without the cache. Also, the behaviour of the ADD and COPY instructions is different in terms of caching: even if the instructions themselves don't change, they invalidate the cache if the content of the files being copied has been modified. So as you can see, to get cache hits it's quite important for pretty much everything to stay the same. Of course, you can also disable the cache by passing --no-cache=true, and this flag simply bypasses the cache entirely. So let's now create a new folder on our desktop, because I want to show you how to create a very simple image for a JavaScript app. I'll create a new folder and call it js-app. Let's enter the js-app folder and create a few files: let's do vim Dockerfile, and this will be our Dockerfile. You can use whatever text editor you like to edit your Dockerfile, but in this case I'll simply use vim since I'm already in the console. So write the instructions here: FROM node:latest, then EXPOSE 3000, which is the port, then WORKDIR /app, which is the working directory, then COPY package.json index.js ./ — we'll actually create those files in a minute; the package.json and the index.js will be picked up by the Dockerfile and we'll build a Docker image completely from scratch. Then RUN npm install, and finally CMD ["npm", "start"]. Let's save that in vim, which you can do with :wq!, and now you can see that we have a new Dockerfile right here. If I open it with any text editor, for example TextEdit, you can see that it has exactly the instructions we just wrote. The only mistake I made is that I wrote a single quote instead of a double quote in one place, but that's fine, I just fixed it. So let's save the Dockerfile and we're done; this is our Dockerfile with the instructions. Now the next thing we need to do is create the JavaScript and JSON files.
You can simply create them as text files, or use any editor you like; I have Sublime Text, so I'll use that. Here we'll make a new file and create a hello-world app with Express. Let's do const express = require('express'), and before continuing let's save this file; I'll call it index.js, since this will basically be our web page. I actually want to save it in our current folder, so I'll go to the desktop and from there into js-app, and I just saved it there as index.js. This is a JavaScript file, and as long as you save it with the .js extension you get syntax highlighting, which helps a lot when you want to see the different JavaScript keywords. Now let's do const app = express(), then app.get('/', (req, res) => ...), and point that to res.send('Hello World!') — let's add an exclamation mark and close the brackets, and I'll delete this extra one, that's good. After that let's do app.listen, and here we define the port, 3000, followed by a callback, and let's log a message there: console.log('App is listening on port 3000'). Let's save that; this is our JavaScript file. Once you're done with this, let's create another file, the JSON file responsible for running the app: create a new file and save it in our js-app folder as package.json. The idea here, guys, is simply to define the name of the app, its main file and the start script. So let's do "name": "hello-world-app"; then let's choose the main file of the app, "main": "index.js"; then "scripts", and here we define the start command, "start": "node index.js". Let's save that, and this is our JSON file. If you go back to our directory you'll see we now have three files: one JSON file, one JavaScript file, which will serve our page, and finally the Dockerfile with the instructions for Docker. So now we're completely ready to create our image.
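To keep things easy to follow, here's roughly what the three files look like once cleaned up; treat this as a sketch of what we just typed rather than an exact copy. Note that I've also added a dependencies entry for Express (the version is just a reasonable guess), so that npm install inside the image pulls it in; in the video we install Express separately.

Dockerfile:
FROM node:latest
EXPOSE 3000
WORKDIR /app
COPY package.json index.js ./
RUN npm install
CMD ["npm", "start"]

index.js:
const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('Hello World!'));
app.listen(3000, () => console.log('App is listening on port 3000'));

package.json:
{
  "name": "hello-world-app",
  "main": "index.js",
  "scripts": { "start": "node index.js" },
  "dependencies": { "express": "^4.18.2" }
}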
So let's build it: I'm in the folder, so let's do docker build -t node-app — and you also need the dot at the end so it picks up all the files in the build context. Once I run that, you can see that we pull all the dependencies to create the image, and our image is created. If I now do docker images, you can see our node-app, created manually by us. Also make sure that you install one dependency I didn't mention: the Express dependency, so run npm install express before running the app locally, otherwise you might get some errors. And if I now write docker history node-app, you'll see the different layers of our app. Now, since we already have the app, you can see that our image is about 941 MB, so let's try to reduce that size by editing the Dockerfile; I'll open it in Sublime Text. Since this video is about optimizing your Docker images, let's see how we can actually shrink it: currently, if I write docker images, our image is 941 MB. One very good way to do this is by using Alpine Linux, and I'll show you how. We need to modify our Dockerfile, and as always the instructions are the most important part of how the image is built. Firstly, in the FROM line, let's keep node but instead of latest use 15, and add "as build"; let's move the WORKDIR up above, then copy package.json and index.js, keep RUN npm install in the same stage, and leave the last two lines as well — so far we've just moved EXPOSE to the second line from the bottom. But if you now add an extra stage, FROM node:15-alpine, you get a second, much smaller base: Alpine-based images can produce the smallest images and run applications with minimal disk and memory resources, and images based on this distribution are much faster to download and configure, so this should really reduce the size of our image. Let's save that, go back to our application folder, and build another image: docker build -t node-alpine, this time using the updated Dockerfile, and don't forget the dot at the end so it collects all the files. Here we got an error, so let's check the instructions — yes, I missed one line: under the second FROM you of course also need to copy the build output, so let's add COPY --from=build /app /app, which connects the Alpine stage to our build stage, because we named it "build"; and let's also fix the spelling of alpine, because I misspelled it. Run the command again, and you can see the package now builds successfully. It took only a few seconds for me, since I've run this before and have a cache, but for you it might take a couple of minutes. Let's now write docker images, and you can see that our new image, built on Alpine, is roughly nine times smaller than our previous build. So you can see how significant the instructions are and how they can really produce images of a much smaller size. The final thing I want to talk about are distroless images: they contain only the application and its runtime dependencies, and they do not contain, for example, package managers or the usual programs you'd find in a standard Linux distribution. You can see the size difference for the same language runtime when it's packaged in distroless mode or not: if I do docker pull python, and at the same time docker pull gcr.io/distroless/python3, then once that's installed let's check with docker images — the official python image is about 868 MB, while the gcr.io distroless python3 image is only around 50 MB. So you can really see the difference between the official python image and the distroless-based one; this saves disk space and network traffic, and it can also improve our security, because not having libraries and services we don't actually need reduces the security risk and the number of alerts from image scanners about obsolete or vulnerable versions. Those are a few ways to drastically reduce image size. Finally, to end this video, I'll show you how to remove images you don't need: simply do docker image remove and specify either the name or the ID of the image. For example, to remove the distroless image I can copy and paste its ID, and now you can see this image no longer exists on our system. And if you want to try running the applications we just built, you can simply do docker run node-alpine, and you can see that our application started and you got in the console that the app is listening on port 3000.
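Here's a rough sketch of the multi-stage Dockerfile we ended up with, plus the commands for comparing sizes; node:15 and node:15-alpine are just the tags used in the demo, so feel free to use a newer Node version.

# Dockerfile (multi-stage build)
FROM node:15 AS build
WORKDIR /app
COPY package.json index.js ./
RUN npm install

FROM node:15-alpine
COPY --from=build /app /app
WORKDIR /app
EXPOSE 3000
CMD ["npm", "start"]

# Build it and compare image sizes
# docker build -t node-alpine .
# docker images
# docker pull python
# docker pull gcr.io/distroless/python3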
So you can play with the containers, guys, and with the images: create multiple containers and check whether they work properly, I'm sure you'll have a lot of fun. Thanks for watching this video — it's the last one of this section, so you've learned quite a lot here about how to work with Docker, manage the Dockerfile and optimize your images. Thanks for watching, and in the next video we'll finally start talking about security and how to improve and manage security in Docker. Hi everyone. This section will cover the best security practices and the Docker capabilities with which containers can be granted more, or fewer, features. While Docker provides a central registry to store public images, you might not want your images to be accessible to the whole world, so in this section I'll also teach you how to use a private registry, and I'll review Docker Content Trust and the Docker registry, which give you a secure way to manage the images you upload to Docker Hub. From the security point of view, Docker containers use the resources of the host machine, but at the same time they have their own runtime environment. This means a Linux container cannot access other containers or the underlying operating system, and the only way to communicate with other networks and containers is through a specific network configuration. The most significant advantage of container-based virtualization is that applications with different requirements can run isolated from each other without taking on the overhead of separate guest operating systems. As you can see here, container technology takes advantage of two basic features of the Linux kernel: namespaces and control groups. On one side, namespaces provide isolation for processes and mount points, so the processes that run in one container cannot interfere with, or even see, processes that run in another container; the isolation of mount points means a container cannot interact with the mount points of other containers. On the other side, control groups, or cgroups, are a Linux kernel feature that limits the amount of resources, at the level of CPU and memory, that a container can use; this ensures every container gets only the resources it needs. The development team behind Docker is also well aware of security problems, considering them an obstacle to the consolidation of the technology in production systems, so some of the key isolation techniques Docker Engine supports are, firstly, AppArmor, which lets you regulate the permissions and the access containers have to the file system; then SELinux, which provides a system of rules for implementing access controls to kernel resources; and finally secure computing mode, or seccomp, which filters kernel system calls. While Docker makes virtualization easy, we sometimes forget the security implications of running Docker containers: we must always keep in mind that Docker requires root privileges to work in normal conditions, and you should always remember that from a security point of view. The Docker daemon is responsible for creating and managing containers, which includes creating file systems, assigning IP addresses, routing packets, process management and other tasks that require administrator privileges, and for that reason, as I mentioned, the daemon has to be started as an administrator.
Starting new containers, stopping them, reconfiguring or running them are some of the main actions we perform on containers, and one of Docker's ultimate goals is to be able to run even the daemon as a non-root user, without affecting functionality, by delegating the operations that genuinely require root privileges. So let's look at some of the best practices for Docker security. One of the first and most important is that you should always run the Docker processes on a dedicated server, isolated from other virtual machines. It is also very important to take special care when linking certain Docker host directories as volumes, because a container can gain full read and write access and perform critical operations on those directories. From the point of view of securing communications, the best option is to use TLS/SSL-based authentication; you can definitely read up on what the SSL protocol is, it is pretty much the most widely used security control out there. You should also always avoid running processes with root privileges inside containers, and — this is one of the key principles, which we'll cover in the next videos — you should enable specific security profiles, namely AppArmor and SELinux, on the Docker host. Always keep in mind that all of your containers share the host kernel, so it's quite important to keep the kernel updated with the latest security patches. Since this course covers the complete DevSecOps picture, let's also go through some key principles for improving container security specifically. Use only one application per container, which is the microservice-oriented approach; never run containers as root; and make sure to disable the setuid permissions so nobody can abuse them. You can also use the --cap-drop and --cap-add flags, which I'll actually cover in the next videos, to remove and add capabilities of your container. It is also advisable not to pass secrets through environment variables or to run containers in privileged mode if you are going to share secrets. Keep Docker updated to the latest version, so that known security issues are fixed and you have the latest features Docker has incorporated into its core. And always bear in mind that the kernel is one of the most vulnerable components of container management, because it is shared between all containers, so take special care to keep the Linux kernel up to date as well. So let's go, guys, to your terminal; hopefully you already have Docker installed and the Docker application open. You don't need all of the containers that I currently have, because we'll create new ones as we go, so you don't need them in order to execute the commands in this video. The first thing I want to cover is that all containers actually default to root as their user: every time you create a container, you're immediately running as root. For example, if I create a new Ubuntu container without specifying any user, here's what I get: type docker run -v /bin:/host/bin -it --rm ubuntu sh. When I run that, we've just created an Ubuntu container, and you can see it right here at the bottom, currently running because we used the run command, and now we're inside that machine, in its shell. Let's type whoami, and you can see that by default you're already the root user; and if you do id, you'll see that all the IDs you currently have are the root user's IDs. So containers are executed by default as the root user, which means root privileges are available inside the container.
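Here's a small sketch of that default-root check as a sequence of shell commands; the /bin:/host/bin volume is only there to mirror what I typed in the demo and isn't required for the whoami test.

# Start a throwaway Ubuntu container (removed on exit)
docker run -v /bin:/host/bin -it --rm ubuntu sh

# Inside the container: confirm we are root by default
whoami        # prints: root
id            # uid=0(root) gid=0(root) groups=0(root)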
However, from a security point of view it is important to configure things so that the container runs as a limited user. While the container engine must run with root privileges, it is not a good practice for the containers themselves to do the same: the container engine in this case is Docker, and the containers are just entities managed by it, so it is really advisable to create a user for each running container. The best way to do that is to indicate, in the Dockerfile, the user that should run the image: you can create users inside the Dockerfile, and I'll show you how, it's quite simple. I'm going to create a new folder on our desktop, since that's most convenient for me, and I'll name it 3-root-user; the name doesn't actually matter, what's inside matters. In that folder I'll create a new file with Sublime Text, save it and name it Dockerfile, and this file will be used to create a new image with a different set of privileges. Let's see how it's done: I write FROM python:latest, so we'll create a new image from the python image; then RUN useradd -s /bin/bash unix_user; then USER unix_user; and finally ENTRYPOINT ["/bin/bash"]. Save the file, go back to the terminal, exit and close our previous container, and cd into this folder; you can see our Dockerfile right here. You can build the new image with the following command: docker image build -t python-image . — and yes, you need the dot at the end, I always forget it, in order to pick up all the files. Run that, and a new image is created; if I now write docker images, you can see the python-image we just built. If you want to run this image, write docker run -ti python-image. Run that, and if I go back to Docker you can see our python-image container currently running; it is the exact same image we just created, and you can even see from the prompt that we're already unix_user. If you write whoami, you see that you're unix_user, not root anymore. So just with this simple addition when creating images, you can make sure that whoever uses your image is not running as root. And if I run id here, you can see that absolutely everything — the user ID, the GID and the groups — belongs to unix_user. You can also see that the user was added inside the container by inspecting the file /etc/passwd: check its content by simply writing more /etc/passwd, and if you search through the users you'll find root first, and then, down towards the bottom, your unix_user, since we created it last, with /bin/bash as its shell. So this is how, guys, you can make sure your containers don't run as the root user without any issues.
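For reference, a minimal sketch of that Dockerfile and the build/run commands; the image name python-image and the user name unix_user are just the ones I picked for the demo.

# Dockerfile
FROM python:latest
RUN useradd -s /bin/bash unix_user   # create a regular user with bash as its shell
USER unix_user                       # everything from here on runs as that user
ENTRYPOINT ["/bin/bash"]

# Build and run it, then confirm you are no longer root
# docker image build -t python-image .
# docker run -ti python-image
# whoami   -> unix_user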
Another good practice recommended on Linux systems is the principle of minimum privileges: flags such as --read-only can be applied to a container, and limiting write access to the file system can really prevent a potential attacker from writing and executing scripts inside your container. We can use the docker run command with the --read-only flag for this. So how do we do that? Let's first exit our container — you can see nothing is running in our Docker application — and then do docker run -it --read-only python sh. Run that, and you can see here in Docker that we're running a python container. Now, for example, if you try to touch a file, you'll see we only have read permissions: if I write touch file, as you can see, we get an error — we cannot create the file because we have a read-only file system. So this is quite a good practice, but it also has disadvantages, and the main one is that most applications need to write files and dependencies, for example to the /tmp folder, and that won't work in a read-only environment. In those cases, for the folders and files the application does need to write to, you can use volumes and mount only those paths. That type of volume provides a place for persistent changes inside the container, if the container really needs to write to the file system; if you're working this way, it's recommended to use Docker volumes only for such writable or temporary paths. A volume, as you can see here, is actually a directory that lives outside the root file system of the container; it is managed directly by the Docker daemon process and can be shared between containers. So, for example, you can create a MySQL container and configure it as read-only, with the exception of a few folders it needs to write to, and we'll try exactly that. If I want to create a MySQL container, since this container needs a lot of dependencies, it will be hard to create it in read-only mode without volumes. Let's see what happens if we try: docker run --name mysql --read-only mysql, which should create a container called mysql from the mysql image with a read-only file system. You can see some logs here in front of us: we started the container, and so far so good, but at some point, when the entrypoint script tried to write files, it needed write access, and since the file system is read-only it couldn't. For that reason, if I go here to the mysql container and try to start it again, you can see it never stays running, because this container depends on writing files; you can even see that in the logs — if I go to "view more details" you'll see "read-only file system" errors, which is why it couldn't set up its dependencies. So how do we fix that?
Well, firstly I can see from the logs which folders it needs access to, so before specifying which image I want to use, I'll add -v, which creates a volume, and I'll add that folder, /usr/local/bin — I'll copy the path and paste it right here. I'll also create some other volumes: a volume for the MySQL data directory where the dependencies live, /var/lib/mysql, another volume for the /tmp folder, and another volume for /var/run/mysqld. Let's try running the command as it is, and you'll see we get one specific error that I want to talk about. First, of course, make sure you remove the previous mysql container so you can create another one with the same name. Run it, and you can see we got another error, which says the database is uninitialized and a password option is not specified. When you start MySQL like this, you have to provide a password, and you have a few options for that; I'll use one of them: I'll add -d and -e and set the MYSQL_ROOT_PASSWORD variable, and let's set it to "password", because here we don't care, but if you're creating this for a real system you would of course want something more complicated. Run that — and yes, we need to remove the previous container first. Before running it again I'll actually remove one of our volumes, /usr/local/bin, because it turned out not to be needed alongside the other volumes I created, so dropping it shouldn't cause any problems. This will be the final command I use; one typo I noticed is that I had written mysqlid, while the right folder is /var/run/mysqld, so fix that and then you can run the command as it is. Again, what we're doing here is simply creating the container with a read-only root file system while providing certain volumes that it can read and write, so MySQL can set up all its dependencies inside those volumes, and we were also required to provide a root password when creating it. Run that, and you can see a process has been created; let's go back and check whether something is running in Docker, and you'll see that MySQL is now running, so we created it successfully, and if I go to the logs there are no errors. So this is how you can start a container in read-only mode and mount writable volumes for it. The final security feature I want to share with you here is how to disable the setuid and setgid bits, because if those are set, a binary can run with permissions that reach into directories and files of the operating system beyond the user's own; if you don't disable them, even users without root permissions may be able to escalate and modify things they shouldn't, which from a security point of view is definitely not optimal. So the best practice is to disable the setuid permissions, ideally from the Dockerfile. There are two ways to do it, and one is from the Dockerfile: for example, if I open our Dockerfile from the previous project, where the RUN statement is, you can simply add RUN find / -perm +6000 -type f -exec chmod a-s {} \; || true — this command searches the file system for executables and strips any setuid or setgid permission bits from them.
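As a reference, here's roughly what the final MySQL command looks like, plus the Dockerfile line for stripping setuid/setgid bits; the password value and the exact volume list are just what I used in the demo, so adapt them to your image.

# Read-only MySQL container with writable volumes for the paths it has to write to
docker run -d --name mysql --read-only \
  -v /var/lib/mysql -v /tmp -v /var/run/mysqld \
  -e MYSQL_ROOT_PASSWORD=password \
  mysql

# Dockerfile line that strips setuid/setgid bits from all executables in the image
# RUN find / -perm +6000 -type f -exec chmod a-s {} \; || true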
If that sounds a little complex to you, you can of course use that command when building the image, but you can also disable setuid and setgid at container creation time. Maybe you have an image that doesn't have them disabled, but if you want your particular container to drop them, you can do docker run -d --cap-drop SETGID --cap-drop SETUID and then the image you want to run, for example python. What this does is simply create a python container and drop — we have --cap-drop here, and we'll talk about capability drops in more detail in the next videos — the SETGID and SETUID capabilities. If I now hit enter, this creates a new python container with those capabilities removed. That said, thank you very much for watching this video, guys, and in the next one we'll talk about Docker capabilities in depth. Thanks for watching. Hi guys and welcome back to the course. In this video we'll talk about Docker capabilities. Capabilities allow us to manage the permissions a process can use to access the kernel, and to segregate root privileges so we can limit the actions that can be performed with them. We already covered that when we execute a Docker container it runs as the root user, and as we discussed, this is not a good idea, especially for services that receive requests from users or other sources. It's important to note that a container doesn't have all of the root privileges of the Docker host, even if you run the container as root; this is because Docker containers run with a limited set of capabilities by default. Here are some of the key capabilities a Docker container can have. The first one is CAP_SYSLOG, which controls the behaviour of the kernel log; then CAP_NET_ADMIN, used to modify the network configuration; then CAP_SYS_MODULE for managing kernel modules; CAP_SYS_RAWIO for raw I/O access to kernel memory; CAP_SYS_NICE for modifying the priorities of processes; CAP_SYS_TIME for modifying the system clock; CAP_SYS_TTY_CONFIG for configuring TTY devices; and finally CAP_AUDIT_CONTROL for configuring the audit subsystem. Because of their granularity, capabilities are a very useful method for executing privileged tasks with minimum permissions; that's why they're used in virtualization environments like Linux and Docker containers, where they play a fundamental role in managing the security context. The main advantage is avoiding granting a process elevated privileges when it actually needs only certain permissions for a specific operation. Here you can see the commands provided by the libcap package for listing and managing capabilities; you've already seen the capabilities themselves, and these are the commands that manage them. The first one is getcap, which lists the capabilities of a particular file; then setcap, which assigns and deletes capabilities of a file; then getpcaps, which shows you the capabilities of a particular process; and then capsh, which provides a command-line interface for testing and exploring capabilities.
You can check all the capabilities by starting a container, connecting to a shell and listing them, so let's see how. Back in our terminal, let's create an Ubuntu container and install the libcap package: docker run -it ubuntu, then apt update, which fetches the package lists, and then apt install -y libcap2-bin. Run that, and you can see everything is installed successfully and we have the newest version. If we now do grep Cap /proc/$BASHPID/status — grep is used here to pick out the capability fields from the process status file — you can see all the different capability sets that are active inside the Docker container. Let's exit this container. The next thing I want to cover is how Docker lets you add or remove Linux capabilities for individual containers: you apply those with --cap-add and --cap-drop. Obviously, --cap-add adds a capability and --cap-drop drops one, it makes sense, right? So let's create a container and see how to add and drop some capabilities. If I do docker run --cap-add=ALL --cap-drop=CHOWN -ti ubuntu sh and run it, this creates an Ubuntu container, and if I check our Docker containers you'll see the Ubuntu container we just started. Inside it, if you do useradd test, you'll see that you cannot add the user — there's a failure while writing the changes, because part of that operation relies on the capability we removed — and if I do, for example, chown test /usr/share, you get a response that the operation is not permitted, simply because we dropped the CHOWN capability from this container. So when executing this command we see that changing the ownership (chown) of that path fails immediately with "operation not permitted": we don't have permission to change the owner of the file even as root, because when we created the container we explicitly disabled the CHOWN capability. All Docker containers start with a somewhat reduced capability set, but you still get quite a few capabilities by default when you create a container, such as CHOWN, DAC_OVERRIDE, NET_RAW, NET_BIND_SERVICE and so on, so you can really play with those capabilities and enable or disable any of them; you can of course remove all of them by writing --cap-drop for each one, and you can try that yourself too. I also recommend always dropping the SETUID and SETGID capabilities, as we did in the previous video, so you can be sure that no switching of user IDs can be done. If we go back to the terminal and try, say, to disable SETGID and SETUID, we can do it like this: docker run -it --cap-drop SETGID --cap-drop SETUID python sh. This simply creates a new python container in Docker with the SETGID and SETUID functionality disabled. Run that, and now we're in the shell and our python container is running.
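Here's a compact sketch of the capability experiments from this video; the test paths are just examples, and the exact failure messages can differ slightly between distributions.

# Inspect the capability sets of the current shell inside a container
docker run -it ubuntu bash
apt update && apt install -y libcap2-bin
grep Cap /proc/$BASHPID/status      # CapPrm / CapEff / CapBnd bitmasks

# Start a container with every capability except CHOWN
docker run --cap-add=ALL --cap-drop=CHOWN -ti ubuntu sh
chown nobody /usr/share             # fails: Operation not permitted

# Start a container with SETUID and SETGID dropped
docker run -it --cap-drop SETGID --cap-drop SETUID python sh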
And you can check whether the SETUID and SETGID capabilities are really gone by writing cat /proc/self/status; if I scroll to the top of this output, the capability bitmasks show that the process no longer has permission to change its UID or GID, which is exactly what we aimed for. So this is a very good way to manage your capabilities, using the --cap-drop and --cap-add flags; it will really improve your security and give you better control over the containers you're creating in Docker. That was everything I wanted to share in this video, thank you for watching, and in the next one we will finally start with Docker Content Trust. Hi everyone and welcome back to the course. Here we'll talk about Docker Content Trust and the public registry, and of course we'll start with what Docker Content Trust is. DCT, or Docker Content Trust, is a mechanism that allows developers to sign their content, providing a reliable distribution mechanism. When a user downloads an image from a repository, which is an external entity, this mechanism allows the client to check the image signature, receiving a certificate that includes the public key that lets you verify the image's origin. This option is disabled by default, and you normally need to define the DOCKER_CONTENT_TRUST environment variable in order to enable the Docker Engine trust capability; we'll do that simply by using the export command in Linux, and you just need to set it to 1. Content trust actually protects against some very specific attacks, including the injection of malicious code into images: for example, this mechanism protects you if an attacker modifies an official image to add code that might infect your computer. It also helps protect you against replay attacks, because Docker Content Trust maintains the integrity of the image through the use of timestamps, and finally it offers protection against key compromise: if a key is compromised we can create a new key and publish a new version of the image signed with it. We normally verify a Docker image using the docker trust command, so let's see how this is done from the terminal. Here I am in the local terminal of my device, and the first thing you can do is enable Docker Content Trust by setting this variable: export DOCKER_CONTENT_TRUST=1. That's how you enable it, and if you want to disable it you simply set the same variable to 0. Now I want to show you how to verify image signatures, and for that I'll use the python image we have. Let's first do docker images to see everything installed on your device, and you can see our python image right here, the latest tag. To verify its signature, do docker trust inspect --pretty python:latest. Hit return, and you can see all the signatures for python:latest. You might see some output that doesn't make much sense at first, but essentially, when an image is downloaded, the Docker client works with a string representing the image digest — a hash, a bit like a commit ID for that exact image content — and using this digest you can refer directly to the particular version you have. This digest is what the image is validated against when you pull it from the repository: for example, if you have this digest locally but the image you download has a different one, you get a verification error because the two don't match. So by enabling Docker Content Trust, what your system does when you try to download images is compare these signatures and digests, and deny any image that doesn't match what was signed.
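A quick sketch of the commands involved; python:latest is just the image I happened to have locally.

# Enable Docker Content Trust for this shell (set to 0 to disable again)
export DOCKER_CONTENT_TRUST=1

# Show the signature data and signed digests for an image
docker trust inspect --pretty python:latest

# With DCT enabled, docker pull will refuse unsigned or tampered tags
docker pull python:latest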
So let's look at a tool that allows you to securely publish and manage signed images. Some of Notary's objectives are to improve confidence in the images we download, whether from public or private repositories, and to delegate trust between users. Here I'll show you how to download this application, which has a server part and a client part: the client part is installed on the local machine and handles the storage of the keys locally, and at the same time it handles the communication with the Notary server. Let's see how this application can be installed and run on our devices. Download the zip file of the project and unzip it on the desktop; double-click and it unzips automatically, and now, as you can see, you have your notary-master folder. You can see a few files here, but the one that drives the whole setup is the docker-compose.yml file; if I open it you'll see how the server and the signer are built — both of them have their own Dockerfile from which their images are created. So let's run this and the application should start. Back in the terminal, cd into notary-master — you can see all the files here — and let's do docker-compose build; when I run that it builds the images, and it might take about two to five minutes. After that let's do docker-compose up -d, and this starts our server and signer. Everything is created and running, and if I go to Docker you can see the Notary application currently running, and you can check the logs from here. You can see on the side that even though this shows up as one application, it actually includes three containers: the Notary server, the signer and a SQL database, so this is a complete application with a server, a signer and a database that stores the trust data. Now, if you do export DOCKER_CONTENT_TRUST=1, this sets the content trust variable that turns signature verification on, and if you then do export DOCKER_CONTENT_TRUST_SERVER you can point the client at the particular Notary server you want to use, for example https://notaryserver:4443. So the first environment variable allows you to enable and disable Docker Content Trust verification — if enabled, the integrity of the image is verified against the Notary server indicated by the second variable — and the second one lets you define the URL where the Notary server is located, so you can check that image verification is done against that server.
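Here's a rough sketch of that setup; the notary-master folder name comes from the zip download, and https://notaryserver:4443 is the address the Notary sandbox compose file normally exposes, so double-check it against your own docker-compose.yml.

# Build and start the Notary sandbox (server, signer and database)
cd notary-master
docker-compose build
docker-compose up -d

# Point the Docker client at the local Notary server and enable verification
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://notaryserver:4443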
So this is how, guys, you can set up and manage content trust for a real application. Thanks for watching, and in the next video we'll talk about the Docker registry. Hi everyone and welcome to this lesson, where we'll talk about the Docker registry. The Docker registry pretty much provides a software distribution mechanism, known simply as the Docker registry; it facilitates the discovery and distribution of Docker images. The concept of a registry is fundamental, and it provides a complete set of utilities to package, push, store and discover images for Docker. I've already shown you the most famous registry for Docker, Docker Hub, and a registry is one of the key pieces of our Docker environments from the moment we start creating our own images: having images in a registry saves a lot of bandwidth and gives us better access and download times for the images we want. The idea of the Docker registry is that developers can pull images from it to create containers and deploy them, either in a public cloud or in the organization's private one. The Docker registry works a lot like git in this respect: every image, also known as a repository, is built from layers, and every time we build our image locally the registry only stores the difference from the previous version, not the entire image, which makes the creation and distribution process much more efficient. As I said, Docker Hub is the most famous registry for Docker, and if we search here for, say, python, you can easily find images for Python and related packages. However, the main problem with the official Docker Hub repository is that it has limits on the number of images we can upload and download, meaning the number of pulls and pushes is rate-limited over a given period of time. To avoid this you can of course set up your own Docker registry for your organization, which makes a lot of sense, especially when users are building constantly and have a CI/CD system that depends on a registry for pulling images. There are of course other registries, such as Quay or Harbor, which let you open an account and push images as a registered user; another option is GitLab, especially if you manage your images directly from git; and you can also use cloud registries such as Amazon AWS and so on. The idea of this video is to teach you how to actually create a Docker registry and manage it, and this is possible because the Docker registry is an open source project: it can be installed on any server to create your own registry, so you can host your images privately and keep full control over what is stored on your own server. You can deploy a Docker registry on your own server in several ways and distribute your own Docker images; for example, for Linux distributions that include a Docker registry package, you can just install the package and start the service. So let's go back to our terminal and see how to create a fully functional registry. First, you need to start a registry container that listens on a particular port: let's do docker run -d -p, and here is where we define our ports, mapping 5001 to 5000, then --restart=always, then --name registry, and finally the image, registry:2. When we run this command, our registry container is created: I hit enter, a new process is added, and if you check in Docker you can see the registry container listening on port 5001. Now let's pull an image from Docker Hub, as we discussed, and push it to our own registry. I decided to pull ubuntu:16.04, so do docker pull ubuntu:16.04, and this gets you the image.
Now, if you don't have it installed already it will take a bit more time, but since I already have it, it just gets refreshed. Once you're done with that, the second step is to tag the image; this creates an additional tag for the existing image, and when the first part of the tag is a hostname and a port, Docker interprets it as the location of a registry. For that reason you do docker tag ubuntu:16.04 localhost:5001/my-ubuntu. This creates a new tag, as you can see, and if I now do docker images you'll find the localhost:5001/my-ubuntu image I just created. Once the tag exists, you can actually push the image into our local registry: docker push localhost:5001/my-ubuntu. Hit enter, and this pushes the image into the local registry I created here. Once this is done, we can remove the locally cached Ubuntu images, because they're already in our registry. If you do docker image remove ubuntu:16.04, this removes the original image, and after that let's also remove the tagged one you saw right here — docker image remove again; I'll just copy the tag so we don't waste time: list the images with docker images, copy it and paste it here. Now that's done, and if I do docker images, these images no longer exist in our local image store. Our image lives in our registry, and the easiest way to get it back is by simply writing docker pull localhost:5001/my-ubuntu. When you hit enter you can see that we're pulling our image from the registry, and it is installed on our device again; if I now do docker images, we have our image back, downloaded from our local registry. It works exactly the same way, guys, if you want to push your images to either a local or a public registry; the only difference is that instead of localhost you use the host of the actual registry, so it's really simple. Finally, now that we've pulled our image back, let's delete the registry, but first let's stop it: simply do docker container stop registry — that's pretty much how you stop any container. Now the container is stopped, and you can see the registry is no longer running; then run docker container stop registry once more, just in case it wasn't stopped properly, and docker container rm -v registry, and this completely removes our container, as you can see — we no longer have our registry. So this is the complete lifecycle: creating a registry, pushing an image there (you can push multiple images if you wish), and then deleting the registry; and as I said, it's exactly the same with a public registry, with the only difference that a public registry is already there, so you don't need to create it.
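For reference, the whole lifecycle as a compact sketch; the 5001:5000 port mapping and the my-ubuntu name are just the values from the demo.

# Start a local registry listening on port 5001
docker run -d -p 5001:5000 --restart=always --name registry registry:2

# Pull an image, tag it for the local registry, and push it
docker pull ubuntu:16.04
docker tag ubuntu:16.04 localhost:5001/my-ubuntu
docker push localhost:5001/my-ubuntu

# Remove the local copies, then pull the image back from the registry
docker image remove ubuntu:16.04
docker image remove localhost:5001/my-ubuntu
docker pull localhost:5001/my-ubuntu

# Stop and delete the registry container when you're done
docker container stop registry
docker container rm -v registry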
So that's it, guys, thank you very much for watching this video; it's the last one of the Docker security section, and in the next one we'll dig a little deeper and talk about Docker host security. Thanks for watching. Hello everyone, in this video we'll talk about daemon security. Analyzing the security of the Docker host is very important, because most attacks take advantage of kernel vulnerabilities or happen because some package hasn't been updated, so here I'll review some tools for auditing the security of the Docker host. We're going to cover topics like the Docker daemon and the AppArmor and seccomp profiles, which provide kernel-level enforcement features to limit system calls; we'll also talk about tools like Docker Bench for Security, which checks your environment against the best security practices for Docker, and I'll share some of the most important recommendations to follow when auditing and deploying Docker in production. Probably the most important element of the Docker architecture is the Docker daemon: this process handles communication between the client, the containers and the network traffic, and the client-daemon traffic can be protected by HTTPS, which is HTTP plus a security layer. Docker works primarily as a client that communicates with a daemon process called dockerd; the process has root privileges, and its socket is located at the docker.sock file, so it's important to note that exposing the Docker socket can result in privilege escalation. You should always check which users have access to that socket: only the root user should have write permissions, so that other users cannot compromise a container or the host. Let's go to the terminal and try to create a container that can talk to the Docker daemon on the host, by mounting docker.sock as a volume. Simply do docker run -it -v /var/run/docker.sock:/var/run/docker.sock debian /bin/bash; let's remove the extra space here and run the command. Once we run that, a new container has been created and we're inside it; if I go to the Docker application you can see we created a Debian container and it's running properly. Once inside, we can write whoami and see that we're root, and if we start another /bin/bash and do whoami again, we're still root — so anything with access to that socket effectively talks to the daemon as root. The second demonstration mounts the host's file system itself: let me exit this container, clear the screen and run docker run -it -v /:/host debian /bin/bash, where I'm mounting the host's root file system at /host. Now if I do chroot /host, you can see we're effectively operating on the host's file system from inside the container, and if I run /bin/bash again and do whoami, we're root there too. So in that way we verified that we have root access to the host from the container process.
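A minimal sketch of those two experiments; they're meant to show why you should never hand the Docker socket or the host's root file system to an untrusted container.

# Experiment 1: give a container access to the Docker daemon socket
docker run -it -v /var/run/docker.sock:/var/run/docker.sock debian /bin/bash
whoami                      # root inside the container, with full control of the daemon

# Experiment 2: mount the host's root file system and chroot into it
docker run -it -v /:/host debian /bin/bash
chroot /host
whoami                      # root, now operating on the host's file system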
So as you saw, the Docker daemon runs with root permissions, which is why it's important to limit the users who have control over it. I can give you a series of recommendations on how to configure access to the directories and files used by the Docker daemon: in front of you is a table with the default permissions of each file that is part of the Docker daemon setup, where r stands for read and w for write. If you're not familiar with permissions and how they're assigned, definitely check them out, they're an important part of every Linux distribution. What you can see in this table is that the group with access to those files and directories is root, with the listed permissions, and you can see the permissions are actually quite restrictive. At this point we've reviewed the Docker daemon security and the default permissions of each file this process uses: those files represent different services, and if you ever work with them you can use this table as a reference for what permissions to expect. Now let's talk about how to audit files and directories related to the Docker daemon. The Docker daemon runs with root privileges, so all its directories and files should be consistently audited so we know all the activities and operations that are happening; we can use the Linux audit daemon framework to audit all events that take place on the Docker host. The auditd framework has some very nice features: it can audit process and file modifications, monitor system calls dynamically, help detect intrusions, and record commands per user. The Linux audit daemon is normally configured using just two simple files: auditd.conf and the audit rules. The daemon configuration itself is auditd.conf — this file configures the Linux audit daemon and specifies where and how events should be logged; it also defines how to behave when, for example, the disk is full, or how many logs to keep. You normally don't need to modify this file, because the default settings are appropriate for most systems. The audit rules file, on the other side, configures which events should be captured. So let's go to the terminal, guys, and see how to use the audit tools. You do that with a command called auditctl — I already have it installed, but if you don't, you can simply write apt-get install auditd, and that installs the tool; as I already have it, my installation is quick, but for you it could take about a minute or less. To tell the audit framework which directory or file we want to watch, we use the path option: for example, to watch /etc/passwd, do auditctl -a exit,always -F path=/etc/passwd -F perm=wa — and make sure there's no space around the comma in exit,always. When I hit enter, it tells me the rule already exists, because I've added it before, but for you, writing the command for the first time, the rule will be created.
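A short sketch of the auditd steps, assuming a Debian/Ubuntu host; /etc/passwd is just the example file we're watching in the demo, and ausearch is an extra command (shipped with auditd) that I'm adding here for convenience.

# Install the Linux audit daemon and its tools
apt-get install -y auditd

# Watch /etc/passwd for writes and attribute changes
auditctl -a exit,always -F path=/etc/passwd -F perm=wa

# List the active rules, then inspect the generated events later
auditctl -l
ausearch -f /etc/passwd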
Now let's go to the rules file: do cd /etc/audit/rules.d, then ls, and you can see there's a file called audit.rules. Open this file with your favourite editor — I'll use vim audit.rules — and you can see it has a few entries; these are the rules that tell the audit daemon which directories to audit. You can copy and paste additional rules right here — I'll attach the ones I use in the description of this section so you can reuse them — and write them into this file. Save the file and then simply run sudo service auditd restart. If we now go to cd /var/log/audit, you'll find a file called audit.log; check it out with more audit.log, and you can see a huge log of the auditing that has been happening on your Docker host. You don't need to understand everything inside this log, and I'm not going to walk through all the parameters here, but the whole idea of this lesson is that you know how to generate audit logs for particular files by adding rules, and that you know where the generated log lives. So this is how, guys, you can audit files using the auditctl tool. Thanks for watching this video, and in the next one we're going to talk about AppArmor, which is a very useful tool for assigning each running process on your system to a security profile, where we can control things like file system access, network capabilities and execution rules. That's it, thank you very much for watching, and I'll see you in the next video. Hi everyone and welcome back to the course. In this video we'll talk about the AppArmor and seccomp profiles; both of them provide kernel enforcement features to limit system calls. Let's first talk about AppArmor, which is the profile we'll work with most in this section. AppArmor enables the administrator to assign each running process to a security profile and to define file system access, network capabilities and specific execution rules; it basically protects against external and internal threats by letting the system administrator associate a security profile with a specific application, with the goal of restricting that application's capabilities depending on the use case. AppArmor is normally enabled by default on Debian-based distributions, so let's go to the terminal and check whether we already have it. I'm on my Linux VM with a terminal open, and the first thing I'll do is make sure I'm the root user, because right now I'm logged in with my own username: just write sudo su, enter your password and go to the home directory; if you now write whoami, you're root. Let's get all the Docker information we have by writing docker info, and if I scroll down to Security Options you can see that we currently have both apparmor and seccomp enabled. You can also pipe docker info into a search, docker info | grep -i apparmor, and you can see AppArmor listed as one of the security options. And if you want to find the AppArmor profile directory, it is /etc/apparmor.d.
If I run ls in that directory, you can see all the files related to the AppArmor installation. If you want to install the additional AppArmor profile packages, let's first go back to our home directory, and then you can run sudo apt-get install apparmor-profiles - this will pretty much get you all the profiles and packages related to AppArmor for Linux. There are a few main files and directories that you should pay attention to: the first one is /etc/apparmor, which contains the files used to configure the daemon, and then you have /etc/apparmor.d, which is the folder that contains the rule-set files that limit an application's access to the rest of the system. You can of course reconfigure AppArmor, but for the purpose of this video the default configuration will be completely fine. From a security point of view, AppArmor proactively protects the operating system and applications against external and internal threats by applying a specific set of rules to each specific application. When you install Docker it normally generates and loads a default profile for containers called docker-default - in simple words, Docker generates this profile and loads it into the Linux kernel. Basically, when you create a container it will use the docker-default policy unless you override it with a specific security option. For example, if I write docker run --rm -it --security-opt apparmor=docker-default hello-world, this will create the hello-world container with AppArmor as a security option. You can check the status of AppArmor on the Docker host and determine whether Docker containers are running an AppArmor profile or not by writing apparmor_status. You can see right here that we currently have 82 profiles loaded and 62 profiles in enforce mode, and if you scroll down you can see the docker-default profile, which is the one assigning the security options to our Docker containers - note that it is listed in enforce mode as well. So, as you can see, by simply running with docker-default you're already adding valuable security options to your containers, and you pretty much don't need to configure anything - you can just use the security option right here and define apparmor=docker-default, which is pretty simple. If you want to apply AppArmor to your containers, you can definitely use this option in order to add a level of protection. So this is how you can configure AppArmor - don't forget to use the apparmor_status command so you can always check the status and the profiles of your processes.
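To recap the AppArmor commands used above in one short sketch (package names again assume a Debian/Ubuntu host):

# optional extra profiles shipped by the distribution
sudo apt-get install apparmor-profiles

# see which profiles are loaded and whether they are in enforce or complain mode
sudo apparmor_status

# run a container explicitly pinned to the docker-default profile
docker run --rm -it --security-opt apparmor=docker-default hello-world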
In this video I also want to give you a brief introduction to the seccomp profile, which is a different security profile. Each process executing on the operating system has the option of interacting with the kernel through system calls - the process can ask the kernel to perform some task such as modifying a file, creating a new process, changing the permissions of a directory, or using an API through which the kernel gives access to its services. Many of the system calls are accessible to absolutely every process in user space, but a very big part of them are never used during the entire life of the process. At this point, seccomp is a tool that allows you to limit the exposure of the kernel to the system calls made by a specific application. When you combine seccomp with other tools like namespaces, we can design a relatively secure application. seccomp is pretty much a sandboxing facility for the Linux kernel that acts like a firewall for the system calls, or syscalls. It uses Berkeley Packet Filter (BPF) rules to filter the syscalls and to control how they're handled, and those filters limit the container's access to the Docker host's Linux kernel, especially for simple container applications. This option is normally enabled automatically for your Docker applications - we saw it when we were checking the terminal, it was one of the available security options alongside AppArmor. So, since it is enabled automatically in most cases, you can be sure that seccomp is applied to your project, and even though you don't need to interact with it, it is good to know that it is there and that it is protecting your containers. That's it, thank you very much for watching this video, and I will see you in the next one where we are going to talk about Docker Bench Security. Thanks for watching.

Docker Bench Security is a very, very useful tool to test the security of your Docker containers. The main purpose of Docker Bench Security is to perform checks against the containers and generate a report that tells you whether your containers are potentially insecure. It mainly focuses on best practices in areas like file permissions and registry settings. Docker Bench Security is basically a shell script that looks for common best-practice patterns around running Docker containers in production - it is pretty much a set of bash scripts which must be run as the root user on any machine where Docker is installed, and the tool produces a report for every system that it checks. From a Docker host and Docker daemon point of view, this is the best tool you can use to check those best practices. Here are some of the components which this tool tests: using Docker Bench Security we will test the host configuration, the Docker daemon configuration, the configuration files of the daemon, the images, containers and build files, the container runtime, and the Docker security operations - so you can see how many things you can check with just this single tool. To run Docker Bench Security you first need to clone its repository, so you can simply do git clone https://github.com/docker/docker-bench-security.git, run that, and this will copy the whole repository. Now, once you have it, if I do ls you can see that we have docker-bench-security right here, so you can do cd docker-bench-security and then run the shell script - since I'm the root user I don't even need sudo - with sh docker-bench-security.sh, and you can see that this runs a complete diagnostic of your whole system. What we just saw is how Docker Bench Security executes a highly privileged container and runs a set of tests against all the containers on the Docker host. Here are some of the configuration checks that Docker Bench Security performs. The first one is the host configuration - this section checks the security of the Docker host. Then we have the Docker daemon configuration files, and in this section we'll see information about the configuration files used by the Docker daemon - this ranges from permissions to reports and properties.
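For reference, the whole Docker Bench Security run shown above boils down to a few commands; the repository is the official docker/docker-bench-security project and the script should be run as root:

# clone the benchmark scripts and run the full audit
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh    # the report is grouped into numbered sections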
Sometimes those areas may contain information that you don't want others to see in plain-text format. The next one is the Docker daemon configuration, and this section shows information about the Docker daemon configuration; it can detect containers that are running on the same Docker host and check whether they can see each other's network traffic - by default, all containers running on the same Docker host have access to each other's network traffic. So if we go back to the terminal and look at what we just got, you will see that the output has different sections identified by numbers, like chapters in a book. The first section is the host configuration, and wherever you have a warning it should be reviewed at the system configuration level - for example, here in the host configuration we get advice to ensure that a separate partition for containers has been created and that auditing has been configured for the Docker daemon and for its files and directories. So you can see that straight away you're getting advice about what passes and what could be improved on your system; many checks are currently passing, but we have some warnings as well. Then, if I scroll down to section 2, you'll see that here we're checking the Docker daemon configuration - it checks the file permissions related to the Docker daemon, such as the docker.service file, the Docker socket and so on, and it basically verifies that those files can only be accessed with root privileges. You can see that we're getting a warning that we should ensure network traffic is restricted between containers, and a note that Docker can currently be run as a non-root user - which we actually verified by being signed in to my personal account and not to root, so we know this is true - plus some recommendations for user namespace support and so on. Section 3 is the Docker daemon configuration files, and you can see that here actually everything is passing, which is great. After that, in section 4, we're checking the container images and build files, and you can see that for many containers no health check was found, so that might be a good suggestion for improving those images. Here it also checks things such as whether Docker Content Trust is enabled on the Docker host or not - this was the parameter we set up early on, if you remember, the Docker Content Trust that verifies the images on our devices. Then, in section 5, we're checking the container runtime, and you can see that there are no containers currently running - if you run some of your containers, you'll get additional suggestions here as well. Normally, if you want to address the most significant warnings, like the ones here, we can run the containers with limited resources at the memory and CPU level, add read-only permissions, or use a non-root user for a particular container.
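To give you an idea of what those remediations look like as actual flags, here is a rough sketch of a more locked-down nginx start; the specific values, the tmpfs paths and the container name are only illustrative assumptions, not something the report prescribes:

# read-only root filesystem (with tmpfs for the paths nginx needs to write),
# capped CPU shares, a PID limit and an explicit AppArmor profile
docker run -d --name nginx-hardened \
  --read-only --tmpfs /run --tmpfs /var/cache/nginx \
  --cpu-shares=500 \
  --pids-limit=100 \
  --security-opt apparmor=docker-default \
  nginx

Re-running docker-bench-security.sh against a container started this way should clear several of the section 5 warnings you will see in a moment.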
So, for example, if I run an nginx container, you'll see that right here in the container runtime section we get a few changes, because now we have a container running. Let's try that: let's do docker run -i -t nginx - which is the image we'll use - then /bin/bash, so we create it as an interactive container and we'll be able to type commands inside it. Let's hit enter, and you can see that we've run this container and right now we're inside it. I'll open another terminal right here, and in the other tab I will first write sudo su, so I'm the root user, and then docker container ls - you can see that we now have one Docker container currently running. So let's execute the bench security again: ls, cd docker-bench-security, and then sh docker-bench-security.sh. Let's run that, and if I move up to section 5, which is the container runtime, you can see that now we have plenty of findings, and this is because we're now running a container. You can see all the recommendations, and some of them are really related to things we've already studied - for example, here you can see that AppArmor is already enabled, while the SELinux security option is not enabled, so that's a warning; we also didn't start the container with any security options, so that's why we're getting another one, and so on. You can go through the different findings and see what you can improve, or maybe you'll see options that are not enabled but that you'd like to keep that way. So let's go to the other tab and write exit to stop this container, and here, if I do docker container ls, you can see that right now nothing is running. In order to address the warnings shown here, you can add options such as read-only mode for the container, you can allocate specific CPU shares with --cpu-shares=500, you can set --pids-limit=1 when you create the container, and so on; you can also use the detach option. Adding those additional options would really help reduce the number of warnings you have, and the last one, of course, is the security option - you can add additional security options in order to improve the security of your container. So this is how you can really improve the container runtime findings in section 5. Another very good option for auditing your system is a tool called Lynis - this is an open-source audit tool for evaluating any Linux or Unix-based system. If I write lynis right here, you can see that I currently have it installed and I can check out all the commands that I can use; if you don't have it, you can of course install it using apt-get install. So let's try to audit our system: first I'll run one container, let's do the nginx one, so this creates a new container, and here in the other terminal I will run lynis audit system. You can see that Lynis, in this first phase, starts checking our system. As it executes, in green you'll see everything that is positive about your system - whether you have the right packages installed and the right ports running - and in red you'll see things that your system might be missing, together with a warning. You can see that the execution is actually quite long when you run this tool, and that's because it really checks a lot of components of your Linux system.
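As a quick recap before we go through those results, getting Lynis onto the host and running the same audit is just:

# install and run Lynis (Debian/Ubuntu package name assumed)
sudo apt-get install lynis
sudo lynis audit system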
You can see that the scan of our system is now complete: it checks all the configuration related to the boot services and the kernel, it also checks things related to the users, groups and authentication, and finally it checks information related to the shells, the file system, the containers and the security frameworks. Right here you can see that we currently have AppArmor enabled, but, for example, we don't have the two other frameworks. So this is how you can use Lynis to check the configuration of your Docker host, and I hope that with this tool you now have a good number of programs you can use to analyze whether your Docker application, and specifically your Docker host, is properly configured or not. Based on those analyses - whether you run them with Docker Bench Security or with Lynis - you can really see clearly what security options might be needed in order to improve your Docker images and containers. That's it, thank you very much, guys, for watching this section, and in the next one we'll continue with Docker image security. Thanks for watching.

Now let's talk about Docker Hub security scanning. In addition to ensuring that the containers are properly configured from a security point of view, you should also ensure that all image layers are free of known vulnerabilities. As you know, every image can create multiple containers, so even if you create containers from a certain image and your container is secure, you might still have issues if there are bugs in your image, and in order to inspect the images you can use certain tools that scan the images in the Docker repositories. So here in this section we'll review some tools, such as Clair and Anchore, to discover vulnerabilities in container images, by learning static analysis tools that analyze the different layers that compose an image. The result will be the ability to detect vulnerabilities in your container applications before uploading them to production. So let's talk about the Docker Hub repository and security scanning. As you already know, Docker Hub is a repository of Docker images in which any user can create an image, upload it and share it with the community. There are two main types of images within that repository: official images, which are normally maintained by official suppliers such as Apache, nginx, MongoDB, Ubuntu and so on - all the famous providers that are professional and that you've heard about - and, on the other hand, images that are created by users, customized and adapted to the specific needs of a specific project. So let's see how we can actually scan those images. Docker security scanning is a service available for Docker private repositories, and it inspects the container content, which is organized in different layers of packages - those binary packages are inspected layer by layer against a list from the Common Vulnerabilities and Exposures (CVE) databases. A scanning tool's effectiveness depends on two main capabilities: the static analysis depth and integrity, meaning how well the scanner discovers the image's inner layers and the nature of those layers, and the vulnerability feed quality, which indicates the coverage and how often the vulnerability list is updated. So you should always bear in mind that, in order to inspect our images for vulnerabilities, we can refer to those lists of known vulnerabilities and sort of automate the process - use tools that check those lists and databases of vulnerabilities and tell us automatically whether our images are affected. This is the whole idea of image inspection. So let's see what the Docker security scanning process looks like: Docker security scanning is a process - we can actually call it a tool - that integrates directly with the official
Docker Hub repository and allows us to automatically review images found in public and private repositories. This service is available for Docker Hub public and private repositories and in Docker Cloud, and it is normally a paid service in all cases, with some free trials. When a new image is uploaded to Docker Hub or Docker Cloud, a process is launched that extracts the image contents and sends them to the scanning service, which scans the composite layers of packages, analyzing each of those layers against the CVE, or Common Vulnerabilities and Exposures, lists. Finally, once the analysis is completed, we're provided with a result listing the different vulnerabilities that were found and the level of criticality for each vulnerability. A results report is generated, and there you will be able to see how severe the issues in your system are. Normally the level of criticality depends on the score assigned to the CVE by the Common Vulnerability Scoring System, and I will show you right now the standard ranges of that system: first you have the high result, which means that the vulnerability is within the range of 8 and 10, then you have the medium vulnerability, which has a score between 4 and 7.9, and finally there is the lowest-impact vulnerability, which is between 0 and 3.9. This scan process can be easily integrated into continuous integration and continuous delivery workflows, so that scanning is started automatically every time a developer completes a new container. Today most DevOps teams only work on discovering new vulnerabilities with high criticality levels that are part of the CVE database, and the main problem here is that the lower-criticality problems might never even be discovered, yet they can still be reached by potential attackers. Within the Docker ecosystem, the Dockerfile describes the dependencies and what will be installed in the container so that the application can run on it, so when running within a continuous integration environment the image will automatically be built and published to the Docker registry, including the container's software dependencies. So now, once we've stated the levels of criticality and some of the vulnerable components of Docker, let's look at some tools for vulnerability analysis in Docker. These days attackers are becoming more and more advanced, with more complex techniques for finding vulnerabilities in Docker containers and images, so while hackers try to find more sophisticated attack methods and techniques, the cyber-security analysts and researchers should always work to prevent those attacks and to protect the resources at risk. Due to that, DevOps requires the establishment of functional image-scanning and validation mechanisms in order to comprehensively protect the image-creation and container-creation process. Here are some recommendations to guarantee control of the source code and of the deployment to different environments. To do that we need to include tools to automate and organize the source code, and that's why the first rule is source code control: source code control should be a common practice in DevOps security to ensure high quality while contributing unit and integration testing, and you probably know the main tools for doing that - those are, for example, GitHub, GitLab or Bitbucket. Also, projects should always have CI/CD tools, because the development teams use build tools that are an essential part of their automated build process through the
CI/CD tools, and the most famous ones are Bamboo and Jenkins. This really ensures that your code is constantly tested and the versions are up to date. And finally, there is one very nice tool that I'm also using in my current company, and this is JFrog Xray. This is a security tool for container and image analysis: the solution allows you to scan any dependencies for security vulnerabilities and policy compliance. JFrog Xray proactively identifies security vulnerabilities that could potentially impact your environment, and the best thing is that it runs natively with JFrog Artifactory, which is, trust me, one of the most popular artifact tools used by many organizations. So let's look at some tools that can actually perform static vulnerability analysis on Docker images - tools like Clair, Dagda and Anchore. First let's talk about Clair security scanning. Clair is an open-source project for static vulnerability analysis in container-based applications. Normally layers can be shared among many containers, so it's important to create a package inventory and compare it with the known issues, or CVEs. This tool provides a vulnerability analysis service with an API that analyzes each container layer looking for existing vulnerabilities; it can generate a report with the list of known vulnerabilities currently existing in your container - it basically extracts all the layers of the image and notifies the user of the vulnerabilities that were found. Another quite useful tool is Dagda. This is also an open-source tool, but in this case the developers chose Python to perform the static analysis of known vulnerabilities in Docker images and containers; it also helps you monitor running Docker containers and detect anomalies. Dagda retrieves information about the software installed in your Docker image, such as operating system packages, libraries, dependencies and modules, and matches them against a bigger database which stores all the known vulnerabilities - this database is created by collecting vulnerability data (for example from the CVE and Exploit-DB feeds), it is normally built on MongoDB, and it also stores the static analysis scans performed on the Docker images. You can definitely feel free to check out the project on github.com - you can see the link below. You can see right here the basic scheme of Dagda: it supports distributions such as Red Hat, CentOS, Debian, Ubuntu and Alpine, so you can pretty much use it with most of the famous Linux distributions. Internally, Dagda uses the OWASP Dependency-Check to analyze the packages and dependencies for many languages such as Java, Python, Node.js and more. So let's talk a little bit more about the OWASP Dependency-Check: it is an analysis tool that checks the dependencies inside Docker images; it normally scans the pom.xml and manifest files in Java projects, and in case you're using, for example, JavaScript, it would check the package.json file. The next vulnerability tool I want to share with you is Trivy, and it's again an open-source tool that focuses on vulnerabilities in packages at the operating system level, and it can also check dependency files for many different languages. With Trivy you get vulnerability information as metadata, such as the library, the vulnerability ID (the vulnerability identifier), and the severity of the vulnerability, which could be critical, between 9 and 10, high, between 7 and 8.9,
medium, between 4 and 6.9, or low, between 0.1 and 3.9. You'll also get the installed version, the fixed version and, finally, the vulnerability description. So definitely feel free to check out Trivy and apply it to your images - you can get it from the repository that you can see below. That said, thank you very much for watching this video, guys - this was our review of the vulnerability tools, and in the next video we will cover Clair and Quay. Thanks for watching.

Extracting the different layers from an image can also be quite beneficial with tools such as Clair and Quay. Clair, for example, provides a JSON API that extracts the layers of the image, and it can itself be executed as a container; you can integrate it with your continuous integration and continuous delivery process, and in that way it becomes quite helpful for finding all the vulnerabilities inside your images. You can check out Quay in its GitHub repository - you can see the link in front of you; it is open source with more than 1,500 commits. Another useful option for static image analysis is the Quay.io service. This tool is also used to find obsolete and vulnerable packages and binaries; with this service we can see information related to the image scan, including the packages with vulnerabilities that have been detected in each of the layers. You can see in front of you a screenshot of the packages and vulnerabilities detected with Quay - this is a screenshot from the security scanner, and here you can see the vulnerabilities divided into different levels, from high to low; for each vulnerability we can see the CVE number, the level of criticality and the package that has that specific vulnerability. With Quay, each vulnerability is scored against a series of metrics that produce a final score and level of criticality, which you can see in front of you, and this is then plotted on a pie chart so you can get a good visualization of the vulnerability score. Some of the main metrics used are: the access complexity, which measures the complexity of the attack required to exploit the vulnerability once an attacker has accessed the target system; the authentication metric, which measures the strength, or the complexity, of the authentication process - for example, whether the attacker is required to provide credentials before they execute the exploit - so the fewer authentication instances are required, the higher the vulnerability score will be; the confidentiality impact, which concerns the ability to limit access to information only to authorized users as well as preventing access or disclosure to unauthorized users - a greater impact on confidentiality increases the vulnerability score; and, finally, the integrity impact, which measures the impact on integrity of a successfully exploited vulnerability - the greater the impact on integrity, the higher the vulnerability score. So definitely check out Quay.io if you want to get a bit more information about the Quay tool.
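Going back to Trivy for a moment, since it is the easiest of these scanners to try locally, a minimal scan could look like this; it assumes Trivy is already installed, and the image name is just an example:

# scan an image and print every finding
trivy image debian:latest

# limit the report to the most serious issues
trivy image --severity HIGH,CRITICAL debian:latest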
That would be everything for this video, guys - thank you very much for watching, and in the next one we'll talk about how you can analyze Docker images with one very famous tool called Anchore.

So let's see how we can analyze images using the Anchore tool. If you're not aware, Anchore is an open-source tool that inspects, analyzes and certifies Docker images. The analyses are done against a dedicated database, backed by Postgres, that is normally formed from a collection of information about vulnerabilities and security problems; Anchore will also collect some information from feeds such as the Node.js npm and Ruby gem registries, and it uses Postgres as the database to store all the security problems that are reported by the providers. Anchore can pretty much download any image from a Docker-compatible registry, give you the result of the analysis and generate a report with details about the image: a list of artifacts (for example npm, Python, Java packages and so on), a list of operating system packages, a list of image files and the different vulnerabilities that were recorded. So let's have a look at the different components that Anchore is built from - you can deploy those components either to a single container or to a Kubernetes cluster. The first one is the Anchore Engine command line interface, which is provided so you can drive and control the whole solution; it is mainly responsible for interpreting commands and sending them to the Anchore Engine API. The Engine API is the next main component: this service allows the users to control the entire solution, and it is also used to analyze the images, obtain policy evaluations and govern the overall deployment. Then we have the Anchore policy engine, which is responsible for scanning for vulnerabilities in the artifacts found in the image and for evaluating the current policies against that data. After that we have the Anchore Engine analyzer, which is responsible for downloading the images and doing their analysis, and the final component is the Anchore Engine database, which is built around PostgreSQL and contains the tables for all the necessary services, which communicate through API calls. So basically the Anchore Engine allows developers to perform a detailed analysis of images, execute queries, generate reports, define policies and so on. This open-source tool is highly customizable and reusable for different jobs; it allows you to extract packages and components from your Docker images and scan the images for known vulnerabilities. The engine itself is provided as a Docker image, and it can also be integrated with Kubernetes, Docker Swarm and Rancher. So let's go now to the terminal and I will show you how you can run Anchore on your device - it is actually quite straightforward. Normally this engine starts as a set of Docker containers: you will have a Postgres database and a number of other microservices that generate the reports for your images, so this tool will be quite useful if you want to analyze your Docker images. Make sure that Docker is running in the background and installed on your device, and you can check that there are no containers running with docker container ls - you can see that nothing is running. Now let's do the quick installation of Anchore: we'll run curl -O and point it at the Anchore Engine quickstart docs.
What this curl command does is go to that address and download the docker-compose.yaml file that installs Anchore on your device. Let's run it - it will first just print the file - and once you verify that this is the correct file, run it again with the -O option so it is saved locally. After that let's do docker-compose up -d, and this will run all the containers related to the Anchore Engine. Now you can see all of the containers that were started: we have the queue, you have the policy engine that we were talking about, and you have the database with all the vulnerabilities. If you do docker container ls you can see that you now have containers running, and those are all the Anchore containers that you just installed on your Docker machine. If you want to see them graphically you can open the Docker dashboard, and you'll see that you now have a cluster of containers plus the database, which is the Postgres DB. I'm getting some warnings because my machine has an M1 chip, which is normal - if you use a MacBook with an M1 chip you might get the same thing - but you can see that all the containers are running, which means that the API, the analyzer, the queue, the policy engine and the database are up and running. Now, once we have that, let's install the Anchore command line interface so you can control the tool from the terminal: let's do pip install anchorecli - for me I get a message that all the requirements are already satisfied, since I have it installed, but for you the installation will actually begin now. Next let's clone the command line interface repository, so let's do git clone https://github.com/anchore/anchore-cli, and this will clone the repository into the folder that you're currently in. If we do ls you can see that we have the anchore-cli folder, so let's enter that repository, do ls, git status and git branch - we're on the master branch - and then let's do python setup.py install. This will go to the setup.py file that we have right here and install all the dependencies from there. Once this is done you can actually run the command line interface, and with it you can pretty much analyze every single image on your device. Let's write docker-compose exec api anchore-cli, run that, and now you can see all the commands that you can use to analyze your images - I will show you a few of them, but definitely feel free to explore all of them and use them to analyze your images. For example, if you want to check the status of the Anchore Engine you can simply do docker-compose exec api anchore-cli system status, and you can see the status of all your services that are currently running: the analyzer, the queue, the policy engine, the catalog and the API are up and running properly, since all our containers are currently working, and you also get the version of the database and the code version as well. You can also check the feed synchronization of Anchore with the following command: docker-compose exec api anchore-cli system feeds list. Here you can pretty much see the Anchore database of known vulnerabilities depending on the image type, so you can see distributions for Ubuntu, Alpine, Debian, Amazon and so on.
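Put together, the setup steps above look roughly like this; the quickstart URL is the one documented for Anchore Engine at the time of writing and the compose service is assumed to be called api, so treat both as assumptions you may need to adjust:

# download the quickstart compose file and start the Anchore Engine services
curl -O https://engine.anchore.io/docs/quickstart/docker-compose.yaml
docker-compose up -d

# install the CLI, then check engine health and feed synchronization
pip install anchorecli
docker-compose exec api anchore-cli system status
docker-compose exec api anchore-cli system feeds list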
So this is a really easy and quick way to check those vulnerabilities. If you want to see the commands that are available for analyzing a Docker image, you can just do docker-compose exec api anchore-cli image, and here you can see how you can add an image, delete an image, get, import, list images and so on. So let's try to add an image for Anchore to analyze - you can simply use the add command. If I do anchore-cli image add and then the name of an image, for example ubuntu, you can see that it shows up as not analyzed. Let's check which images we currently have installed and take, for example, redis: let's do anchore-cli image add redis, and again we get not analyzed. In order to start analyzing images you first need to add them to the engine properly, so let's do docker-compose exec api anchore-cli image add - and here you can see that we actually need to specify an image after add - so let's add one: docker.io/library/debian:latest. Let's run that, and here you can see that the analysis begins automatically after adding this image to the Anchore Engine, without any user intervention. Let's now write docker-compose exec api anchore-cli image list, and you can see that we now have one image in our image list and it is currently being analyzed - this is the debian image that we just added. Depending on the number of images you're analyzing, they will be processed according to their size and the complexity of the analysis, and if you have multiple images you'll have a queue for analyzing them. If you want to list the packages that are installed inside a particular image, you can do docker-compose exec api anchore-cli image content and then specify the same debian image - I will just go up, copy docker.io/library/debian:latest and paste it here - and you can see that after some time you receive all the content types for that image: the OS packages, the files, Python, Java, binaries, Go and so on. If you want to get, for example, the metadata, you can do docker-compose exec api anchore-cli image metadata and again paste the image we're currently analyzing; if I do that, you can see the manifest, the Docker history and the Dockerfile, confirming that those components are available. Now let's try something else - instead of image, let's do evaluate check - and you can see here the image digest, the status, which is pass, and pretty much some general information about the image, with a result that is successful in this case. A status of pass implies that the image has passed the evaluation of Anchore's default policy; sometimes you also get additional information if there are things to improve in the image - right now, for our image, there is absolutely nothing that Anchore is suggesting we improve, but sometimes under this policy ID you might see additional information about something that is not found, doesn't exist or needs to be changed. Apart from that, Anchore can also evaluate an image based on user-defined policies. A policy, as you can probably imagine, is pretty much a set of rules used to evaluate the container image, and these rules of course include security vulnerability checks, image blacklist and whitelist checks, configuration file content checks and so on.
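Before we look at policies in more detail, here are the image-analysis commands from this demo gathered into one sketch; the debian image is just the example used above:

# register an image with the engine; analysis starts automatically
docker-compose exec api anchore-cli image add docker.io/library/debian:latest

# watch the analysis status, then inspect what was found
docker-compose exec api anchore-cli image list
docker-compose exec api anchore-cli image content docker.io/library/debian:latest    # lists available content types
docker-compose exec api anchore-cli image metadata docker.io/library/debian:latest   # manifest, history, Dockerfile

# evaluate the image against the active (default) policy
docker-compose exec api anchore-cli evaluate check docker.io/library/debian:latest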
Usually Anchore evaluates those images based on separate units, and you know those units as packages - every package can have a number of policies that are executed specifically for that particular package when we're evaluating the whole image. So let's see the Anchore policies by writing docker-compose exec api anchore-cli policy list, and as you can see, only one policy can be active at a time; right now the policy with this ID is activated, and it corresponds to the default Anchore policy. You can get more information about this policy - of course you might want to know more than just the policy ID - so you can do docker-compose exec api anchore-cli policy get and then paste the ID of that policy. Let's do that with Ctrl+V, and in that way you can see the source, the status, when it was created and when it was last updated; if you have a different policy, you can get its information the same way. Another way to check information about a policy is the policy describe option, so let's do policy describe, and here, in this huge list, you can see the different triggers within each policy gate - they are pretty much the evaluators that capture the result of the analysis of a rule. You can see from the descriptions that, for example, the malware gate checks for malware scan findings in the image, you can see what the metadata gate is doing, and so on for all the other gates, or rules. You can extend this command by also specifying a gate - for example, let's take the vulnerabilities gate, so let's do policy describe --gate vulnerabilities - and here you get additional descriptions specifically for the vulnerabilities gate: you can see triggers such as blacklisted IDs and all the different functions and checks that are used when scanning for vulnerabilities. So I hope, guys, that this was really useful for you - this is a really good tool and you can directly apply it to your Docker images to check for vulnerabilities. You can also use it at work, if you're working with Docker and installing images, to do a simple security check on whether your images are secure or not. That's it, thank you very much for watching, and I will see you in the next section.

In this section we'll talk about how you can analyze the vulnerabilities that appear in Docker containers, and more specifically, let's have a look at Docker threats and attacks. From a security point of view it is quite important to have some knowledge about the Docker container threats and system attacks that can impact Docker applications, so here I will show you multiple resources with which we will explore the main Docker container threats, the main vulnerabilities that you can find in Docker images, and some services and tools that are related to Docker. If we go to the page cvedetails.com and open the Docker page, you can see organized tables with all known Docker vulnerabilities by year, plus some nice graphs. Nowadays it is critical to ensure that the images you're running are up to date and do not contain software versions with known vulnerabilities, so this page will pretty much help you find all known Docker vulnerabilities by year, separated into different categories. All the categories can be grouped into the three most common attack sources: first, we can have direct attacks on the kernel, taking
advantage of a vulnerability that has not been patched yet. Then you can have denial-of-service attacks, and here the main problem is that a container might monopolize access to certain resources, such as the CPU and memory, which could ultimately result in a denial of service. And finally, you should be very careful not to use trojanized images - if an attacker gets hold of an image and trojanizes it with malicious code, both the Docker host and the data exposed within the container are at risk. Going back to the page with all known Docker attacks, we can see some very convenient graphs with the vulnerabilities by year, and you can see, for example, that 2022 was the year with the fewest vulnerabilities found; however, if you take a look at 2020, this is where Docker really took a hit and a lot of vulnerabilities were discovered, although they were later fixed. You can also see all the vulnerabilities by type, and the most common ones are code execution, denial of service and bypass. So take a look at this page in your own time - from there you can really discover in detail all the Docker vulnerabilities that have been found, and if you're interested in any other type of tool and want to find out about its vulnerabilities, definitely feel free to refer to that page as well. Containers always share the same kernel and Docker host, so a container can exploit vulnerabilities in the kernel interface to compromise the Docker host. There are some known issues and threats inside containers that you should be aware of: for example, a container is a threat if it tries to download additional malware or scan internal systems for vulnerabilities or confidential data; another way to put your containers at risk is if one of them is forced to consume system resources, because that might block other containers; another threat is called Dirty COW, which exploits the Linux kernel and allows root privilege escalation on a host or container; you can also have attacks on insecure MongoDB and Elasticsearch containers; another very common problem is buffer overflows in specific programming-language libraries, which allow the execution of malicious code - vulnerabilities such as buffer overflows can actually hand control to the attackers and enable their attacks; and finally, you can also get SQL injection attacks that allow attackers to take control of a database container in order to steal data from that database - and especially if you're working for a company providing financial services, this data could be quite valuable for you and for your business. One of the most critical vulnerabilities in Docker was discovered in 2019, and this is CVE-2019-5736. This vulnerability allowed attackers to overwrite the host's runc binary and consequently to gain root access on the host. It was discovered in the utility used to run containers, which is called runc, and it made it possible to obtain root permissions on the host machine; this attack was not blocked by tools such as AppArmor, however it was possible to mitigate this vulnerability by enforcing a specific mode and configuration of the user namespaces.
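Incidentally, if you want to check which runc build your own Docker host ships, so you can confirm you are on a release patched against CVE-2019-5736, a quick, hedged check is:

# the daemon reports the runc version it was built against
docker version            # look for the runc component in the server section
docker info | grep -i runc

# if the runc binary itself is on your PATH
runc --version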
You can see here that we have a number, and this is the score according to the Common Vulnerability Scoring System, CVSS, which we talked about in the previous videos; it is used to measure the criticality of threats, and it is pretty much based on factors like the attack vector, the complexity of the attack, privileges, user interaction, scope and so on. So let's see what the key metrics for obtaining this number are. To obtain the criticality of a vulnerability we can use metrics such as the access vector, which refers to how the vulnerability is exploited: a vulnerability where the vulnerable software is bound to the network stack means the attacker doesn't require local network access, and such a vulnerability is very often called a remotely exploitable vulnerability - an example here could be, let's say, a buffer overflow. Another very important metric is the access complexity, which measures the complexity of the attack required to exploit the vulnerability once the attacker gains access to the targeted system - for example, a buffer overflow in an Internet-facing service would be a vulnerability with low access complexity. Another very important metric is authentication, which analyzes the number of times the attacker needs to authenticate in order to exploit a certain vulnerability - in other words, whether an attacker is required to provide credentials before the vulnerability can be exploited. In general, the fewer authentications that are required, the higher the vulnerability score will be, because the easier it will be for the attacker to access the system. Then we have the metric that measures the impact on confidentiality of the information the attacker gets when exploiting a certain vulnerability: confidentiality refers to limiting access to and disclosure of information to authorized users, as well as preventing access by unauthorized users, so the greater the impact on confidentiality, the higher the vulnerability score will be. And the final metric is the integrity impact, which measures the impact on integrity of a successfully exploited vulnerability: the greater the impact on integrity, the higher the vulnerability score - for example, if an attacker can modify any file on the target system, at that point you would have a very, very high score. So let's look at one example vulnerability that has been discovered around Docker: the Dirty COW privilege escalation vulnerability in the Linux kernel, which allows an existing user without privileges to perform an elevation to administrator privileges. So if somebody logs into your Linux system as just a standard user, they could, on their own, gain access to administrator privileges and perform actions that they are not supposed to perform. COW here refers to copy-on-write, meaning that a standard user can actually modify read-only objects, and by modifying those objects they can gain root privileges. The flaw exploited in Dirty COW is a race condition in the way the kernel's memory subsystem handles copy-on-write while system calls are writing to the same memory address space. Here on this page you can pretty much see all the versions that are vulnerable to the Dirty COW exploit - hopefully versions that nobody is using anymore, but by checking out this vulnerability you can see which versions you should avoid. You can find the Dirty COW repository for Docker right here, and this repository is pretty much responsible for simulating that vulnerability on some Linux versions. That's it, thank you very much for watching, and I will see you
in the next video. So we already talked about analyzing the vulnerabilities in Linux and in Docker containers, but now let's have a look at the vulnerabilities in Docker images. A standard audit process ensures that all containers are based on up-to-date images and that both the hosts and the containers are configured securely. In order to validate this audit process, we should bear in mind that all containers should have as few privileges as possible and should be fully isolated. We should always execute containers with the minimum resources and privileges needed for their execution, so it's important to limit the memory and the CPU as well as the network functions - we should make sure that we only give a container the privileges and the memory it specifically needs to fulfill its purpose. This brings me to the next point, which is that we should also limit the memory and the CPU: limiting the amount of memory available to the container will prevent attackers from consuming all the memory on the host and killing your services, and limiting the use of CPU and network can really prevent attackers from executing denial-of-service attacks. I would always prefer my containers to have a fixed amount of memory allocated to them, rather than pulling as much memory as they need, because imagine that an attacker compromises just a single container and that container is allowed to claim all the memory of your host - they can easily create a denial of service simply by leaking memory in that container and exhausting your entire system. And finally, you should also manage access controls very carefully: Linux security modules such as AppArmor and SELinux should always be used to enforce access controls and limit the system calls. You should always make sure that you're checking your images and packages and keeping them updated to the latest versions; you should use the file system in read-only mode where possible, which will make it way easier to find problems; and, as I said, our images should take up as little space as possible, bearing in mind that the larger an image is, the harder it will be for us to audit it. And once you are sure that you have the latest versions of your images, also make sure that all the tools on your Linux machine, and also the kernel, are updated to the latest versions, so you avoid compatibility issues. We already talked a lot about the different vulnerabilities and their classification, so I wanted to give you some examples of high, medium and low criticality vulnerabilities. Some of the most popular high-criticality vulnerabilities are Shellshock and Heartbleed: Shellshock is a vulnerability that allows the attacker to remotely attach a malicious executable to a variable that is then executed by the Bash interpreter, while Heartbleed is a very critical vulnerability in the OpenSSL cryptographic software that allows the attacker to read information that is normally sent in encrypted form using the SSL and TLS protocols. A mid-range, or medium criticality, vulnerability would be something like the POODLE attack, which allows the attacker to listen in on the communication between two services, and finally, a low-criticality one would be something like a buffer overflow, where the allocated memory can cause an overflow when accessing memory areas that have not been assigned.
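Circling back to the resource-limit recommendations from earlier in this lesson, here is a rough sketch of a container started with a fixed memory budget and a CPU cap; the image and the exact values are only examples:

# cap memory (and disable extra swap) and CPU so a single compromised
# container cannot starve the rest of the host
docker run -d --name limited-redis \
  --memory=256m --memory-swap=256m \
  --cpus=0.5 \
  redis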
So if, for example, you want to get the latest vulnerabilities from the NVD, you can easily use this tool - you can see the address here - you simply need Python 2.7 and then you can run the code from GitHub. If you also want to get, for example, all the vulnerabilities for Red Hat Linux, you can do that at redhat.com/security/security-updates/cve, and here you can see the different vulnerabilities discovered in Red Hat Linux. If you want to explore more vulnerabilities, I will definitely make sure that I attach to this section links to pages with known vulnerabilities in Linux and Docker, since those are the two components that could be critical for exploiting vulnerabilities in your Docker containers or images. That's it, thank you very much for watching this section, and I will see you in the next one.

Hi everyone and welcome to this section, where we will talk about Docker secrets and networking - both of those terms are related, because secrets are essential for Docker networking, especially when we want to have communication between different Docker containers. Secret management enables an organization to consistently enforce security policies and to make sure that only authenticated and authorized people can access its resources, such as applications, platforms, cloud environments and so on. Effective secrets management enables organizations to take those specific secrets out of the code, store them in configuration used, for example, by the CI/CD tools, and have a full audit trail to verify those secrets. So what is a secret, actually? Well, a secret is a specific piece of information that is required in order to authenticate or authorize a user, and it is protected using encryption. With Docker secrets you can manage information that is needed at runtime but that you don't actually want to be stored in the Docker image or in a source repository. Some examples of sensitive information that can be protected by secrets are usernames and passwords, TLS certificates and keys, SSH keys, and database or internal server names. Secrets also provide an abstraction layer between the container and a set of credentials. Consider a scenario where you have separate development, test and production environments for your application: those environments would have different credentials, but by storing the development, test and production credentials under the same secret name, the containers only need to know the name of the secret in order to work in all environments. Docker secrets are provided to the containers and are normally transmitted encrypted to and from the nodes on which they run. Secrets are usually mounted in the file system at /run/secrets/<secret_name>, and this path can then be accessed by the container service. Here we're using something that is called creating a service on Swarm, which means having that piece of configuration available on all hosts that run the service, instead of baking it into an image stored on each host locally or mounting it via network storage. Secrets can contain files, so we can use them easily to manage the configuration of our services, since the information will be available to all the hosts that execute the service's tasks. It is quite common for an image to need credentials or files with information that we don't want to share, and if we pass those elements into the image using commands such as COPY and ADD, they will be visible in the image and anyone who has access will be able to see them. For that reason, it is important that Docker secrets are only available to Swarm services, and not to standalone containers - this means that secrets can be pushed to containers only
when the containers are running as part of a swarm service. Now let's see how we can manage secrets in Docker in practice. Here I have a terminal, and first let's make sure Docker is running: if you're using Linux you can start the Docker service, and if you have Docker Desktop installed on your device you can simply open it and the engine will start, as you can see. So now, once we have Docker started and the environment set up, let's go back to the terminal and write docker secret, and you can see all the commands with which you can maintain the secrets on your device: you can create, inspect, list or remove secrets - pretty straightforward. So let's create a secret: let's do echo my_user, then a pipe - so first we output the value of the secret - and after that docker secret create pg_user - (with a dash at the end). You can see that we get an error, and this is because we first need to start our swarm, so simply do docker swarm init and you will see that the swarm actually starts. Once we have that, you can just repeat the command and your secret will be created. Let's create another one: echo my_secret | docker secret create my_secret - and you will see that we now have another secret. If you now do docker secret ls, you'll see that we created two secrets - you can see the time when they were created, when they were updated, and also their names. Once we have the secrets, you can start a service that uses them - for example, we can create a redis service and grant it access to one of the secrets that we created. If I do docker service create --name redis --secret my_secret redis, what we're simply doing here is creating a service with the name redis, from the redis image, with the secret my_secret, which is the one we just made - and that's how you can attach a secret to your service. If I run that - let me actually remove my previous run of redis first - I'm getting a warning saying that this service already exists, because I already used the same name, but you can see in the background that it is now actually running, and we have the redis service with the specific secret that we added. So this is how, guys, you can create a secret and assign it to a particular service. The last thing I want to share with you is how to actually stop the swarm: you can simply do docker swarm leave, and if that doesn't work, as you can see, you can add --force. Once you do that, you can see that even your container is removed - the swarm stops executing, including all the containers and the secrets - and if I now do docker secret ls you will see that we have no secrets, since the swarm is no longer active. So always make sure that you first activate the swarm with docker swarm init, then you create your secrets, and finally you assign services to those secrets. That said, thank you very much for watching this video, and in the next one we will talk about container networking in Docker. Thanks for watching.
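Before we go further into networking, here is the secrets workflow from the demo above condensed into one sketch; the secret values and the redis service are just the examples used in this lesson:

# secrets require swarm mode
docker swarm init

# create two secrets from stdin (the trailing dash reads the value from the pipe)
echo "my_user"   | docker secret create pg_user -
echo "my_secret" | docker secret create my_secret -
docker secret ls

# attach a secret to a service; inside the container it shows up at /run/secrets/my_secret
docker service create --name redis --secret my_secret redis

# tear everything down (this also removes the node's services and secrets)
docker swarm leave --force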
That said, thank you very much for watching this video, and in the next one we will talk about container networking in Docker. Thanks for watching.

Now let's talk about container networking in Docker. Docker networking is based on the Linux network namespace, and this allows you to create a complete communication stack for the containers running on the Docker host. When we execute a container, or a set of containers from a distributed service, we can use the --net option to choose between several network modes: the default bridge, a different bridge, or no network access at all. Here are some of the values you can use with the --net option. The first one is --net=bridge, which is the default behavior and creates a new network stack for the container on the Docker bridge. The next one is none, which lets you execute the container without any network connection. Then there is the host option, which means the container uses the host's network stack directly. Finally, with the container:<name> option, Docker starts the container reusing the network stack of another, already running container.

So let's have a look at the Docker network commands now. You can access them by writing docker network, and you can see that you can create a network, connect a container to a network, disconnect a container from a network, inspect a network, list the available networks, or remove either the unused networks or a specific network. If I do docker network ls, you can see that we have a few networks currently available; if you want to remove one you would do docker network rm plus that particular network, or you can run docker network prune in order to remove all networks that are not used by at least one container. If I answer yes here and then rerun docker network ls, you can see that we removed docker_gwbridge, since it was not used by at least one of the containers.

If you want to start a container that has no communication with any network, you can use the none keyword. Let's try that: docker run --net none -it --rm with the Debian image. Hit enter, and you can see that we're now inside our Debian distribution; this is how you create a container without any network. Let's exit, and if I run the same container but this time with the latest tag and /bin/bash, you can see that we just started this Debian container with no network assigned. If I open another terminal while this container is running and do docker container ls, I can take the ID of the container and run docker inspect with that container ID, pipe it to grep -i ipaddr, and you can see that when we create a container without a network assigned, it has no IP address whatsoever. This is proof that the option actually worked, and even if we try to connect a network to this Debian container afterwards, it will not work. So let's exit the container.
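As a quick reference, here is a sketch of the different --net modes described above; the container name web and the images used are just illustrative.

```bash
# Default bridge network (this is what you get if you pass nothing)
docker run --rm -it --net bridge alpine ash

# No networking at all: the container only gets a loopback interface
docker run --rm -it --net none debian /bin/bash

# Share the host's network stack directly (no isolation, no port mapping needed)
docker run --rm -it --net host alpine ash

# Reuse the network stack of another, already running container
docker run -d --name web nginx
docker run --rm -it --net container:web alpine ash

# Housekeeping: list networks and remove the ones no container uses
docker network ls
docker network prune
```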
The next thing I want to talk about is bridges, so let's see how we can create a bridge, meaning communication between two containers. We're going to use the Alpine image for this. Let's do docker run -dit --name alpine1 ash; at first this is denied because I also need to add the actual image, which is alpine, so let's run it again, and you can see that we now have a container called alpine1, which is good. Now let's create another one called alpine2, and I will show you how you can connect them. So now we have alpine1 and alpine2, and if I do docker container ls you can see that we have two Alpine containers currently running.

Now let's do docker network inspect bridge; this shows you which containers are connected to the bridge we're using, which, I have to say, is the default one, so by default the containers can already communicate with each other. If you check the current bridge and go down to the Containers section, you'll see the alpine1 container with its endpoint ID and IP address, and on the other side you'll also see alpine2, the other container included in the bridge, with its own IP address as well.

Now let's run docker attach alpine1, so right now you're inside the alpine1 container, and let's display the addresses with ip addr show. You can see that there are a few interfaces: the first one is the loopback device, while the second interface (or actually the third, because we're ignoring the tunnel) has the IP address 172.17.0.2, which is exactly the IP address of alpine1. While you're in alpine1, let's make sure that networking actually works: you know that if you had used the none option you would not be able to reach any network, but since we didn't, we should be able to ping an external web page. Let's run ping -c 2 google.com, and you can see that you can reach Google, which means your network connectivity works. Now let's check whether we can reach the other container: this is the IP address of alpine2, so if the bridge works properly you should be able to ping it. Let's run ping -c 2 followed by that internal IP address, hit enter, and you can see that we also have connectivity to the other container, which means that the two containers can communicate properly. However, if you ping the name alpine2 instead of its IP address, you get "bad address 'alpine2'", so on the default bridge you cannot use the container name.

So our experiment is successful: by default the containers are connected to the same bridge and can communicate with each other as well as with the external network. Let's exit this container and close both containers to end in a clean state: docker container stop alpine1 alpine2, and you can see that alpine1 is stopped and soon alpine2 will be stopped as well. Now let's remove both containers so you don't have them in the backlog: docker container rm alpine1 alpine2.
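Here is the same default-bridge experiment condensed into a short sketch; the second ping target (172.17.0.3) is only an example, so substitute whatever address docker network inspect reports for alpine2.

```bash
# Two containers on the default bridge
docker run -dit --name alpine1 alpine ash
docker run -dit --name alpine2 alpine ash

# Find the IP addresses Docker assigned on the bridge
docker network inspect bridge

# Attach to alpine1; the commands below run inside that container's shell
docker attach alpine1
ip addr show
ping -c 2 google.com        # external connectivity
ping -c 2 172.17.0.3        # alpine2 by IP (use the address from the inspect output)
ping -c 2 alpine2           # fails on the default bridge: "bad address 'alpine2'"
```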
That would be everything for today, guys; thank you very much for watching, and in the next video we will talk about how containers can communicate with each other through port mapping.

Hi everyone and welcome to this lesson, where we will talk about how to map ports using Docker. As we saw in the previous section, Docker uses three different types of networks. Bridge is the network type assigned by default to all containers, and it creates a bridge between the network interface of the container and a virtual network interface on the host; this Ethernet bridge allows the Docker daemon to communicate with the Docker host network device. A container that connects to another container with an exposed port can communicate with that exposed port, and what you can do is map a container port to a port of the host to make it accessible from outside the container. It's important to know that you need to both expose and publish a port for it to be reachable from outside the Docker host network: if you only expose a port, the service is accessible only from inside the Docker containers, which provides inter-container communication; the service in the container becomes accessible from outside the internal network only once you expose and publish the port.

Here are some of the network configuration options that can be set when we execute a container. The first one is --dns, which sets the DNS server; a DNS server is what resolves a domain name into the IP address of the service running behind that domain, so instead of numbers we can use words: google.com also has a particular IP address, but for people it's easier to simply go to google.com than to remember the IP. Then you have --dns-search, which sets the DNS search domains; the -h option, which establishes the hostname that will be added as an entry in the /etc/hosts file; the --link option, which allows a container to communicate with other containers without knowing their IP addresses; the --expose option, which exposes a container port without publishing it to the Docker host; and the --publish option, which lets you publish a port of the container onto the Docker host with a particular format. Those options, plus the --net option that we already looked at, pretty much define the ways you can map ports between containers.

When we add a container to a network, all of its ports are open for machines within the same network and closed for external connections by default. For example, we don't need to publish the MySQL container's ports for the application container, because they're on the same network and the application container can connect to MySQL without a problem; however, we won't be able to access the MySQL port from outside the network until we publish it. In fact, port forwarding is the easiest way to expose services that are running in containers.

There are two main ways to start a container with port forwarding. The first one is the -P option, which is "publish all": it publishes every port declared with an EXPOSE statement in the Dockerfile, selecting random free ports on the host for them. The other option is the lowercase -p, or --publish, which allows you to explicitly tell Docker which host port should be linked to which port in the container; with this option we specify the host port we want to listen on, and the container will fail to start if that port is currently in use. One way to use it is docker run -p <ip>:<host port>:<container port>, where you specify the host IP, the host port and the container port; another option is to specify only the container port; and the final option is to specify the host port and the container port without an IP.
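Here is a small sketch of those publishing options side by side, using nginx purely as an example image; the container names web1 to web4 are made up for the demo.

```bash
# Explicit mapping: host port 8080 -> container port 80
docker run -d --name web1 -p 8080:80 nginx

# Bind the published port to a specific host interface only
docker run -d --name web2 -p 127.0.0.1:8081:80 nginx

# Give only the container port; Docker picks a random free host port
docker run -d --name web3 -p 80 nginx

# Publish ALL ports declared with EXPOSE in the image, each to a random host port
docker run -d --name web4 -P nginx

# Check the resulting mappings
docker ps
docker port web4
```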
Adding the EXPOSE keyword inside the Dockerfile lets you indicate that a specific port should be exposed by the image it builds. So let's go to the terminal now and find out how we can define which ports to publish, using the nginx image. Here I am in my local terminal, and let's run docker run --name docker-nginx -p 8080:80 -d nginx, where you first specify the host port and then the container port. We got an error because I might already have this container created, and yes, I do; if you get a similar error, just make sure that you remove all nginx containers from your Docker host before running this example. You will see that if I now repeat the command, the container is actually created.

Let's run docker ps, and you see that we have one process, the nginx container, and the host entry is 0.0.0.0:8080->80/tcp, which means that port 8080 on the Docker host points to port 80 inside the container. If I run docker inspect docker-nginx, you can see the network configuration, and if I scroll up you'll see that our host port is 8080. This is how you perform a manual mapping using the publish option.

Now, if you don't want to be specific and just want any available port, you can use the capital -P. Let's remove the nginx container, and I'll repeat the same command with the only difference that this time I will not specify a port and will use -P, which publishes every exposed port of the container. Now the container is created, so let's inspect its configuration again, and you'll see that our host port is now 55000, a random number that was simply the first free port available. If I also run docker ps, you can see that we're mapping from host port 55000 to 80, because we chose not to specify a port. This option is quite convenient, because if the specific port you ask for is busy, the container will fail with an error, while with the capital -P you get the first available port and there is a much higher chance of your command succeeding. So make sure you remove this container.
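To connect the EXPOSE instruction with the -P behaviour shown above, here is a minimal sketch; the image tag my-nginx and the Dockerfile it builds are made up for this example, and the host port you get back will of course differ.

```bash
# A tiny image that only documents which port the service listens on.
# EXPOSE does not publish anything by itself; publishing happens at run time.
cat > Dockerfile <<'EOF'
FROM nginx:latest
EXPOSE 80
EOF

docker build -t my-nginx .

# -P publishes every EXPOSEd port to a random free host port
docker run -d --name my-nginx -P my-nginx

# See which host port was chosen, e.g. 80/tcp -> 0.0.0.0:55000
docker port my-nginx
```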
That would be everything for this video, guys; stick around for the next lecture, where I will show you how to manage Docker networks. Thanks for watching.

Hi guys, in this video I will show you how you can manage Docker networks. Docker also lets you create your own network configurations to use with your containers: it allows you to create different virtual networks for your needs, or to segment different containers in the network. This way we can create containers in multiple networks or connect their services with each other, which is a very good way to map the containers the way you like and to define networks that include a specific container configuration and its connections.

So let's see how we can create networks of our own. To do that you can use docker network, and here you will see the options that we already talked about, but have a look at the create option; we haven't used it yet, and I will show you how to use it to create a network. Let's run docker network create --help, and here you can see the different flags for that command: you can specify the subnet settings, the IP addresses, the driver settings; there are multiple options you can experiment with. Let's create a bridge network of our own, which we can use for any purpose we need: docker network create --subnet 10.10.10.0/24, and let's call it dmz, which comes from demilitarized zone. Run that, and the ID printed here means your network has been created. If I now run docker network ls you'll see an additional network called dmz, so the network creation was successful. You can also inspect a particular network with docker network inspect dmz, which shows you the subnet we just created, the name, the driver type (bridge), the scope (local), and a bunch of other options that you can control while creating the network.

Now let's see how you can connect a container to that network. To do so, you use the --network option, followed by the name of the network to which you want to add the container. Let's create a new nginx container and put it into our dmz network: docker container run -d --name docker-nginx --network dmz, and I just missed the image, so at the end of course I need to specify it, which is nginx. You can see that we created our new container, so now let's run docker network inspect dmz; you'll see that the output now has a Containers section, and right there is our docker-nginx container, which was successfully added to the network we just created.
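Here is a compact sketch of the user-defined network workflow from this demo; the network name dmz, the subnet and the container names mirror the ones used above, and some-other-container is a placeholder for any container of yours.

```bash
# Create a user-defined bridge network with its own subnet
docker network create --subnet 10.10.10.0/24 dmz
docker network ls

# Attach a new container to that network at creation time
docker container run -d --name docker-nginx --network dmz nginx

# The container now shows up in the "Containers" section of the network
docker network inspect dmz

# A running container can also be attached to / detached from a network later
docker network connect dmz some-other-container
docker network disconnect dmz some-other-container
```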
Now, when a container is created, it's important to note that containers can be reached by IP address or hostname, but if a container is restarted, new parameters are generated, such as its ID and the IP address it uses. To solve this problem within a network, Docker offers the functionality of linking one or more containers, so that each time one of the linked containers is restarted, the reference the other container uses does not change, because it is tied to the container name rather than to the IP address; in that way the address effectively becomes static. I will show you how to establish links between containers, where one container acts as a data source and the other acts as a receiver: the link allows a container to communicate with another container without knowing its IP address, and we can do that with the --link flag.

First I will remove the previously created container, and let's write docker run -d --name docker-nginx nginx. Once we have that container, we can link another one to it: let's clear the screen and create an Ubuntu container linked to our nginx container with docker run -it --name ubuntu --link docker-nginx ubuntu:20.04 bash. Once you hit enter, your Ubuntu container is created and up and running, and we can verify that the two containers are connected. Within the container, if you run cat /etc/hosts and inspect that file, you can see that at IP address 172.17.0.2 you have the docker-nginx container with its specific ID, which verifies that the two containers have been connected successfully.

In addition to modifying the hosts file, Docker also creates some environment variables with information about the other container; the information Docker makes available this way includes things such as the IP address of the linked container. If I now run set | grep -i nginx, you can see that we have multiple entries with information about nginx, including the version, the TCP port, the IP address and so on. So here we can see, from inside the Ubuntu container, all the information about the docker-nginx container; we can access and discover the services of the other container using those environment variables, and this is why we don't necessarily need its IP address.

Let's try another example. I'll create a Redis container with docker run -d --name my-redis redis, and then connect a Debian container to it: docker run --link my-redis debian env. Because of the options we added, you can see in the output that Docker has configured environment variables with the Redis port inside the Debian container that we just created; Docker has imported the environment variables from the linked container into the Debian environment as well. Even though this functionality can be very useful, we should always keep in mind that using environment variables to store secrets such as API tokens or database passwords can increase the risk of data being exposed. I hope this was useful for you guys, because now you know in detail how you can connect containers in Docker. Thank you very much for watching, and I will see you in the next section.
Now let's talk about Docker container monitoring, and in particular the metrics and the events that show all the information about a container. When you work with Docker, especially in production, one of the most important things is measuring the performance of your containers, so it's important to define a comprehensive strategy to monitor your Docker infrastructure, with a collection of sources: events, statistics, configuration records and more. Even though there are several ways to monitor the performance and control the execution of Docker containers, we always need to view the logs and observe the statistics of the containers. Those statistics are very important, and the most important ones are CPU and memory usage.

Most applications send their logs to standard output; when you're using Linux and other operating systems you normally have standard input, standard output and standard error, so it is normal and makes sense that the logs are part of the system's standard output, and we can see the logs directly in the console while the container is running. Log management is probably the most important task in the world of security, because it allows you to monitor what is happening inside your containers, and since different containers can run simultaneously on the same Docker host and each of them generates its own logs, centralized management of the container logs is quite necessary.

For example, if you want to check the logs of a particular container, of course you first need to run it, so let's run a Redis container and have a look at its logs. If you run docker container ls you'll see the ID of that container, and then you can do docker logs followed by that ID, and as you can see you get access to the logs immediately: you will see the version, you'll see that the container has been initialized and is ready to accept connections, you can also see things like the port; basically everything that happened while the container was created and while it is executing appears here in the log output. In this case the Docker engine collects all the standard output of the containers in an execution log file, and if you write docker logs --help you'll see all the options related to logs: you can, for example, follow the output, get additional details, and so on.

On the other side, if you want to check the statistics of one or more containers in real time, you can use the stats command. Write docker stats and you'll see the stats of the container that is currently running; if you want to check the options and some additional information, docker stats --help shows that stats displays a live stream of container resource usage. When you run the stats command it displays the stats of all containers currently running on your device, so I will create another nginx container, or actually let's create two more Redis containers, my-redis1 and my-redis2. Now we have three containers running, and if I run docker stats you will see how much CPU is used by each container, how much memory (you can see this is very little, just 2 megabytes), what their memory limit is (about 7 GB), and also their network I/O; you can of course also see their name and ID. So if you want to quickly monitor your containers and see how many resources they're using, because you suspect a memory leak for example, you can definitely use docker stats for that.

Another metric source that we're going to look at are the events of the Docker containers. Every process in every container normally generates a flow of events during the container life cycle, and the docker events command helps you see those life-cycle events occurring in real time, for absolutely every container that Docker is running. The sequence of events is useful for monitoring scenarios and for performing additional actions, such as receiving an alert when a specific task ends, and when we're running many containers on the Docker host it is useful to see the container events in real time in order to monitor or debug them. So let's see how we can use docker events: open a terminal and type docker events --help, and there you can see the different options. Mainly you can filter between the different events: you can filter by ID, container, event, image, plugin and so on, so it pretty much depends on how you want to structure your output. You can also use the --since and --until options in order to monitor events for a particular time frame.
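Here are the monitoring commands from this lesson gathered into one sketch; my-redis is just the example container name used above, and the dates are arbitrary.

```bash
# Logs: everything the container wrote to stdout/stderr
docker logs my-redis
docker logs -f my-redis          # follow the output live

# Stats: live CPU, memory, network and block I/O usage per running container
docker stats

# Events: the life-cycle event stream (create, start, stop, die, ...)
docker events --filter container=my-redis
docker events --since 2021-01-01 --until 2022-10-10   # restrict to a time window
```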
Let's actually try that: docker events --since 2021-01-01, so from the beginning of 2021, --until 2022-10-10. You can see that, since I had containers running within that time period (and actually all of them ran on the 7th of October 2022), all of those event logs show up right here, ready for analysis. If you want to have a look at the official Docker documentation on events and dig into a little more detail about how to collect and filter them, you can definitely use that page in the Docker docs. That said, thanks for watching, guys, and in the next video we will talk about performance monitoring.

Hi everyone and welcome back to the course. In this video we will talk about performance monitoring: I will show you a tool that allows you to explore each layer of a Docker image, its contents, the sizes, and the percentage of image efficiency, and this is the dive tool. You can find it on github.com at the link provided here; first I will show you some of the main features of the tool, and then I will show you how to actually use it. Dive shows the Docker image layer by layer, with its contents: when you select a specific layer, the content of that layer is displayed in combination with all the previous layers. You'll also see an indicator of changes for each layer: the file tree displays files that have been changed, updated, removed or inserted. Finally, in the lower-left panel you'll see the basic information for each layer and an efficiency metric that tells you whether your image is space-efficient. An image can be space-inefficient due to file duplication across layers or files moved between layers, and for that reason the tool also gives you the total wasted file space.

So let's go to the terminal and write docker pull quay.io/wagoodman/dive (wagoodman is the creator of the tool), which pulls the image onto your device. After that, if you want to check what the tool looks like, you can run docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock quay.io/wagoodman/dive:latest --help; we mount the Docker socket as a volume so that dive can talk to the Docker daemon. Once you do that you'll see all the options of the tool, with a brief description that helps you understand what it does, plus options such as help, the CI mode and others that you can explore in your own time. But let's see how we can actually analyze an image using the tool.
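For reference, here is the dive invocation from this demo as a single sketch; the registry path quay.io/wagoodman/dive is the one used in this course, and nginx is just the example image being analyzed (releases are also published on Docker Hub as wagoodman/dive).

```bash
# Pull the dive image
docker pull quay.io/wagoodman/dive:latest

# Run dive against a local image; the Docker socket is mounted so dive
# can read the image layers through the Docker daemon
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  quay.io/wagoodman/dive:latest nginx
```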
Let's run docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock quay.io/wagoodman/dive:latest and then write the name of one of our images, so let's use nginx, and that will run the analysis for that particular image. So we're ready, and if I extend this to a big screen we can see the metadata and the layers of the image that we're analyzing. We can see the layer details and the folder structure of the whole image right here; the different layers are separated by different hashes, so we can keep track of them as well. On this side you can see all the folders and the complete file system, then the permissions and sizes of those files, and there are also key bindings with which you can highlight certain types of files: if you press Ctrl+A you get all the added files, Ctrl+R the removed files, Ctrl+M marks the modified files, and Ctrl+U the unmodified ones. Obviously, when we select both modified and unmodified, all files disappear, since those two categories are complete opposites. That's all I want to share with you about the dive tool: make sure you check all the layers and the image contents for each particular layer; you can really move in and out of them and explore and analyze the image completely. That's it, thank you very much for watching this video, and I will see you in the next one.

Hi everyone and welcome back to the course. In this video we will talk about the key Docker administration tools. Every container consists of a complete execution environment, which includes the application, its dependencies, libraries, binary files and configuration; this is called containerization and allows you to create a level of abstraction for your platform. However, to run this well you need specific management tools to move Dockerized applications to production containers and ensure their security, automation and administration, and for this reason, in the first video of this section, we're going to talk about Portainer, which is a tool that does exactly that. Organizations and developers should consider the challenges associated with managing Docker environments and the need to implement business solutions that support effective management of Docker containers, which means they should have a technology that allows them to successfully manage problems of distribution, compliance and governance of the containers.

There are three main stages in the container life cycle. The first one is development: in this stage developers create and deploy Docker containers that include items like application code and libraries, then they test the application, correct the errors that occur, add functions and improvements, and then create a new Docker image and deploy it in new containers; this process normally continues until the required standards are met. In the second stage, managers coordinate the automation of the application environments, which includes Docker builds, testing and deployment pipelines. Finally, in the last stage, the containers are deployed in production and remain operational and available until they are out of scope; this is the stage at which the final challenges are critical, such as orchestration and governance, security and container monitoring. There are five major container management challenges that every organization struggles with and should address.
The first one is the lack of control: developers need independence to quickly create, implement and test application containers, while on the other side the operations teams need control and governance to avoid excessive consumption of resources such as CPU or RAM. The second challenge is the path from development to production: here it is important to maintain quality and safety while the amount of code grows, so you should always know that even if your application works properly and you have no issues, once you try to scale it or make it available to a bigger audience, a variety of issues may appear. Third, the cloud infrastructure does not disappear and will continue to coexist with the Docker infrastructure, and implementing complete applications that span Docker and other infrastructures always requires more advanced capabilities to orchestrate the applications and manage the running environments. Fourth, as we saw in the previous videos, Docker containers can also contain vulnerabilities, simply because they include parts of operating systems, so protecting the environment requires security in the host, the Docker layer, the containers and the images. And finally, the Docker environment always requires special monitoring capabilities, which could be API-level integrations with Docker or instrumentation built into the Docker image; here we should make sure that we follow all the specific requirements for our Docker containers.

So the first tool that we're going to look at today is called Portainer, an open-source web tool that runs itself as a container and allows you to manage your Docker containers easily and intuitively through a graphical user interface. Let's open a terminal and run Portainer to see how it works. First, start your Docker daemon, and now from the terminal write docker run -d to run the container as a background process; then specify the ports with -p 9000:9000; then --name portainer; then --restart always; then -v to define the volumes we'll be able to write to, namely /var/run/docker.sock:/var/run/docker.sock and -v portainer_data:/data; and finally the image, portainer/portainer. Note the two typos I made at first: I forgot the -p before the port mapping, to state that we want to publish the port, and the leading slash before var.
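Here is the full Portainer command from above as a clean, copy-pastable sketch; the image tag portainer/portainer is the one used in this lesson (newer releases ship as portainer/portainer-ce), and the volume name portainer_data is arbitrary.

```bash
# Create a named volume for Portainer's own data
docker volume create portainer_data

# Run Portainer as a background container, restarting automatically,
# with access to the Docker socket so it can manage the local engine
docker run -d -p 9000:9000 --name portainer --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer

# The web UI is then available at http://localhost:9000
```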
So let's run the command, and you can see that we now have a new process here; if you check back in your Docker UI, you can see that you have one container running called portainer on port 9000. If you want to check that from the terminal, run docker container ls and you'll see pretty much the same thing: the ports, the protocol and the name of the portainer container. Now you can access Portainer on port 9000, so let's go to the browser and open http://localhost:9000, and you can see that you're on the Portainer page running on your device. We'll come back to this page to inspect your container system, but first let's go to the terminal and inspect our Portainer setup: run docker volume ls and you can see portainer_data among your volumes, and if I run docker volume inspect portainer_data you'll see the Portainer volume with its creation date, mount point, name and local scope.

Now that we've made sure the volume exists and Portainer is running in our Docker environment, go back to your browser and define a username and password. Here you enter your username and password; bear in mind that if you haven't registered yet, you might see a different page, but just follow the registration instructions to create an account, then select login, and you can see that you're logged into your account and can pretty much see your Docker setup with all the volumes, the running containers and so on. Before we continue, a very important note: if you run into problems, for example if you forget your password, you can use the admin password reset page in the Portainer documentation, which walks you through three steps to reset it. It is pretty straightforward: first you stop the container, then you pull the Portainer helper image for password resets, and then you run the command shown there, which resets the password and prints your username and your new password.

So let's go back to the Portainer application. As I said, you can see here information about your current containers, which of them are running, how many volumes you have and how many images are currently installed on your device. If I click on the dashboard, what you see is pretty much a summary of your Docker system: the total numbers of containers, images, networks and volumes; you can of course click through to each of them, and you can also inspect the amount of space that your images take on your device. Then, if I click on Containers, the container menu shows a list of all the containers you currently have, and you can execute several actions that you would normally run through the command line, such as starting, stopping or removing a container. Let's open one of the containers: here we can see the container's data and ID, and as you can see you can start it, stop it, kill it, restart it and so on, or remove it completely. Let's try to start it, and you can see that the container starts successfully; if I go to the Docker UI, I can see that the container I selected is now running there as well. Now let's stop that other container; it normally takes a moment until that's done, so the container is now stopped, and if I go back to the Docker UI you can see that
it is stopped there too. From here, in my opinion, you have many more options and a better view of your containers compared to the Docker UI, so you can definitely use this page for starting, stopping, duplicating or relocating your containers; it's really helpful for managing them, and this is the main reason why Portainer is a great tool for container management. In addition to the menu options above, in the actions section you can also see other buttons, such as Logs, Inspect, Stats, Console and Attach. For example, if you select Logs, you can see all the logs that your container has created in real time, so as your container runs you see the logs changing in front of you. Let's start the container again and try the other options as well. If you select Inspect, this basically gives you all the information you need about your container, and you can expand the tabs to see things such as the host configuration, the host paths, mounts, volumes and so on. A very interesting tab is the Stats tab, because here you can really see, in these graphs, the status of your memory usage, CPU, network and processes over time: here is your network usage, you can see that the CPU isn't used much because we're not doing active tasks with our container, and it's the same for the memory. You can also select the refresh rate, meaning how often those graphs are refreshed, and you can use this information to manage your containers. Going back, the next thing I want to show you is the Console, which allows you to interact with your container directly from here: you can see that we have a bash console inside the container that I just started, so the console works perfectly fine, and of course before exiting you can disconnect from the console and go back. Finally, there is the Attach button, from where you can attach to or detach from the container's console. So you can see how many options you have here; you really get a better user interface and can use plenty of features without the need to type a single command into the terminal.

Let's now look at the Images section, where, as you'd expect, you get access to all the images currently installed on your device; this is the same as running the docker images command in the terminal, which shows you the list of images. You can see that I have five pages of images, so about 50 images installed on my device, and if you click on an image you get some information about it: you'll be able to delete the image, export it, see the sizes of the different image layers and so on. This is a very good option if you want to manage your images; you can also select them from here and decide which ones to remove, or build a new image directly from this button. You can also use the import and export actions to import or export a particular image of your choice. Next, if you want to check out your networks, you can select the Networks button. You know that in the previous videos we had a section where we were learning how to create networks, such as bridges between different containers or external ones; in the same way, you can add and remove networks from here without typing a single line of code. So if I want to remove the dmz bridge, I can simply select it and click remove.
You can see that the dmz bridge is completely removed. You can also add a network: if you click add network you are taken to a page where you can specify what type of network you want, the name of the network, the subnet mask, the IP range and the gateway, so pretty much how many IP addresses you'd like to have available, what their range is and what your gateway is. You are also able to add some advanced configuration: whether you want to restrict external access to the network, whether you want to enable manual container attachment, and so on. It is really convenient to use the user interface compared to the terminal. Then, in Volumes, you can see the volumes that have already been created in the volume list, and you can add and remove volumes from there as well: to create a new volume you specify the name of the volume and the path to it, and if you want to remove certain volumes you can simply select them and click remove.

These are the main options, guys. You can also check out the App Templates, from where you can install images and distributions of the most popular containers out there: if you want Ubuntu, Node.js or MySQL, you can see that you have all of those here and more. So if I select Node.js, for example, simply by selecting it and clicking deploy the container I can deploy a Node.js container; if you don't specify a name, an automatic name is allocated to your container. If I deploy Node.js now, it takes some time: the image is pulled and then the container is started in our Docker setup, and you can see that in about 30 seconds the Node.js image has been downloaded and a container based on it has been started on our machine. So it is pretty simple to create new containers from new images using Portainer as well. I hope you liked this management tool; it will really help you to manage your Docker containers and have better control over them. Thanks for watching, and in the next video we will talk about an alternative, so you'll be able to choose between the two of them, or you might want to use both, it's entirely up to you, but in the next video we'll talk about another tool called Rancher. Thanks for watching.

Another tool that you can use to administer your containers is called Rancher, so let's see what the Rancher architecture looks like. As I said, Rancher is a platform that helps you manage containers and stacks; to do that, you can simply use a remote server that is connected to Rancher, and with Rancher you can initialize multiple clusters from one single central place and manage them. In a standard production setup, Rancher would typically run on its own highly available Kubernetes cluster, and if you want to learn more about Kubernetes, that is exactly what we are going to start using in the next section, where we will see the key security techniques and vulnerabilities addressed with Kubernetes. Here are some of the main advantages of Rancher. The first one is that Rancher allows you to create as many environments as you need and manage users and roles for the different environments. It also allows you to select a container orchestrator from several options, namely Cattle, Mesos, Docker Swarm and the most popular one, Kubernetes. There is also a public catalog, called the Rancher community catalog, where you can
contribute Rancher applications together with other people. Rancher is also known for making single- and multi-cluster setups that can be deployed quite easily, and finally it is a very easy tool to use, since you have a user interface and simplified cluster operations for security policy enforcement. So you can see that there are many advantages to using Rancher; you can definitely check out how to install and run it on your device and explore the options on rancher.com. I would say it is a little less popular than the previous tool that I showed you, but definitely feel free to check it out, because it has its own advantages. That's it, thank you very much for watching, and I will see you in the next video.

Hi everyone. In this section we'll start using Kubernetes, one of the most essential tools enabling DevOps to unify development and operations. In general, Kubernetes is an orchestration tool that enables us to take any software solution to a platform, as long as a Kubernetes cluster is deployed, and we will talk in detail about each and every component of Kubernetes and deploy it in this section. Today all major cloud computing providers offer Kubernetes as a service, freeing up the work that goes into maintaining and deploying a cluster, and in that way we can build a software solution that avoids vendor lock-in, being able to distribute the solution through any cloud. Vendor lock-in means that a customer is tied to a company or service without the option to move away from it; Kubernetes allows us to run the solution within our own clusters, so we are not locked to a particular service or company. In addition to that, Kubernetes performs container monitoring tasks: it allows us to ensure that the desired number of containers is up and running, bringing our cluster to high availability.

Kubernetes has clusters that are made up of nodes, and those nodes run pods that offer services. A node corresponds to a real or virtual machine that contains the services necessary to run the pods scheduled on it, and every pod represents a process running within the cluster and can be made up of one or more running containers. The use of Kubernetes is not only oriented to the needs of large companies but also to smaller-scale projects and to developers who want to create their own content outside the market.

Some of the main features of Kubernetes are secret and configuration management: with Secret objects, Kubernetes allows you to store and handle configuration information like passwords and authentication tokens securely, so you can deploy and update the application without having to rebuild the container images or expose secrets in your stack configuration. Kubernetes also allows us to scale our application, because with it you can spin up containers within minutes to meet the demand on your application. It also allows you to recover from errors or crashes on the server instantly by restarting or replacing the damaged containers. It also does load balancing, so we don't need external tools to generate services and load balancing outside of Kubernetes: Kubernetes takes care of everything automatically, assigns services their own IP addresses and creates DNS names for them. And it also supports automated rollouts, so we can update our application, or roll back to a previous version, progressively, while at the same time giving users continuous availability. So let's look at the main components of the Kubernetes cluster.
The elements that make up the architecture of a Kubernetes cluster fall into two main categories: master components and node components, so let's look at those. As master components you have the kube-controller-manager and the cloud-controller-manager, which connect to the kube-apiserver; then you have a data store (etcd) that is also connected to the API server; and you have the kube-scheduler. All of those are connected to the node layer, which consists of the different kubelets on the nodes and communicates with the master layer through the API server. The master components are in charge of deciding which node each container runs on, maintaining the state of the cluster, and ensuring that the desired number of containers is running at all times; they are also responsible for updating the application in a coordinated manner when new versions are deployed.

Let's look at each of the components of the master node. First, as I said, the kube-apiserver validates the configuration data for API objects like pods, services, controllers and other cluster-related items; this component exposes the Kubernetes API and serves as the front end of the control plane. The control-plane nodes run the Kubernetes API server, scheduler and controller manager, and those nodes take care of the runtime tasks that ensure the cluster maintains its configuration. Then we have the kube-controller-manager: it runs the control loops that use the API to monitor the shared state of the cluster and make modifications in order to move the cluster from its present state to the desired state. Then you have the cloud-controller-manager, a daemon process that runs on the master node and is in charge of the cloud-specific control logic; those controllers have dependencies on the main cloud providers, such as Amazon AWS, Google Cloud and Azure. Then, also connected to the API server, you have the kube-scheduler, and this module is in charge of workload distribution, as well as maintaining the relationships between the pods in order to improve cluster performance. And then you have the cluster data store, etcd, which is responsible for maintaining all the status information about the cluster and its configuration; in large clusters it can be distributed among several nodes, which do not necessarily have to be master nodes of the cluster itself.

Since the kube-controller-manager is one of the key elements of Kubernetes, note that it consists of four main controllers. The first one is the node controller, which is responsible for noticing and responding when a node goes down. Then we have the replication controller, which is responsible for maintaining the correct number of pods in the system. Then you have the endpoints controller, which is basically the glue that brings pods and Kubernetes services together. And finally you have the token and service account controller, which creates default accounts and API access tokens for each namespace. Normally all the information related to the Kubernetes nodes and containers is stored in etcd, which works as a key-value store. The master components are in charge of making global cluster decisions, as well as detecting and responding to various events; those components can run on any server in the cluster, but they are often started on the same machine when a Kubernetes cluster is deployed, and user containers are rarely executed on that same machine.
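If you already have a cluster available (for example a local minikube or kind cluster), here is a small sketch of kubectl commands you could use to see these control-plane components for yourself; node-1 is just a placeholder node name.

```bash
# The nodes that make up the cluster, with their roles and versions
kubectl get nodes -o wide

# On most clusters the control-plane components run as pods in kube-system:
# kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kube-proxy, ...
kubectl get pods -n kube-system

# Kubelet version, capacity, conditions and the pods scheduled on one node
kubectl describe node node-1
```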
Now let's talk a little bit about the worker nodes. First, here we have the kubelet, which is nothing more than the principal process that runs on each worker node; this process is responsible for managing the node's connectivity to the cluster, as well as keeping the cluster informed about the various pods and workloads that are operating on its node. Connected to every kubelet you have a kube-proxy: the proxy module is in charge of managing and balancing the various network flows and functions as a network proxy; it keeps track of a set of network rules and allows the pods to communicate with one another inside or outside the cluster. In fact it uses the operating system's packet filtering layer if it is available, and its main job is to redirect traffic. Finally, the container runtime is the software responsible for running the containers, and for that we're mainly using Docker. These are all the components of the Kubernetes architecture; I hope this lecture was clear for you, guys. Thanks for watching, and in the next lecture we will talk about Kubernetes objects.

Okay, let's now talk about Kubernetes objects. Kubernetes objects are persistent entities in the cluster system that are used to represent the cluster state. They are configured using YAML files and pretty much determine which applications are running in which containers, and on which node those containers are running; they also determine the resources available to those applications, and the processes and rules associated with them. Kubernetes verifies that an object exists and functions once it is created in the cluster. The specification of an object describes the desired state, the features and the configuration that you want the object to have, while the status describes the point at which the object is at the current moment; both of those are supplied and updated by Kubernetes to ensure that we have the correct state at all times.

Due to the large amount of information that is usually associated with each deployment, it is not convenient to do the configuration directly with commands; for that reason we use a YAML file containing the data that you can see in front of you. This particular YAML file is taken from the nginx web server example, and it specifies two replicas of the nginx web server running inside two containers. The key points defined in the YAML file are the specifications for the applications that run in the containers and the nodes they run on, the resources available for the application, and the policies and rules, as we mentioned. There are three main fields that I want you to pay attention to in a YAML file: the apiVersion, which specifies which version of the Kubernetes API we're using to create the object; the kind, which is the type of object that we want to create; and the metadata, which is the data that allows an object to be uniquely identified, consisting of strings such as the name, UID and so on.

Now let's have a look at some of the main components, or objects, of Kubernetes that you need to know. One of the most important objects you need to understand is the pod; this is the Kubernetes fundamental unit, the smallest and most basic deployable object in the object model, and a pod contains one or more software containers together with storage resources and network resources to support those containers. The next objects are the controllers: those objects create and manage multiple pods, they handle replicas and they provide automatic repair capabilities.
For example, a controller can automatically replace a pod scheduled on a node with an identical replica on a completely different node if the first one fails. There are different types of controllers, such as Deployments, StatefulSets and DaemonSets. Then we have the Service object, which is an abstract way of exposing as a network service an application that is running on a number of pods: Kubernetes can give a set of pods their own IP address and DNS name and balance the load between them. The presence of services is mainly motivated by the fact that pods in Kubernetes have finite life cycles; Kubernetes provides name resolution for the services within the cluster, in addition to assigning them IP addresses, so we will be able to communicate among the pods using the names of the previously created services. Then we have the Ingress object, which provides externally accessible URLs, load balancing, TLS termination and name-based virtual hosting for Kubernetes services by exposing HTTP and HTTPS routes from outside the cluster to services inside it; in real life, Ingress is simply used to expose HTTP and HTTPS ports to the world so people can access your application. The other important object is the Ingress controller, which is deployed as a container in a pod in the cluster; there are several load-balancing providers, such as HAProxy and nginx, and both of those define their own Ingress controllers that you can use.

All Kubernetes objects normally consist of metadata, a specification and a status. You interact with the Kubernetes API by providing metadata and a spec, in JSON format, with the required fields, in order to create an object; the most typical method is to use a command-line client such as kubectl, to which you can supply a file in YAML format that is then transformed to JSON to make the API request.

Since I mentioned that one of the most important components in Kubernetes is the pod, let's talk a little more about them. Pods are the smallest deployment unit in Kubernetes. You can also specify pods with many containers, which forces those containers to be deployed on the same node at all times; this is quite useful if the containers communicate over the file system. Here you can see the different parts of a pod: in this example, a web server container and a content manager container share a volume with the files served to the customers. Normally containers live inside a pod and have access to its network and storage resources; in terms of networking, each pod is assigned a unique IP address and every container in the pod uses the same network, both the IP and the ports. If you want to define a container within the pod, you give it a name to identify it within the cluster and the image of the container to be deployed, so you pretty much define the container using a name and an image key. We can also specify a list of variable names and values that will be injected into the container as environment variables, using the standard env key. You can also define the list of ports used by those containers; then you can set a pull policy, which is the image download policy, for example indicating that the image should always be pulled before deploying the container. You can also add resource reservations and limits: we can define reservations of resources such as RAM or CPU using the resources key, and in the same way we can also define limits. Finally, using readiness probes, you can find out when a container is ready to serve traffic after it has been started, and whether it is still ready for that task.
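As an illustration of those container fields (name, image, env, ports, pull policy, resources and a readiness probe), here is a sketch of a single-container pod manifest applied with kubectl; the pod name nginx-demo, the environment variable and all the numbers are made-up example values.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx                      # name of the container inside the pod
    image: nginx:latest              # image to run
    imagePullPolicy: Always          # always pull the image before starting
    ports:
    - containerPort: 80              # port the container listens on
    env:
    - name: WELCOME_MESSAGE          # injected as an environment variable
      value: "hello from the pod"
    resources:
      requests:                      # reserved resources
        cpu: "100m"
        memory: "64Mi"
      limits:                        # hard limits
        cpu: "250m"
        memory: "128Mi"
    readinessProbe:                  # when is the container ready to serve traffic?
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
```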
Using readiness probes you can find out when a container is ready to serve traffic after it has been started, and whether it is still ready for that task. So in order to specify the structure of a pod you can use the YAML format, and as an example I will again use the nginx image. Here you can see that we're defining the kind, which is Pod, you have the apiVersion, and then you have the metadata for the pod; for example the name in the metadata is nginx, and you can also define a namespace and labels. Each existing pod contains the needed application, storage resources, an IP address and other container-specific parameters. In Kubernetes a pod represents a single instance of an application, which might consist of one or more containers that share resources. We should always remember that all the containers in a pod share absolutely the same IP address and are accessible from the localhost address as well. A pod cannot recover by itself when it dies for some reason; the Kubernetes controllers decide whether to create a new one to meet the total number of pods desired by the user. For storage, each pod can specify a shared storage that all of its containers can access to share the necessary data. The volumes created can be persistent, to save the necessary information even if the pod has to be restarted. Also, when a pod is destroyed, all the information inside that pod gets destroyed as well, so for that reason we really have to use volumes in order to be able to create persistent applications. We really need volumes because the files on disk inside the containers of a pod are not recoverable, and of course this creates two main problems: the first one is that the container's files are lost when it stops its execution, and the second one is that it is usually necessary for two containers running simultaneously in the same pod to exchange information. You can see here on the figure in front of you how this works and how you can define a volume for two particular containers: you can think of a Kubernetes volume as a directory that the various containers within a pod can use to save and share the same information. As you can see on the right side, there are several main types of volumes that are normally used. The first one is the emptyDir: this is a basic type of volume that is created when a pod is first allocated to a node, so it starts out as an empty directory and then the containers fill it with the necessary data; this volume stays active as long as the pod that contains it is active, and the data is removed once the pod is finished as well. The next type of volume is called the NFS volume, and NFS will let you mount an existing network file system that is shared with the pod; when the pod completes its execution the volume is unmounted rather than removed, which allows the information that it contains to be accessed by other pods at the same time. The other type of volume is called the persistent volume claim, and this is used to mount a persistent volume, which is a way to use storage space in a durable way. And the final type of volume is the Secret, which is used to pass confidential information such as passwords, access codes or tokens to the pods; they are stored in a key-value pair format using the in-memory tmpfs file system that we talked about earlier.
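As a quick illustration of the volume idea, here is a hedged sketch of a pod where two containers share an emptyDir volume; the names, images and commands are just assumptions for the example:

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]   # writes into the shared directory
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]                             # can read /data/msg written by the other container
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}              # created empty when the pod is scheduled, removed when the pod is finished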
Now another topic that I want to talk about are the Deployments; you can find the Deployments on this page right here on the Kubernetes website. They use a ReplicaSet, which is in charge of controlling the number of pods and their location between the different nodes. So you can see a YAML file for nginx pods, and here you can see that the kind is Deployment, you can see the apiVersion and so on. There are a few main components of the Deployment: the name of the Deployment, which is the name with which the Deployment will be identified; the Deployment labels, with which we will be able to reference the Deployment; the number of replicas desired, which is obviously the number of replicas needed for the pod that we want to deploy; the revision history, to control the versions; and the upgrade strategies, which are a set of specific strategies used during the deployment. Another interesting feature of Kubernetes are the ReplicaSets, which ensure that the desired number of replicas of a given pod is always running within the cluster; this will ensure that our pods are always available, and here you can see the YAML file of that object. Kubernetes uses the kube-controller-manager and kube-scheduler services to run ReplicaSets, so for example if a Deployment specifies that it needs five replicas of a pod, a ReplicaSet will ensure that there are always five active replicas in the cluster. The Services are another important object in Kubernetes: a Service in Kubernetes is an abstraction layer used to route traffic to the corresponding pods, so it is not necessary to find the IP address for each of them, and they support both TCP and UDP protocols. Labels are commonly used to define which pods should be routed to, so the service simply needs that label to match, regardless of how the pods were created. Here on the figure in front of you I draw the basic scheme of how Kubernetes services work, and you can see there are several components such as the client, the ClusterIP, the API server and kube-proxy, which is connected to the backend pods. The ClusterIP, which is this cloud here, is the default service type and exposes the service with an internal IP to the cluster; this means that the service will be accessible only by objects inside the cluster. If you notice, here we have the NodePort, which exposes the service on each node using the node's IP address and a specific port, in that case 9060; this automatically creates a ClusterIP service to which the NodePort service is routed, so the service can be accessed from outside the cluster using the node IP and that port. You also normally have a LoadBalancer, which exposes the service externally and uses the NodePort and the ClusterIP to follow the path between the pod and the outside world. We already talked about the ReplicaSets, which are intended for pods that have the same state and so can be exposed by Kubernetes under the same IP address with a load balancer between them. There is also another type of set, called StatefulSets, which have a completely different approach. The question is actually: what happens if one of the pod replicas of the deployed application has a different status than the rest? In this case we cannot reach any of the pods under a single IP address with a balancer, so here the idea is to handle the application where those differences occur. For that we're using the StatefulSets, which are intended for pods that require unique network identifiers, persistent storage, or deployment and upgrades within a certain order; in that case the pods maintain a unique identifier that persists even if they are rescheduled to other nodes.
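Pulling the Deployment, ReplicaSet and Service ideas together, here is a hedged sketch of a Deployment with three nginx replicas exposed through a NodePort service; the names and the node port value are illustrative assumptions (NodePort values normally have to fall in the 30000-32767 range):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                      # the ReplicaSet keeps three pods running at all times
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort                   # exposes the service on every node's IP at a fixed port
  selector:
    app: nginx                     # traffic is routed to pods whose labels match
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080                # hypothetical port in the allowed range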
So now that you know how Kubernetes services work, let's talk about the Kubernetes networking model. Decoupled, microservices-based applications rely heavily on networks to mimic the tight coupling that monolithic applications used to have. Networks in general are not the easiest thing to understand, and Kubernetes is really not an exception, because it basically needs to address several main networking challenges. The first one is container-to-container communication within the pods, which needs an isolated network space for each container; that is built with the help of the kernel features of the underlying operating system. On Linux, for example, this isolated network space is called a network namespace, and it can be shared between containers or with the host operating system. A network namespace is created when a pod is started, and all containers running within the pod share that network namespace so that they can communicate via the same localhost. The other challenge for the Kubernetes network is pod-to-pod communication across the cluster nodes. As you know, pods are assigned to nodes in the Kubernetes cluster in a random way, so they should be able to connect with all other pods in the cluster regardless of their host node, and all without the use of network address translation, or NAT; this is a very important rule for any Kubernetes-based network implementation. The Kubernetes network model aims to reduce the complexity and treats the pods in the same way that it treats virtual machines that are on the same network: each virtual machine receives an IP address, and in the same way each pod also receives a separate IP address, so this model ensures pod-to-pod communication in the same way in which virtual machines communicate with each other. And the final challenge is the external communication from the pods: a successfully deployed application running on pods within a Kubernetes cluster also requires accessibility to and from external networks. In this case Services are used to encapsulate the network rule specifications on the cluster nodes and are used by Kubernetes to provide connectivity; the apps become accessible from outside the cluster using a virtual IP address, after using kube-proxy to expose the services to external networks. So this will be the final slide from this video guys, thank you very much for watching; we covered quite a lot of the Kubernetes architecture and objects, and in the next video I will show you how you can deploy Kubernetes on your device and run clusters. Thanks for watching.

Now let's talk about some of the main tools for deploying Kubernetes. Yes, there is more than one, and every one of them has its own advantages: which Kubernetes tool to use really depends on the tasks that we want to perform, and because of that each of those tools has its own characteristics and advantages. Using those, we can deploy Kubernetes on our system. The first and actually the most popular tool for deploying Kubernetes is Minikube. You can execute this tool on Linux, Windows or macOS, and it relies on virtualization to deploy a cluster, for example on a Linux virtual machine. Another tool is kubeadm, and this tool can deploy Kubernetes in a variety of ways; its main benefit is the ability to launch a minimal viable Kubernetes cluster anywhere. Another tool is called kops, or Kubernetes Operations, and this provides a set of tools for installing, operating and removing Kubernetes clusters on cloud platforms; you can use it on the most popular cloud platforms such
as AWS, Google Cloud and so on. The next tool is called MicroK8s, and this is quite similar to Minikube, but the main difference is that this tool is only available for Linux, not for Windows and macOS. Then we have k3s, which normally runs on any Linux distribution without any additional or external dependencies; the specific thing about k3s is that it replaces Docker, since it ships with its own bundled container runtime and keeps its state in a lightweight embedded database. Another advantage of k3s is that it doesn't consume too much RAM or disk space: it actually needs only about 512 MB of RAM and 200 MB of disk space. Another interesting tool is kind, or Kubernetes in Docker, which actually runs Kubernetes clusters inside Docker containers; it supports multi-node clusters as well as high-availability clusters, and kind can also run on all common operating systems such as Windows, Mac and Linux. And finally there is a tool that is a new project, so I don't recommend you to use it yet; it is meant to be an extension of k3s and it is called k3d. So the final choice of which tool to use pretty much depends on the needs you have for each specific project; there is no better or worse solution. In this video we will use the simplest way to start a Kubernetes cluster, and because of that we're going to use Minikube. Minikube will configure a single-node cluster, so there are some limitations that could make the tool useless if you need to orchestrate applications with heavy load or launch a product in a production environment, but it is very useful in development and if you want to test your software products. Running the tool will pretty much launch a virtual machine with Kubernetes installed, and we will be able to run Minikube directly on top of the Docker host already installed on our computer. So let's try to do that. On the web page minikube.sigs.k8s.io/docs/start you can see the installation steps for your operating system; for example, since I'm using macOS, I will select macOS, and here you can find all the options that you need to select and the commands that you need to run. So let's try that: I will copy the first line here, go to a terminal, and paste the command there.
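For reference, the commands from that page look roughly like the sketch below; this assumes macOS on Apple silicon, so the exact file name will differ for your OS and CPU architecture, and you should always copy the current commands from the Minikube start page itself:

# download the latest Minikube binary (macOS / arm64 build assumed here)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-arm64

# install it into your PATH (you will be asked for your password)
sudo install minikube-darwin-arm64 /usr/local/bin/minikube

# start a single-node cluster; with Docker running, Minikube can use the Docker driver
minikube start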
So what we're doing here is pretty much accessing that web page in order to download the right version of Minikube, and once that's done, let's copy the next line, which will install Minikube on our device; just make sure that you type your password. Then you can simply start Minikube with the command minikube start. As you can see, we're now starting our Minikube virtual machine, and I'm getting a message here because my computer actually has an M1 processor, so it advises me to use another build of Minikube, but this one is fine as well. You can see that it detected that there is no Docker container called minikube, so it will now create it automatically: it creates a Docker container for Minikube and it also tells you how many CPUs it uses and how much memory and disk it takes. Now you can see that the process is completely done, it's finished, so if I go to my Docker machine you can see that Minikube is now running on your device, and this means that Kubernetes is successfully installed. Let's go to our terminal right here and write minikube dashboard, and you can see that we are automatically sent to this address where you have all the information about your pods and containers; also here, on the left-hand side, you can see the different services, configurations and storage in your environment. From here you can see your ReplicaSets, your pods and your Deployments, you can see the apps that you have and the pods allocated to them, and you can see the pods below, for example for my nginx application. Let's go back to the terminal; actually I'll close this, so you'll be able to see that I'm no longer able to interact with that page. I'll close it and then let's write kubectl, and you can see that you now also have the kubectl command installed. Using kubectl you can pretty much interact with the cluster that you have installed with Kubernetes: it takes commands written on the command line and you'll be able to check live versions, apply configurations, update resources and so on. You have plenty of options and we will definitely explore kubectl. For example, if I write kubectl get nodes, you can see that I currently have one node, which is the minikube node that acts at the same time as a master and a worker. If you want to display cluster status information you can do kubectl cluster-info, and here you get the IP address of the control plane and the CoreDNS address as well. You can see that if I open that address I can get some metadata such as the API version, the kind, the status and so on. So let's now create another Deployment from the Docker nginx image, and I will create another terminal here so we can keep track of it. Let me increase the space, and from the other terminal let's run minikube dashboard so it starts here. Then I'll go back to the first one and I'll write kubectl create deployment my-app --image=nginx:latest. I made a small typo here, so the image should be nginx:latest; let's hit that. The other thing I need to do is change the name to my-app-1, because I already created my-app when I was testing this code. You can see that now we created my-app-1, and if I go here, right here, you can see that we have two ReplicaSets, my-app and my-app-1, which are pretty much the same; we have two Deployments, since we deployed this app, and we have four pods and two ReplicaSets in total.
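Since the spoken commands are easy to mishear, here is the sequence from this part of the demo written out; the deployment name my-app-1 is just the one used in the recording:

minikube dashboard                                   # opens the Kubernetes web dashboard in the browser
kubectl get nodes                                    # one minikube node acting as both master and worker
kubectl cluster-info                                 # control plane and CoreDNS addresses
kubectl create deployment my-app-1 --image=nginx:latest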
So if I select, for example, my-app-1, you can see the ID, the name and the age of my app, and also some metadata to identify it. Now if I write kubectl get deployments you can see that I have two apps, my-app and my-app-1. Let's check the additional information for my-app-1: you can do kubectl describe deployment my-app-1, and you can see here the deployment information such as the age, the conditions of the deployment, the strategy type, the labels and so on. You can also get the information for the running pods by running kubectl get pods --all-namespaces. If I make the window a little bit bigger, you can see that we have information about all the running pods, including the name, whether they're ready, their status and the number of restarts they've had; you can see that my-app-1, since we just created it, has zero restarts. Once that's ready, let's also try to get some information about the services: you can do kubectl get services, and you can see that we have one service right now, which is a ClusterIP that, if you remember from the graph, is listening on port 443 with the TCP protocol. You can also increase the number of replicas. Right now you can see that we have two replicas, but you can increase this number to control how much your set gets replicated: you can do kubectl scale deployment my-app-1 --replicas=3, and this scales your application; you can see that in my-app-1 we're now getting three pods instead of two, as we created more replicas. If you want, you can also do that by directly editing the deployment: you can do kubectl edit deployment my-app, and here you can see your file, so you can add and remove for example the number of replicas, the revision history limit, the maxSurge value and so on. This is a very powerful file and from here you can really control absolutely everything related to your app; you can see that right now I have a maxSurge of 25% and I can increase that from here. If you press i you are able to edit this file and insert new values, but I will not actually change anything now, so I will just quit; I just wanted to let you know that you can definitely use this file to modify your configuration. Now let's see how we can actually create web servers. You can do that with kubectl run web-server-nginx --image=nginx. Of course, I have to rename that, since this pod already exists for me, so I will just add a one here, and you can see that now we have a new web server called web-server-nginx-1 that is currently running. The way you can check that this is actually running from the command line is with kubectl get pods -o wide, and you can see here all the pods that are currently running: we have web-server-nginx and web-server-nginx-1, and they run along with the rest of our workloads here; adding the -o wide option will also show you extra details such as the node and IP of each pod. And finally, if you want to see the information for this web server, you can simply do kubectl describe pod web-server-nginx-1 — let me add some space here — and in that way you can get all the information for the web server that you just created.
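Again, to keep the garbled spoken commands straight, this is roughly what was typed in this part of the demo; the resource names are just the ones from the recording:

kubectl get deployments
kubectl describe deployment my-app-1
kubectl get pods --all-namespaces
kubectl get services
kubectl scale deployment my-app-1 --replicas=3       # scale to three replicas
kubectl edit deployment my-app                       # opens the live manifest in your editor
kubectl run web-server-nginx-1 --image=nginx         # create a standalone nginx pod
kubectl get pods -o wide                             # shows node and IP details for every pod
kubectl describe pod web-server-nginx-1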
So this is how, guys, you can run your Kubernetes cluster, review your applications from the Kubernetes dashboard, and also create new pods, replicas, servers and applications. I hope this video was useful for you and that you now feel a little more experienced and comfortable with Kubernetes; in the next sections we'll start exploring the security enhancements that can be done by applying DevSecOps to Kubernetes. Thanks for watching.

Hi everyone, welcome back to the channel, and in this section we will talk about Kubernetes security. Kubernetes has become a standard way of implementing applications in containers at scale and helps us handle complex container deployments. As Kubernetes grows and evolves, people have released their own solutions to many common problems that Kubernetes presents in production, so in this section we're going to look at those problems, how we can solve them, and how to implement Kubernetes security. Kubernetes and Docker are both pretty much a revolution in the world of computing, and that includes application development and specifically DevSecOps. Both technologies combined offer benefits like scaling and managing the implementation of an application or a service using containers, to the point of becoming a true standard for organizations. Like with any other infrastructure, we should always think about procedures while implementing those technologies and try to make them as secure as possible, to offer the best final performance. From the perspective of DevOps, Kubernetes has three main characteristics. Firstly, it operates in the DevOps model: the DevOps model implies that software developers assume greater responsibility for building and deploying their applications. Another characteristic of Kubernetes is the creation of common service sets: an application requests a service from another application, normally pointing to an IP address and a port number, and with Kubernetes we can build applications in containers that provide services which are available for other containers to use, specifically through those IP addresses and port numbers. And the final, very important characteristic of Kubernetes is the data center preconfiguration: Kubernetes aims to create consistent application programming interfaces, APIs, that result in a stable environment for running applications in containers. The developers should be able to create applications that work on any cloud provider that supports those APIs; this means that developers should be able to identify the version of Kubernetes along with the services tied to that version, and they shouldn't have to worry about the specific configuration of the data center. While Docker manages entities that we refer to as images and containers, Kubernetes wraps all those entities in what is normally referred to as pods; a pod can contain one or more running containers and is the unit that Kubernetes manages. Some of the main advantages of Kubernetes, mainly coming from the fact that it manages the containers as pods, are the following. The first is multiple nodes: instead of simply deploying a container on a single host, Kubernetes can deploy a set of pods on multiple nodes, where a node provides the environment in which a container is executed. The second advantage is replication: Kubernetes can act as a replication controller of every pod, and what this means is that you can set how many replicas of a specific pod should be running at all times; we'll look into that in the next videos. This will ensure that your application will
constantly be running, and in case something happens with one of the nodes, you can use the replicated ones. A Service in the context of Kubernetes means that you can assign a service a name, acting as an ID, together with a specific IP address and port, and then assign a pod to provide that service; Kubernetes internally tracks the location of the service so it can be used to redirect a request from another pod to the correct IP address and port. Today I also bring you some of the key concepts which you need to know if you are considering configuring Kubernetes. The first one is the Kubernetes controller: the Kubernetes controller acts as the node from which the pods, the replication controllers, the services and the other components of the Kubernetes environment are deployed and managed. You normally have to configure and run, usually via systemd, services such as the kube-apiserver, the kube-controller-manager and the kube-scheduler in order to create a Kubernetes controller node. Then you have the Kubernetes nodes, which provide the environment in which the containers run; to run a machine as a Kubernetes node, that machine should be configured to run the Docker, kube-proxy and kubelet services, and those services should be running on every node of the Kubernetes cluster. Then you need to configure the kubectl command: most of the Kubernetes administration is performed from the command line interface that Kubernetes has, and this is called kubectl. With kubectl we can create, describe, obtain or delete any of the resources that Kubernetes manages, so this is quite an important tool for managing your Kubernetes distribution. And finally, of course, you should have resource files: the kubectl command expects the information needed to create a resource to be in either a YAML or a JSON file, from which you can create a pod, replication controller, service or another Kubernetes resource. The standard way Kubernetes works is to configure a Kubernetes cluster that has a master controller node and at least two worker nodes, each running on a separate system; in the latest releases you can even split the master components across multiple nodes in order to have better control. The Kubernetes API, as well as the API exposed by the kubelet, should be protected to ensure that it is not accessed by unauthorized users that want to perform malicious actions. If unauthorized access is gained to one of the containers in a pod of a Kubernetes environment, the API can be attacked with very simple commands in order to visualize information about everything that is running in the entire environment. For that reason, security in Kubernetes should be focused on preventing image manipulation and unauthorized access to the entire environment. It is also essential not to deploy pods with root privileges, to check that the pods have defined security policies, and that Kubernetes is using Secrets for credentials and password management; I will show you how to do each of those things in this and some of the next sections. So that's it, thank you very much for watching guys, and in the next video I will show you some of the best Kubernetes security practices. Thanks for watching.

Now here we're going to start talking about security practices, or the best security practices, in Kubernetes. It is always advisable to follow some of the best practices at the security level, because in the case of Kubernetes, if some of the applications or containers are compromised, you can have security issues in the whole organization. One of the easiest ways to protect Kubernetes security is using
Secrets, so this is one of the best practices, especially if you don't want to store sensitive data such as passwords, SSH keys or tokens in a place that's accessible. The use of Secrets allows you to control how the sensitive data is used, and it significantly reduces the risk of exposing sensitive data to unauthorized users. Another way to make sure that your security is on point is to use firewalls on your ports; this is a security practice that is very frequently used, since it is not advisable to expose a port that doesn't need to be exposed, so it is best to define port exposure policies to prevent that from happening. The first thing you should do is to check which interface a service is listening on, or to define an IP to bind the service to, for example the localhost interface. Some processes open so many ports on all interfaces that it is better for them to sit behind a firewall towards the public network, because they expose confidential information and direct access to a set of computers; for that reason a firewall on the public side, which is the most secure option, should be implemented in those cases. Another very good option is to restrict the docker pull command: Docker is a resource that can sometimes be a little uncontrolled, just because it is so simple to use, so everyone who accesses the Kubernetes API or the Docker daemon can pull an image if they want, which could bring in traffic from infected images and cause serious security problems for your Kubernetes distribution; and trust me, this is quite a common attack, because recently many clusters have been turned into networks of Bitcoin miners. There is actually an image policy plugin that, if it doesn't solve this completely, can significantly reduce and restrict the image access, and it connects directly with the Docker API; this plugin adds a series of strict security rules that reflect a blacklist and whitelist of images that can be used. Another way is to use an image policy webhook with an admission controller: this will intercept all images that are pulled and take care of the security, just like the plugin that I already mentioned. You should always know the authentication and authorization mode that your system is using; this can be done by verifying the parameters, where you can also check whether anonymous authentication is enabled. It's important to know that this configuration will not affect the kubelet authorization mode, since the kubelet exposes an API of its own, from which it executes commands, and it can completely ignore those settings. When we talk about giving permissions in a Kubernetes cluster, we have to talk about role-based access control, which manages the security policies for users, groups or pods; it is implemented in a stable way in the latest versions of Kubernetes, and you can actually define those specific rules by using a manifest like the one shown below: you simply need to add the apiVersion of the RBAC authorization API, the kind, which is ClusterRole, and then the metadata and the rules, the security rules that you would like to apply to your project.
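The code from that slide isn't reproduced here, so below is a minimal hedged sketch of what such a ClusterRole could look like; the role name and the exact resources and verbs are illustrative assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader                  # hypothetical role name
rules:
- apiGroups: [""]                   # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only access to pods

Keep in mind that a ClusterRole on its own does nothing until it is bound to a user, group or service account with a ClusterRoleBinding.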
And finally, it is important to have good management of your resources and limits: it is important to manage the resources and limits that you're going to assign to your applications when creating containers in a Kubernetes infrastructure, especially in production. This is quite important at the security level, because a single container can generate a denial of service when sharing a host with other containers; in the creation of the pod we can easily control this through the requests and limits sections in the deployment file, a file that is quite similar to this one. So let's look at some of the specific security components that are built into Kubernetes, because Kubernetes already offers native security features to protect against some of the threats that I already described and to mitigate the potential impact of those risks. Some of the main safety features include role-based access control, because Kubernetes allows administrators to define Roles and ClusterRoles that specify which users can access which resources, in a single namespace or in the entire cluster; in this way role-based access control provides a way to regulate access to the resources. Another quite good safety feature is pod security policies and network policies: administrators can configure pod security policies and network policies, which pretty much place restrictions on how containers and pods can behave. For example, pod security policies can be used very well to prevent containers from running as the root user, while network policies can easily restrict the communication between different pods. And finally, another very nice built-in feature is network encryption: the standard encryption used in Kubernetes is TLS encryption, which is set by default and provides additional protection by encrypting the network traffic. All three of those Kubernetes security features provide a very good defense against certain types of attacks, but of course they do not cover all threats, so I also provide you a list of threats from which Kubernetes doesn't offer native protection. The first one is malicious code or incorrect settings inside containers or container images: a third-party container scanning tool should be used in order to scan the containers, because Kubernetes doesn't have a built-in feature for that. Another one is security vulnerabilities in the host operating system: because Kubernetes pretty much doesn't control the operating system it runs on, you should make sure that you have other tools to control that; some Kubernetes distributions like OpenShift integrate security solutions like SELinux in order to protect the Linux kernel, but this is not a core feature of Kubernetes itself. Kubernetes also doesn't offer support for finding container runtime vulnerabilities, because Kubernetes has no way of alerting you if a vulnerability exists in the runtime or if an attacker is trying to exploit a vulnerability at the time of execution. Another risk is Kubernetes API abuse, because Kubernetes pretty much doesn't do anything to detect or respond to API abuse beyond the security policy settings that you can define. And finally, it doesn't have management tools for finding vulnerabilities or configuration errors, because Kubernetes cannot guarantee that management tools such as kubectl are free from security issues. Normally a secret is everything that nobody else in the cluster should know, neither the rest of the applications nor the users that access the cluster: for example a password, a certificate from a store, an API key and so on. Let's say that someone creates those resources along with certain permissions; from there, it is the application that requests those secrets from Kubernetes by presenting the information that authorizes it to consume those resources. So authorization management is done through what is also known as role-
based access control: this means that the application can access certain types of resources only if it has a certain role. So you can see that using Secrets will pretty much allow you to control how sensitive data is used and significantly reduce the risk of exposing sensitive data to unauthorized users. This information is often placed in the pod specification or in the container images, and a secret can be generated both by the user and by the application. Here you can see the basic model of how secrets are stored in the cluster: you can see that the pod and the secret are in two separate places, and once you have both of those you can go to the node and from there to the database. Other interesting facts about secrets are that secrets are pretty much namespaced objects, so they exist in the context of a namespace, and you normally access them through a volume or an environment variable from the container that is running in the pod. So I would say let's see some examples of how you can create a secret with a username and password for a PostgreSQL database. Let's open a terminal and make sure that Kubernetes is running in the background, and now let's create some files: let's do echo -n user and save that into user.txt, and now let's create a password, so let's do echo -n password and save that into a new file called password.txt. Let's now create a Kubernetes secret from both of those files. Actually, before executing the command, make sure that you also have Kubernetes running, so let's open a new terminal; you can do that with the Minikube that you already installed, so let's do minikube start --wait=false. You can see that your Minikube distribution is starting, and now you can see that Minikube is running. So let's go back to the terminal and do kubectl create secret generic db-user-password --from-file=user.txt --from-file=password.txt.
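Written out cleanly, the flow of this little demo looks roughly like this; the literal values "user" and "password" and the secret name are just the ones used in the course:

echo -n user > ./user.txt                      # -n avoids a trailing newline ending up in the secret
echo -n password > ./password.txt
kubectl create secret generic db-user-password \
  --from-file=./user.txt --from-file=./password.txt
kubectl get secrets                            # the new secret shows up with DATA = 2
kubectl describe secret db-user-password       # shows the keys and their sizes, but not the values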
So let's hit enter now — I had typed "form" instead of "from" at first, so make sure it is --from-file — and you can see that for me this secret already exists, but for you this should be a successful creation. I will just edit the name a little bit and actually create another secret, since I already have this one: let's call it db-user-password-s. You can see that the secret has been successfully created on my device, so if I now do kubectl get secrets, you can see the secret that I just created right here; it was created 15 seconds ago, as you can see, and under DATA you can see two, since we used two files. If you want to see the definition of that secret you can do kubectl describe secret db-user-password-s, in my case, and you can see that this secret has two entries, one of 8 bytes and the other one of 4 bytes; both of those come from the text files, the password and the user, that we just used. Once the secret is created, there are pretty much two ways to consume it: you can mount it as a volume, or you can access it from the pod's environment. We're going to use the pod, and in order to do that we will need to create a YAML file with all the creation parameters. So here is the deployment-pod.yaml file, and it has a pretty familiar structure: what we're doing here is defining the kind, defining the apiVersion, which is v1, the name, the app that we want to use, and of course we're also defining the spec, the image, and, as you can see, the values that we want to use — the user.txt and password.txt entries from the secret. You can copy that file from here and just type the information that you can see, or I also provide this file as source code in this section, so you can use it and see how it works. What you simply need to do is to go to the directory where you have this file — for example I have it right here, here it is, deployment-pod.yaml — and then you just need to run the command kubectl apply -f deployment-pod.yaml. Let's hit that, and you can see that now we have the deployment app that has been deployed into Kubernetes, so the deployment is being created. If you go to another terminal and write minikube dashboard, you'll actually be able to open the Kubernetes dashboard and you will see your PostgreSQL deployment app right here; you can see that you're using the postgres image and also that your app is using the postgres label. Let's go back, and here you can see the deployments, the jobs, basically everything you need to see from your Kubernetes. I will keep that dashboard open and I will go again to the terminal, where I will actually run another file that will be called secret-pod.yaml; let me show it to you. This is another file, which will pretty much run an alpine image and will make sure that our secrets are mounted inside a secret volume, which will help them not to get accidentally leaked. Again, I'll provide you this file in our source code, so let's actually run it: let's do kubectl create -f secret-pod.yaml.
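The exact file from the course isn't reproduced here, so below is a hedged sketch of what a pod like that secret-pod.yaml might look like; the pod name, image, command and mount path are assumptions for illustration, and the secretName points at the secret created above:

apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-pod
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "3600"]           # keep the container alive so we can exec into it
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume      # the secret keys appear here as files
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: db-user-password-s     # the secret created earlier with kubectl create secret generic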
So let's hit enter right here, and you can see that now we created a new pod: before, we were creating the deployment, but now we create a new pod and we apply our secrets there. If I go here and scroll up, you'll see that we have our pod up and running; you can see that the light is green, so it is properly running in our Kubernetes cluster. Once this pod is available, you can actually list the mounted secrets as if they were regular files: if I do kubectl exec -it secret-vol-pod -- ls /etc/secret-volume, you'll be able to see the files from which we created the password and the user. So this is how, guys, you can create secrets, but let's see some additional strategies that we can follow in order to manage the risks that put an application running in Kubernetes in danger when you want to run it in production. You should always try to integrate security from the early stages of development: with Kubernetes it is necessary to integrate security at each stage of the software development process, and it is a mistake to leave the security settings for the last step, when it might pretty much be too late. You should also consider a commercial Kubernetes platform: when you adopt a commercial Kubernetes platform, the most important benefit you get is rapid, structured responses from the vendor's development team for any threat or problem that you face; you will be updated quickly for any vulnerability and you will also have the latest security policies implemented. Even though such a version might be paid, if you're running a business long term, this might save you more than it actually costs you. And finally, don't trust old security tools or practices: this is because, just like you, the attackers update faster than the software, so some of the technologies that are outdated might not work against new attacker techniques. You should not assume that your conventional security tools will always protect you; many open-source tools evaluate Kubernetes clusters or perform penetration tests on the clusters and the nodes, and it is always necessary to keep your software updated and patched. Some of the most common security risks normally appear, for example, in the containers, because they can contain malicious code that was included in the container image; they can also be subject to misconfiguration and allow attackers to gain unauthorized access under certain conditions. Also keep an eye on your host operating systems, because it is quite common to find vulnerabilities or malicious code within the operating system installed on the Kubernetes node. Kubernetes also supports a variety of container runtimes, and all of them can contain vulnerabilities that allow attackers to take control of individual containers, escalate attacks from container to container, and even gain control of the Kubernetes environment. And finally, you should also know that Kubernetes always relies on internal networks to facilitate communication between nodes, pods and containers, and it also quite often exposes applications to public networks so that they can be accessed over the internet; both network layers, internal and external, can allow attackers to gain access to the cluster or escalate attacks from one part of the cluster to another. That's it, thank you very much for watching this video guys, it was a long one, but I'm sure that you learned quite a lot about the most common Kubernetes vulnerabilities and how to manage and
create Kubernetes secrets. Thanks for watching, and I will see you in the next video.

Hello everyone, today we're going to talk about how you can analyze the security of the Kubernetes components, and those components are the pods, the containers and the other elements of the Kubernetes architecture. As you already know, pods are actually the main component of Kubernetes and represent one or more containers that share a network; for that reason their security is very important, and it needs to be addressed in one of the first steps of the cluster design, using security policies. If you want to check out some examples of how you can apply Kubernetes security policies, you can look at the pod security policies page on kubernetes.io; you can see that today we actually use Pod Security Admission and also third-party security plugins, so definitely feel free to check out both Pod Security Admission and the pod security policies, since those are the core for understanding security for pods. According to the official documentation, a pod security policy is a cluster-level resource that pretty much controls security-sensitive aspects of the pod specification. Those security policies are normally defined with a PodSecurityPolicy object, and using that object we can define conditions that a pod must meet in order to be accepted by the system; it also allows you to define default values for fields that are not specifically assigned. As you can see, to define a pod security policy you use a manifest YAML file that normally has the type of your object, the metadata, the specification, and supplemental settings such as users, groups and host ports. Even though this policy type is deprecated, it is still important for you to see how a policy YAML file looks and how the different fields are defined. The pod security policy normally allows administrators to control whether containers run in privileged mode, and to do that there are four fields that allow us to define the behavior of a container with respect to access to certain parts of the host. Those fields are: hostPID, which controls whether the pod's containers share the same process ID space as the host; hostIPC, which controls whether the containers in the pod share the same IPC space as the host; hostNetwork, which defines whether a pod can use the host's network namespace, which also implies that the pod would have access to the loopback device and to processes running on the host; and finally the field hostPorts, which defines the range of ports that are allowed in the host network space — this range is given by the hostPortRange field, which has min and max attributes that define the range of the ports. The other key area is the volumes and file systems: here you have volumes, which provides the list of permitted volume types that the pods are allowed to use; then you have fsGroup, which allows you to indicate the groups that apply to certain volumes; then you have the field allowedHostPaths, which specifies a list of host paths that are allowed to be used by the volumes (and, for example, if you pass an empty list here, that implies there are no restrictions); and finally you have readOnlyRootFilesystem, which pretty much requires the containers to run with a root file system in read-only mode. Then you have the users and the groups, and these settings have the following parameters: you have runAsUser, which will
control which user ID the containers run as inside the pod, and also runAsGroup, which specifies which group ID the containers run with inside the pod. So here the difference is pretty much that runAsUser sets the user identity of the container processes, while runAsGroup sets their group identity, so with a single policy you can force all the containers in a pod to run under specific IDs. And finally we have the privilege and capabilities settings. For example, the privilege escalation control will prevent a process from changing its effective user ID and will also prevent it from enabling extra capabilities, so your system will be more secure; on the other side, the capabilities are pretty much a series of superuser privileges that can be enabled or disabled independently. Let's look at the fields for privilege and capabilities. The privilege escalation is defined by the allowPrivilegeEscalation field, which specifies whether or not to set that option in the security context of the container; by default this parameter is actually equal to true, to avoid any compatibility problems. Then you have defaultAllowPrivilegeEscalation, which allows you to set the default value of the allowPrivilegeEscalation option, so those two parameters are pretty much dependent. In the capabilities section you have the allowedCapabilities field, which is pretty much a list of capabilities that can be added to the container; as you know, a set of capabilities is added by default, so if this field is empty it implies that you cannot add capabilities beyond the ones that are already there by default. And then you have the requiredDropCapabilities parameter, which is the list of capabilities that must be removed from the container: they will be removed from the default capability set, and these capabilities must not be included in the allowedCapabilities list. Two other very useful types of controls are the liveness and the readiness probes. Health checks are very important in Kubernetes, and for that reason it is a very good practice to define liveness and readiness probes. For example, the liveness probe is used to check if the application is still running or has stopped: Kubernetes normally doesn't do anything if your application is running successfully, but it will actually launch a new pod and run the application there if your application has stopped. The readiness probe, on the other side, is used to verify that the application is ready to start receiving traffic: Kubernetes will not send traffic to the pod while its readiness check is failing.
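Since the PodSecurityPolicy API described above is deprecated in current Kubernetes versions, here is a hedged sketch of how the same ideas — non-root users, no privilege escalation, dropped capabilities — plus the liveness and readiness probes can be expressed directly in a pod's own securityContext; the pod name, image choice, port and timing values are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-web
spec:
  securityContext:
    runAsNonRoot: true               # refuse to start containers that would run as root
  containers:
  - name: web
    image: nginxinc/nginx-unprivileged   # assumed non-root nginx build listening on 8080
    securityContext:
      allowPrivilegeEscalation: false    # processes cannot gain more privileges than they started with
      capabilities:
        drop: ["ALL"]                    # drop every default capability
    livenessProbe:                       # restart the container if this check keeps failing
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:                      # don't route traffic to the pod until this passes
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5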
That's it, thank you very much for watching guys, and I will see you in the next section.

Hi everyone, and welcome to this section where we will talk about auditing and analyzing Kubernetes vulnerabilities. In this first video from the section we will talk about a tool called kube-bench, and I will show you how it works and how you can use it on your Kubernetes installation, so let's get started. Here in this video we will see how you can apply the CIS Benchmark standards with kube-bench. The CIS Benchmarks are security standards for different systems that are maintained by the Center for Internet Security; this center aims to make our operating systems much stronger by identifying vulnerabilities. Compliance with those standards is quite common, especially for environments that are for government use, so if you are concerned about security it will always be the right thing to do to make sure that we're meeting the CIS Benchmarks. We can actually use a tool called kube-bench to verify that we're meeting the CIS Benchmarks, and this is a tool that will pretty much automate the entire process of validating the CIS Benchmarks for Kubernetes. We can normally install this tool by using the container from Docker Hub, and I will show you in this video how you can install it. This tool pretty much supports tests for multiple versions of Kubernetes that are defined in the CIS guidelines, and the easiest way to run it is by launching it inside a Kubernetes cluster, so let's see how this is done. What I want you to do is to simply go to the kube-bench repository on GitHub and download it as a zip file, and I will show you how you can navigate to this zip file and very easily launch kube-bench on your device. As you can see, here I have kube-bench downloaded on my device: I extracted the zip file and this is the folder where I have it. Now make sure that you have Docker and Kubernetes running: you can run Docker by clicking on the icon, and you can run Kubernetes from Minikube as we did before, with that command here. Once you have that up and running, go to the terminal and navigate to the folder of the kube-bench download; since mine is on the desktop, I will do cd Desktop and then cd into the kube-bench-main folder. Then let's run the following Kubernetes command: kubectl apply -f job.yaml. The job.yaml is a file that is already in the kube-bench-main folder, and this file is used in order to run kube-bench; let me show you how it looks. This file is pretty much a Job component, and here you have everything that you need for running kube-bench. So let's run that command, and now you can see that a kube-bench job has been created; you can easily see it by opening the Minikube dashboard. Let me show you how this is done: you open a new terminal and run minikube dashboard, a new tab will be opened with the dashboard, and you can see that we have one job scheduled with kube-bench. So you can see that kube-bench is now actually up and running on your device and you can use it. Once you ensure that this is up and running, let's go back to our terminal and write kubectl get pods, and you can see that here I have a couple of pods running, and one of them is the kube-bench pod; you can verify that it is completed. Since our status here is Completed, we can do kubectl logs and then the kube-bench pod name with its ID here — you can even copy and paste that, simply Ctrl-C and then Ctrl-V. Let's run that, and you can see that we've got some output: kube-bench performed its checks and reported that 61 passed, 12 failed, and we got 52 warnings. If you scroll up you can actually see all the information that kube-bench provides to you. There are many points, and those are the different vulnerabilities that could be found in Kubernetes; what you simply need to do is go over them and make sure that you follow them, because they really give you straightforward instructions, with commands and ways to improve your security. You can see they go from section 1 to section 5, I think, and after every section you'll see how many checks have passed: for example, from section 2 we have seven tests passed, and from section 1 we have 38 passed but 11 failed, which means there are 11 checks we could work on in our system.
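To recap, the whole kube-bench flow from this demo boils down to a few commands; the pod name suffix is random, so replace it with whatever kubectl get pods shows for you:

kubectl apply -f job.yaml            # job.yaml ships inside the kube-bench repository download
kubectl get pods                     # wait until the kube-bench pod shows the Completed status
kubectl logs kube-bench-<suffix>     # print the full CIS Benchmark report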
You can pretty much see the pass and fail results in most cases on the left-hand side, and then on the right-hand side you can see the configuration related to those passes and fails. So if you want to use kube-bench in order to improve your Kubernetes, you can definitely run it and go over the instructions, specifically over the checks that have been failing. This is how, guys, you can install and use kube-bench to see how your system currently performs and to get very good, very direct instructions on how you can apply the CIS guidelines to your Kubernetes. That said, thank you very much for watching the video, and in the next one I will mention some very useful Kubernetes security projects that you can use in your system. Thanks for watching.

Now let's see some of the other security projects that are used for analyzing vulnerabilities in Kubernetes. In this lesson we'll review some different security projects that can help us secure a Kubernetes cluster and that will also offer the best possible performance for our infrastructure. The first one is called kube-hunter; you can find the complete code in the kube-hunter repository on GitHub. It basically represents a security framework that takes into account that Kubernetes clusters are built on a set of nodes or services, in which at least one has to take the role of a master while the rest of those nodes are workers, and they have visibility of each other in order to communicate. kube-hunter is a Python tool that was developed to analyze the potential vulnerabilities in a Kubernetes cluster; like most of the other tools, it relies on known attack vectors and information about the attack surface. It allows you to perform a security vulnerability analysis of your Kubernetes installation, it supports remote, internal or CIDR scanning of a Kubernetes cluster, and it can be run locally or through the deployment of a container that is already prepared. In order to install kube-hunter on your device you can simply do pip install kube-hunter; hit that, and you can see that for me the requirement is already satisfied, but for you it will run through the installation, since you don't have kube-hunter installed yet. So this is how you can install it, and then you can simply follow the instructions in the Git repository. Another quite useful tool is called kubesec, and kubesec allows you to analyze and quantify the security risk of Kubernetes resources; it can run against your live Kubernetes resources, such as deployments and pods, and there is also a specific plugin if you want to run it from the command line with kubectl. Some of the main kubectl plugins that you can use directly from the command line are, firstly, kubectl-trace, which you can find at the link here: this tool creates probe points in order to detect problems and make an in-depth analysis of your Kubernetes infrastructure. Another quite popular tool is kubectl-debug, which is a plugin that can be perfectly combined with kubectl-trace and is used for debugging tasks: it allows you to execute a container within a pod that is already running, and it normally shares the namespaces of the processes, network, user and pretty much everything related to the container, so all of that information can be analyzed and debugged. Another quite interesting tool, which you can also find and download from GitHub, is the ksniff tool: this lets you analyze all the network traffic of a Kubernetes pod using a
It does this using tcpdump together with Wireshark. If you haven't worked with Wireshark, it is a tool that allows you to explore the traffic of any type of application, showing you TCP, SMTP, and pretty much every other protocol on your network, and you can apply that to Kubernetes too. ksniff takes the data collected by tcpdump for a pod and then sends it to Wireshark to perform the analysis. This plugin is essential if you're working with microservices, since it is very useful for identifying errors and problems between them and their dependencies. Then you have kubectl-dig, which is another plugin for kubectl; sometimes getting information from a Kubernetes cluster requires several commands, which in turn return all kinds of information, and this plugin has a quite nice, user-friendly interface from which you can easily see the information related to a Kubernetes cluster. So let's actually look at one of the most used security tools for Kubernetes, and this is kubestriker. This tool was created to tackle the key Kubernetes security issues caused by misconfigurations, and it really helps strengthen the overall IT infrastructure of any organization. It performs in-depth checks on a range of services and can be used on either self-hosted Kubernetes or on Amazon, Azure, or Google Cloud; it will pretty much identify miscellaneous attacks and make it easy for the organization to spot them. So it really helps you guard against potential attacks, whether you're using Google Cloud, AWS, or Azure, and it provides very nice visualizations and helps prevent attackers from advancing their attacks. Let's see how you can actually install and use this tool on your device. Open a few terminals, and I want you to run a Kubernetes cluster and Docker as well, so let's run Docker first and then minikube start. Once Docker is up and running on your device (let's wait a couple of minutes), run Minikube, then open another window, and once this is installed run the Minikube dashboard: just write minikube dashboard, that's it. Once you make sure that you have all that, you can install kubestriker. In order to install it you need to run this command, which I'll provide for you; the only difference from my installation is that you need to specify your own path here, with your username. This pretty much says where you want to save your installation, and it creates a volume; you don't need to use the specific directory that has my name on it, you can use any, but please make sure that you keep the lines that come after it. So let's hit Enter here, and you can see that kubestriker started immediately right here, and if I check in Docker, there is a kubestriker container that is up and running. Then run python -m kubestriker, and this will actually start the kubestriker engine, as you can see. From here you can choose different options: a URL, an IP, a config file, or an IP range. Let's choose IP, and the tool will ask you to provide one; you can get an IP by opening another terminal and simply writing kubectl get pods -o wide. This will show all the pods that are currently running, and you can see their IPs from here. I'll copy one IP and paste it here, and now you can see that our vulnerability checks start: we're performing service discovery, and the scan of the Kubernetes cluster and its services begins.
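If the docker-based setup shown above gives you trouble, here is a hedged alternative sketch based on the project's pip route; treat it as an approximation and check the kubestriker README for the exact package name and commands.

```bash
# Alternative, hedged kubestriker setup (the video uses the docker-based install instead).
pip3 install kubestriker        # assumes Python 3 and pip3 are available
python -m kubestriker           # starts the interactive prompt (URL / IP / config file / IP range)
# in a second terminal, list pod IPs you can feed into the scanner:
kubectl get pods -o wide
```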
You can see we're scanning the etcd clients, kube-proxy, and so on; this tool will pretty much automatically perform a good check of your Kubernetes services, and then it will save all your results in a target file. This file will normally be saved in the same directory in which you opened the application, so in my case it will be saved on the desktop. Now, from the other terminal, if I go to the desktop and do ls, you'll see that we actually have a .txt file named after the same IP address. So you can go to your desktop and just open that file, and from there you can check whether any endpoints were found on this IP; since the IP that I used doesn't have any endpoints, the file simply shows the scan and nothing below it, because there are currently no endpoints on this IP. Make sure that you also explore the rest of the options of this tool, because it's really great; if you need an IP, you already saw how you can get one, so explore the different commands. That would be everything for this video, guys, thank you very much for watching, and in the next one we will talk about how you can analyze Kubernetes vulnerabilities. So let's see how you can actually analyze the vulnerabilities that have already been found in Kubernetes. In this video we'll review some vulnerabilities that we can find in Kubernetes and show you how those security vulnerabilities were solved. In the same way as we did for Docker, you can see all the vulnerabilities related to Kubernetes on the cvedetails.com website, where you can simply go to the Kubernetes page and see all the vulnerabilities found throughout the years. For example, you can see that in 2022 there were still about 11 vulnerabilities found in Kubernetes, while the worst year was probably 2019, when 16 vulnerabilities were disclosed, and there is a very good graph by year, so you can get a good understanding of how Kubernetes has been doing over time. Maybe one of the most critical vulnerabilities for Kubernetes is CVE-2018-1002105, found in 2018 and identified in the Kubernetes API server. This vulnerability would allow any authenticated Kubernetes user to obtain administrative access to the cluster, and it allows the escalation of Kubernetes privileges through specially crafted proxy requests. If you want to see which Kubernetes versions are affected by this vulnerability, you can, for example, go to the Red Hat website: there you can find this vulnerability, and if you scroll down you'll see all the versions in which it was fixed and, if you go back, the versions in which it was present. So sometimes it is a good idea to check the website of your operating system vendor, so you can see whether a vulnerability is still present in that system; and if you know that your company could be at risk because of a certain vulnerability, those vendor websites will tell you whether or not it is present in your setup. This vulnerability has already been solved by the Kubernetes development team, and it is recommended to update your Kubernetes version to the latest one so you have it fixed; you can find the information about the fix on github.com, where the Kubernetes team has explained very well what the fix is and how it might affect the operating systems that people are using. Another critical vulnerability was actually found in 2020.
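Before looking up a CVE like this one, it helps to know exactly which Kubernetes version you are running so you can compare it against the fixed versions listed in the advisory; a minimal check from the terminal looks like this.

```bash
# Check the running Kubernetes versions before comparing them with a CVE advisory.
kubectl version              # client and server (API server) versions
kubectl get nodes -o wide    # kubelet version for every node in the cluster
minikube update-check        # for minikube users: installed vs latest minikube version
```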
This vulnerability enables an attack called a man-in-the-middle attack: during this attack, the attacker can intercept the network traffic going to a pod in the Kubernetes cluster and in that way find quite valuable client information. The vulnerability was found in all versions up to and including version 0.8.6, and it would allow malicious containers in a Kubernetes cluster to be used to perform these attacks. To do that, the malicious container can send a flood of IPv6 router advertisements to the host or to other containers and redirect traffic to the malicious container, from where the attacker can extract all the information they need. The last vulnerability that I want to discuss with you, guys, is the one related to the Pod Security Policy. This vulnerability allows you to bypass the Pod Security Policy on a Kubernetes cluster. In Kubernetes, the Pod Security Policy is one of the resources that allows the admission controller to decide whether a pod can be created by a service account, depending on the configuration; for example, if privileged pods are not allowed by the Pod Security Policy, any pod that tries to run in privileged mode from that service account will fail. This usually works, but some security audits have found cases in which a hostPath volume is mounted instead of using a persistent volume, and the restriction is not taken into account if you're working with a PersistentVolumeClaim, or PVC. The result is that any user could mount a directory of the host machine from the container and gain access to the file system, managing to escape from the container. The solution to this vulnerability was to have the Pod Security Policy also limit the allowed types of volumes and to only give users access to the cluster resources they actually need. So this was everything for this video, and for this section actually; I hope you're now aware of some of the key vulnerabilities and also of how you can audit and analyze your Kubernetes vulnerabilities from the terminal. That's it, thank you very much for watching, and I'll see you in the next video. Hi everyone, and welcome to this section, where we're going to talk about how you can monitor Kubernetes using the Kubernetes dashboard. Cluster observing and monitoring is one of the most important parts of maintaining applications, because it is very important to get metrics on the application behavior, and monitoring is an essential part of the infrastructure. Thanks to monitoring we can obtain the information needed to take scaling measures, and this helps us understand what is happening in our cluster. By definition, monitoring is a real-time process that includes the collection, processing, and analysis of any data in the system; it saves money, and it involves many aspects, from knowing the status of the infrastructure to having a complex system that can anticipate any type of event, depending on what your company needs. Monitoring is something purely technical, and it has become a very good way of obtaining valuable information, improving the conditions for the customers, and reducing cost; it helps us evolve our products and even create new ones. When constructing a new monitoring system, it is important to take into account the objectives that we have for the users and for the information we want to gather; once this is identified, we can define the metrics and the tools that help us collect data from each part of the application.
Currently, the development of microservice architectures is one of the strongest drivers for monitoring. A microservices architecture can grow rapidly, so we need to know that everything is working correctly; it is also important to determine whether our system is degrading or whether, for example, we're not capable of meeting our service level agreements. The monitoring of our system constantly provides us with metrics and analyses to verify correct operation, and in recent years many platforms have emerged that let us know whether everything we've built is working as it should. At this point, observability is considered the new form of monitoring. Observability is all about when and why an error occurs, and it has four fundamental components. The first one is open instrumentation, which collects vendor-specific or open-source telemetry data from a service, host, application, container, or any other entity that produces data; this enables full visibility into critical infrastructure and applications, and it also prepares teams for the future as you introduce new products and data types into the system. Then we have correlation and context, because the collected data must be analyzed so that all data sources can be connected; you also need to incorporate metadata to allow correlation between the various parts of the system and your data, and together with those actions you also need to create a context to shape the monitoring. Then we have programmability, because organizations need the flexibility to create their own context with custom applications based on their goals; for example, an application can help teams calculate and visualize the impact of errors on the end user. And finally, something that's a bit separate from the first three, is artificial intelligence for IT operations: unlike traditional incident management tools, these AI solutions use machine learning models to automate the IT operations process, so incident data can be automatically correlated and prioritized. Now, the Kubernetes cluster itself already exposes cluster metrics, and Kubernetes has the Metrics Server, which is an aggregator of data on resource usage; kube-state-metrics, for example, exposes data obtained from the Kubernetes API, so other tools like Prometheus, which we will talk about in this section, can collect this data and consume the API. Here's how it looks: the cluster has the Metrics Server, the Metrics Server is connected to the command-line interface, and then we have the API server, which sends our data to external tools; the API server would then communicate and exchange information with, let's say, Prometheus. Monitoring the containers allows us to know the status of each container individually, but the problem with having multiple containers arises when creating a cluster: reviewing all of them can be quite difficult, and sometimes you can be exposed to quite a few errors. Within the Kubernetes cluster, multiple objects can be running consistently, so a single namespace with a service included can be monitored; in addition, the containers might disappear between the moment the error occurs and the moment you go to debug it, so creating log files is quite important to keep track of what was going on with your system. Despite this, Kubernetes has a great capacity for automatically recovering from failures, such as restarting a pod or balancing the load between the different nodes; however, sometimes this is not enough and the process must be performed manually, and for those cases it is necessary to monitor the execution of the cluster using different tools.
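To tie this back to the Metrics Server mentioned above, here is a minimal sketch of turning it on in Minikube and pulling basic resource numbers from the command line, assuming the metrics-server addon is available in your Minikube version.

```bash
# Enable the Metrics Server aggregator in minikube and query basic usage data.
minikube addons enable metrics-server
kubectl top nodes                     # CPU and memory usage per node
kubectl top pods --all-namespaces     # CPU and memory usage per pod
```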
There are two main factors in deciding how to monitor: first, what can be monitored, and second, how to do it. Kubernetes itself already tells you what to monitor. The first thing is CPU usage, and in Kubernetes you can see the CPU usage when you're deploying a container or a cluster. Then memory usage is quite important, because it shows the amount of memory available and the amount used, both free memory and cached memory. Kubernetes also indicates disk usage, and a lack of disk space can cause a failure in the execution of a program, so we should always keep an eye on it. It also gives us information about network bandwidth: although it seems impossible to consume all the bandwidth, it is important to watch for suspicious behavior, because sometimes you can get an attack that tries to fill your network bandwidth completely. And finally, it shows pod resources, so you can assess the different resources that a specific pod is using, the information used by the scheduler, the placement of the pods, and where you have available resources. The easiest way to monitor these things is to use the Kubernetes control panel, or dashboard, so let's see how we can actually access and use it. Since we are already using Minikube, if I go to the terminal you'll see that we already launched the dashboard in the previous section; you can simply do minikube dashboard, and this will open a page with a dashboard of all the services that you are currently using. If you completed the previous sections, you already have information in your dashboard, because we have created quite a few applications so far. From the dashboard, it is quite nice that you can perform certain actions without needing to interact with the command line: if I click, for example, these three dots here, you can pretty much restart, edit, scale, or delete your deployments, your jobs, your pods, and your replica sets. For example, if I want to delete this replica set, I can hit Delete, then Delete again, and after a certain period of time it will be completely removed from our device. Another interesting thing is that if you click, let's say, the Delete command, you can see the exact action that corresponds to it; it would be the same as running it from Minikube, so this is performing exactly the same action as the one you would perform from the command line. Let's see if our dashboard is actually showing the correct information. If I open a terminal, let's say right here, and check for example the number of nodes with kubectl get nodes, you can see that we have one node, which is the Minikube node, and if we do kubectl get pods, here are the pods: we have about seven pods, and if you go to the dashboard you can see exactly the same number, with five running, one scheduled, and one failed. If you go into the deployments, let's for example go to our my-app deployment right here, and when you click it you can see when it was created, how many days ago, its ID, the status of the pods, and some additional information. If I go back, I can also see the image, which is nginx, I can see what the app is called and when it was created, and if I click the options here you'll see the same actions available.
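For reference, these are roughly the kubectl equivalents of the dashboard buttons we just clicked; my-app and the replica set name are placeholders for whatever you actually have deployed.

```bash
# Command-line equivalents of the dashboard actions (names are placeholders).
kubectl get deployments                      # list the deployments shown in the dashboard
kubectl edit deployment my-app               # open the deployment YAML in your editor
kubectl scale deployment my-app --replicas=3 # the dashboard "Scale" action
kubectl rollout restart deployment my-app    # the dashboard "Restart" action
kubectl delete replicaset <replicaset-name>  # the dashboard "Delete" action on a replica set
```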
For example, if you hit the Edit button, you'll be able to see the YAML file, which is the file that actually creates that deployment, and you can edit it: you can edit your application directly from this file. This is of course a bit dangerous, because you must be careful not to make any typos when editing it, and as you can see, you can also view the file as JSON; if you're more familiar with JSON, you can edit it from there too. So this is how you can edit your deployment; after the update, your app will be restarted and you'll be able to see the new features defined in that file. You can also view all of your deployments, jobs, pods, and replica sets separately, each on a dedicated tab, and if you go to Cluster you can click on Namespaces to see all the namespaces that currently exist on your device, as well as the nodes, which, as we saw, is just Minikube. So I would advise you to really explore the dashboard yourself too, so you can spot features that might be suitable for you, your job, or your company. That's it, thank you very much for watching, guys, and I will see you in the next video. So once you're familiar with the dashboard, let's talk about Prometheus. Prometheus is an open-source monitoring and alerting toolkit that was developed in 2012 at a company called SoundCloud, and it has been a fully open-source project since 2016. Prometheus constantly reads the required data from your Kubernetes installation by talking to the API, and the software can send alerts, according to preconfigured rules, to a component called the Alertmanager. The Alertmanager is in charge of managing the alarms, receiving them and sending them to another application that is in charge of transmitting the message; it is also possible to use other software to view the data that comes from Kubernetes. Not only can Prometheus manage alarms, it can also group them and send them to other applications. If you want to check out the Prometheus project page, you can go to github.com/prometheus/prometheus, and from there you can get the tool. Prometheus allows two types of rules to configure and evaluate your application at predefined intervals: recording rules and alerting rules. Recording rules let you precompute expressions that are needed repeatedly or are computationally expensive and save the complete result, while alerting rules let you define conditions under which the program will send you a notification; these rules have to be written in the PromQL language for Prometheus to be able to understand and evaluate them. The software contains a local on-disk database to store the corresponding data, but it can also be used with remote storage systems. There are several components that make up a Prometheus setup: the first one is the Alertmanager, which manages the alerts sent by the application or by the Prometheus server itself, and the second one is the Prometheus Operator, which provides monitoring definitions for Kubernetes services and the Prometheus deployment, making the configuration cluster-native by managing the necessary instances. One of the outstanding advantages of Prometheus is its query language, which is quite flexible; it also has a pull model for metric collection and a service discovery mechanism for the targets, which really helps facilitate integration with tools such as Kubernetes. So let's have a look at the Prometheus architecture. Prometheus works well for recording numerical time series.
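To make the alerting rules mentioned a moment ago more concrete, here is a hedged sketch of a tiny rule file written from the shell; the group name, alert name, and threshold are made up for illustration.

```bash
# Minimal example of a Prometheus alerting rule file (names and threshold are illustrative).
cat > alert-rules.yml <<'EOF'
groups:
  - name: example-alerts
    rules:
      - alert: TargetDown
        expr: up == 0        # PromQL: the scrape target is not responding
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "A scrape target has been down for more than 5 minutes"
EOF
# Reference the file under "rule_files:" in prometheus.yml, then restart Prometheus.
```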
It fits both machine monitoring and the monitoring of microservice-based architectures. One of its main features is that it provides multi-dimensional data and a very powerful query language called PromQL. It also collects information from more than 5,000 metrics automatically, with zero configuration, zero dependencies, and zero maintenance, and it offers four types of metrics: counter, gauge, histogram, and summary. Prometheus is made up of multiple components, which include the Prometheus server, the core and main component, in charge of collecting and storing application metrics in the time-series database. Then you have service discovery, so it can discover applications automatically in real time, which is essential when working with containers that are constantly changing their IP addresses. It also has client libraries, which are in charge of exposing the individual metrics of the applications to be monitored in the Prometheus format, so they can be collected by the Prometheus server. And finally you have the Alertmanager, which manages the alerts that Prometheus sends. You also have a connection to data visualization tools, from where you can get a better view of the Prometheus data using external tools like Grafana. In order to download Prometheus for your device, what you need to do is go to prometheus.io and download the version for your device; for example, since I'm using a Mac, I will download the Darwin archive from here, and it downloads almost instantly, and if you extract the file you will have this folder here. Once we have Prometheus downloaded, let's create the Prometheus namespace: let's do kubectl create namespace prometheus-system. Now the namespace is created, and you can see here that you have a new namespace called prometheus-system. Then let's navigate to our Prometheus installation, to the folder where the archive was extracted, and double-click the prometheus binary right here in order to enable it; it will take just a few seconds, that's it, and make sure that you copy and paste the prometheus file into your root directory right here. So let's copy this file and just paste it into the root directory; you can see that if I do ls here, I have the prometheus file that I copied and pasted into my root directory. Once the file is in the correct folder, hit the prometheus binary twice, and Prometheus should start on your device, as you can see here. If I go back to the dashboard, you'll see that the prometheus-system namespace is still active and up and running. Alternatively, you can simply run Prometheus from the installation folder by writing ./prometheus --config.file=prometheus.yml, which should have exactly the same result.
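If you prefer doing the whole download from the terminal instead of the browser, a hedged sketch looks like this; the version number is just an example, so grab the current one from the prometheus.io download page.

```bash
# Download, extract and start Prometheus from the terminal (version is an example).
curl -LO https://github.com/prometheus/prometheus/releases/download/v2.45.0/prometheus-2.45.0.darwin-amd64.tar.gz
tar xzf prometheus-2.45.0.darwin-amd64.tar.gz
cd prometheus-2.45.0.darwin-amd64
./prometheus --config.file=prometheus.yml    # serves the UI and metrics on localhost:9090
```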
So now, once this is done, if you write curl localhost:9090/metrics you'll be able to see all the metrics that Prometheus exposes. You can also access this from the browser: if I go to localhost:9090 I can reach Prometheus and see the dashboard quite easily, or if I add /metrics I'll see all the metrics and analyses that Prometheus is collecting for your Kubernetes installation. From here you can check the rules and alerts once you connect Prometheus to your application; right now Prometheus is running locally, so you can find information such as the runtime and build information, TSDB information, and the configuration, and you can define the different rules and then look at the targets, and so on. For example, in order to create a query, you can go to the main Prometheus page, and here in the search bar you can select this icon, which gives you the ability to enter queries against the collected metrics. For example, if I use the metric go_memstats_frees_total, you can see that there is one endpoint here, on localhost:9090, and this is the Prometheus endpoint itself. You can also explore graphs, as you can see, for particular endpoints, and you can watch how a particular query evolves and changes over time. So this is a very good way, guys, for you to explore and monitor your application by writing queries and connecting Kubernetes to Prometheus. That's it, thank you very much for watching this video, and I will see you in our next challenges. Now, in today's video we're going to talk about how you can collect and explore metrics using Grafana as a supplementary tool to Prometheus. Grafana is an open-source tool that displays graphs and data collected from Prometheus, InfluxDB, and many other sources; those metrics can really help you set reasonable performance targets, while log analysis can uncover issues affecting your workloads. There are two main types of metrics in the Grafana dashboard: the system metrics include CPU, memory, and disk for both the master and the Kubernetes workers, while the cluster metrics include data at the cluster level from the Kubernetes cAdvisor endpoints. The metrics can, for example, be explored in a dashboard, which helps us understand the performance and behavior of our infrastructure, and at a low level those metrics help us determine whether the system or its performance is degrading and could cause a failure; it's important to use low-level data to help prevent failures before they occur. Grafana also makes it easy to obtain data from different data sources, which can be mixed in the same dashboard, and you can define alert rules visually for the most important metrics; you can see things such as CPU usage per node, RAM usage per node, and file usage per node. So let's see what Grafana is and how you can actually use it. Let's go to grafana.com and select My Account; you can see that I already have an account here, so if you don't have one, create it and then access your dashboard. Then go down here and launch Grafana; you get this page where Grafana is launching and loading in your browser, and from there you can select Collect Data, and from here you can select hosted Prometheus metrics. From here you can set up your Prometheus: you can install it for your operating system and set up the configuration, and in that way Grafana will get all the information from Prometheus and use it to give you very good insights and some great graphs.
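If you end up running Grafana yourself instead of the hosted version shown here, the Prometheus data source can also be added through a small provisioning file; this is a hedged sketch, and the path assumes a default self-hosted Grafana package layout.

```bash
# Hedged sketch: provision Prometheus as a Grafana data source (self-hosted Grafana only).
sudo tee /etc/grafana/provisioning/datasources/prometheus.yml > /dev/null <<'EOF'
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090   # the local Prometheus endpoint we started earlier
    isDefault: true
EOF
# Restart Grafana afterwards so it picks up the new data source.
```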
If you go back to the Collect Data section, you can also select a data source here; just make sure that Prometheus is set as your data source. Click on that, and you can see that I already have this installed, but you can create a new Prometheus data source and define the port; here we have localhost:9090, which is exactly the one we used. Then you can select a timeout, a name for your application, the method that you want to use, and so on. So this is how, guys, you can use Grafana with Prometheus; it is really easy to set up and install, and with hosted Grafana you don't actually need to install anything on your device. Finally, I want to quickly mention some other tools that you can use instead of Grafana for analyzing your data. The first one is Datadog, on datadoghq.com, which is also a tool that can be used to obtain performance metrics from applications and do event monitoring. There is also New Relic, which likewise allows you to measure the performance of applications beyond the cloud and lets you analyze and visualize different metrics in the software development environment. Then you have InfluxDB, which is a database that stores time series; these databases allow you to store and evaluate data from sensors and protocols with timestamps over a certain period of time, and their main advantage is that they're much faster than standard databases for storing and processing timestamped data. And the final one is called Splunk, which is actually a really popular tool that I've been using myself; it is big data software that can capture, index, and correlate log data, and it is also capable of monitoring data in log files and generating charts, reports, alerts, and so on. So these were all the visualization tools that I wanted to discuss with you today. Thanks for watching this video, and this is actually the end of the course; I hope you gained some really valuable experience and that you now know how to add security capabilities to your DevOps applications. Thanks for watching, and I will see you in the rest of our courses.