Transcript for:
Comprehensive Overview of DevOps Concepts

Hey everyone, and welcome to our DevOps full course by Simplilearn. Ever wondered how top tech companies release apps so quickly and without bugs? That's the magic of DevOps, a powerful approach that brings developers and IT teams together to build, test, and deliver software faster and even smarter. By 2025, DevOps skills will be in huge demand as more companies shift to automation, cloud computing, and agile methods. And with salaries hitting $150,000 in the US and around 30 lakhs per annum in India, it's a career path worth exploring. In this course you'll get a solid grip on DevOps essentials like automation, CI/CD pipelines, and cloud tools, plus hands-on training with popular tools like Docker, Kubernetes, and even Jenkins. And by the end, you'll be ready to design smooth DevOps pipelines and take on real-world projects with confidence. So let's get started and build something amazing together. Also, if you're looking forward to making a career in DevOps, definitely check out Simplilearn's professional certificate program in cloud computing and DevOps. This comprehensive course offers in-depth learning with a thorough understanding of cloud computing principles and DevOps. So hurry up and enroll now; you'll find the course link in the description box below and in the pinned comments. You know, back in the day when Netflix was just starting to hit its stride, they faced serious challenges managing their growing infrastructure. Keeping millions of people happily streaming movies and shows without interruption was not an easy task. Initially Netflix struggled with scaling issues, monolithic architecture problems, and deployment bottlenecks. Their infrastructure couldn't keep up with the increasing user demand, leading to frequent downtimes. The monolithic architecture made it difficult to update or scale parts of the system without affecting the whole. Deploying new features was slow and risky, often causing service disruptions. And that's when they discovered microservices, which allowed them to break the application into smaller and more manageable pieces. This meant that they could tweak and tinker with different parts of the service independently, greatly improving flexibility and reliability. Complementing microservices with DevOps practices like continuous integration and deployment, Netflix transformed their operation, ensuring seamless streaming for users worldwide. So next time you binge watch, remember the epic journey they took to get there. Now, before you move on and learn more about DevOps, I request that you do not forget to hit the subscribe button and click the bell icon for further updates. So here's the agenda for today's session. We are going to start with an introduction to what DevOps is. Then we will learn about why DevOps. Moving ahead, we will discuss the principles of DevOps and the phases of DevOps. Then we will deep dive into DevOps tools. And finally, we are going to conclude our session with a hands-on. And today we're going to go through and introduce you to DevOps. We're going to go through a number of key elements today. The first two will be reviewing models that you're probably already using for delivering solutions in your company, and the most popular one is waterfall, followed by agile. Then we'll look at DevOps, how DevOps differs from those two models, and how it also borrows and leverages the best of those models. We'll go through each of the phases that are used in a typical DevOps delivery, and then the tools used within those phases to really improve the efficiencies within DevOps. Finally, we'll
summarize the advantages that DevOps brings to you and your teams. So let's go through waterfall. Waterfall is a traditional delivery model that's been used for many decades for delivering solutions, not just IT and digital solutions but even way before that; its history goes back to World War II. Waterfall is a model that is used to capture requirements and then cascade each key deliverable through a series of different stage gates that are used for building out the solution. So let's take you through each of those stage gates. The first that you may have done is requirements analysis, and this is where you sit down with the actual client and you understand specifically what they actually do and what they're looking for in the software that you're going to build. Then from that requirements analysis you'll build out a project plan, so you have an understanding of the level of work needed to be successful in delivering the solution. After you've got your plan, you start doing the development, and that means the programmers start coding out their solution. They build out their applications and websites, and this can take weeks or even months to actually do all the work. When you've done your coding and development, you send it to another group that does testing, and they'll do full regression testing of your application against the systems and databases that integrate with your application. You'll test it against the actual code; you do manual testing, you do UI testing. And then, after you've delivered the solution, you go into maintenance mode, which is just kind of making sure that the application keeps working, and if there are any security risks, that you address those security risks. The problem, though, is that there are some challenges with the waterfall model. The cascading deliveries and those complete and separated stage gates mean that it's very difficult for any new requirements from the client to be integrated into the project. So if the project has been running for six months and a client comes back and goes, "Hey, we need to change something," that means we have to almost restart the whole project. It's very expensive and it's very time-consuming. Also, if you spend weeks and months away from your client and you deliver a solution that they are only just going to see after you spend a lot of time working on it, they could be pointing out things in the actual final application that they don't want, or that are not implemented correctly, or that lead to just general unhappiness. The challenge you then have, if you want to add back in the client's feedback, is to restart the whole waterfall cycle again. The client will come back to you with a list of changes, and then you go back and start your programming, and you have to start your testing process again; you're really adding lots of additional time into the project. So using the waterfall model, companies soon came to realize that clients just aren't able to get their feedback in quickly and effectively, that it's very expensive to make changes once the teams have started working, and that the requirement in today's digital world is that solutions simply must be delivered faster. This led to a specific change: we start implementing the agile model. The agile model allows programmers to create prototypes and get those prototypes to the client with the requirements faster, and the client is able to then send
the requirements back to the programmer with feedback. This allows us to create what we call a feedback loop, where we're able to get information to the client and the client can get back to the development team much faster. Typically, when we're going through this process, we're looking at the engagement cycle being about two weeks, so it's much faster than the traditional waterfall approach. We can look at each feedback loop as comprising four key elements. We have the planning, where we actually sit down with the client and understand what they're looking for. We then have coding and testing, which is building out the code and the solution that is needed for the client. And then we review with the client the changes that have happened. But we do all this in a much tighter cycle that we call a sprint, and typically a sprint will last for about two weeks. Some companies run sprints every week, some run every four weeks; it's up to you as a team to decide how long you want to run a sprint, but typically it's two weeks. And so every two weeks the client is able to provide feedback into that loop, and you are able to move quickly through iterations. If we get to the end of sprint two and the client says, "Hey, you know what, we need to make a change," you can make those changes quickly and effectively for sprint three. What we have here is a breakdown of the ceremonies and the approach that you bring to agile. Typically what will happen is that a product leader will build out a backlog of features, what we call a product backlog, and this will be a whole bunch of different items; they may be small features or bug fixes, all the way up to large features that may span multiple sprints. When you go through sprint planning, you want to break out the work so the team has a mixture of small, medium, and large solutions that they can implement successfully into their sprint plan. Then, once you actually start running your sprint, and again it's a two-week activity, you meet every single day with the actual sprint team to ensure that everybody is staying on track, and if there are any blockers, that those blockers are being addressed effectively and immediately. The goal at the end of the two weeks is to have a deliverable product that you can put in front of the customer, and the customer can then do a review. The key advantages of running a sprint with agile are that the client requirements are better understood, because the client is really integrated into the scrum team, they're there all the time, and that the product is delivered much faster than with a traditional waterfall model. You're delivering features at the end of each sprint versus waiting weeks, months, or in some cases years for a waterfall project to be completed. However, there are also some distinct disadvantages. The product itself really doesn't get tested in a production environment; it's only being tested on the developer computers, and it's really hard, when you're running agile, for the sprint team to build out a solution on their computers that easily and effectively mimics the production environment. And the developers and the operations team are running in separate silos. So you have your development team running their sprint and working to build out the features, but when they're done at the end of their sprint and they want to do a release, they kind of fling it over the wall at the operations team, and then it's the operations team's
job to actually install the software and make sure that the environment is running in a stable fashion. That is really difficult to do when you have the two teams not really working together. So here we have a breakdown of that process, with the developers submitting their work to the operations team for deployment, and then the operations team submitting their work to the production servers. But what if there is an error? What if there was a setup configuration error, and the developers' test environment doesn't match the production environment? There may be a dependency that isn't there. There may be a link to an API that doesn't exist in production. And so you have these challenges that the operations team are constantly faced with, and their challenge is that they don't know how the code works. So this is where DevOps really comes in, and let's dig into how DevOps, which is developers and operators working together, is the key to successful continuous delivery. DevOps is an evolution of the agile model. The agile model really is great for gathering requirements and for developing and testing out your solutions, and what we want to be able to do is address that challenge and that gap between the ops team and the dev team. So with DevOps, what we're doing is bringing together the operations team and the development team into a single team, and they are able to then work more seamlessly together, because they are integrated to build out solutions that are being tested in a production-like environment, so that when we actually deploy, we know that the code itself will work. The operations team is then able to focus on what they're really good at, which is analyzing the production environment and providing feedback to the developers on what is being successful. So we're able to make adjustments in our code that are based on data. Let's step through the different phases of a DevOps team. Typically you'll see that the DevOps team will have eight phases. Now this is somewhat similar to agile, and what I'd like to point out at this time is that agile and DevOps are closely related delivery models; with DevOps, you're really just extending that model with the key phases that we have here. So let's step through each of these key phases. The first phase is planning, and this is where we actually sit down with the business team and we go through and understand what their goals are. The second stage, as you can imagine, and this is where it's all very similar to agile, is that the coders actually start coding, but typically they'll start using tools such as Git, which is distributed version control software. It makes it easier for developers to all be working on the same code base, rather than them only working on the bits of the code that they are responsible for. The goal with using tools such as Git is that each developer always has the current and latest version of the code. You then use tools such as Maven and Gradle as a way to consistently build out your environment. And then we also use tools to actually automate our testing. Now what's interesting when we use tools like Selenium and JUnit is that we're moving into a world where our testing is scripted, the same as our build environment and the same as our Git environment. We can start scripting out these environments, and so we actually have scripted production environments that we're moving towards.
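To make that idea concrete, here is a minimal sketch of what a scripted build-and-test step might look like for a hypothetical Java project; it assumes Maven is on the PATH, that the JUnit suite runs in Maven's standard test phase, and that a hypothetical ui-tests profile wires in the Selenium checks:

  # scripted build: the same command produces the same build every time
  mvn -B clean package
  # scripted unit tests: JUnit runs inside Maven's test phase
  mvn -B test
  # hypothetical Maven profile that runs the Selenium UI suite
  mvn -B verify -Pui-tests

Because every step is a plain command, the same script can run on a developer's machine or inside a CI server.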
Jenkins is the integration phase that we use for our tools. And another point here: the tools that we're listing are all open-source tools; these are tools that any team can start using. We want to have tools that control and manage the deployment of code into the production environments. And then finally, tools such as Ansible and Chef will actually operate and manage those production environments, so that when code comes to them, that code is compliant with the production environment, and so that when the code is then deployed to the many different production servers, the expected result, which is that you want those servers to continue running, is received. And then finally, you monitor the entire environment, so you can zero in on spikes and issues that are relevant to either the code or changing consumer habits on the site. So let's step through some of those tools that we have in the DevOps environment. Here we have a breakdown of the DevOps tools, and again, one of the things I want to point out is that these tools are open-source tools. There are also many other tools; this is just really a selection of some of the more popular ones being used, but it's quite likely that you're already using some of these tools today. You may already be using Jenkins, you may already be using Git. But some of the other tools really help you create a fully scriptable environment, so that you can start scripting out your entire DevOps tool set. This really helps when it comes to speeding up your delivery, because the more you can script out the work that you're doing, the more effective you can be at running automation against those scripts, and the more effective you can be at having a consistent experience. So let's step through this DevOps process. We go through and we have our continuous delivery, which is our plan, code, build, and test environment. So what happens if you want to make a release? Well, the first thing you want to do is send out your files to the build environment, and you want to be able to test the code that you've created, because we're scripting everything, from the actual unit testing all the way through to the production environment. Because we're testing all of that, we can very quickly identify whether or not there are any defects within the code. If there are defects, we can send that code right back to the developer with a message saying what the defect is, and the developer can then fix it with information that is real, on either the code or the production environment. If, however, your code passes the scripted tests, it can then be deployed, and once it's out to deployment, you can start monitoring that environment. What this provides you is the opportunity to speed up your delivery. So you go from the waterfall model, which is weeks, months, or even years between releases, to agile, which is two weeks or four weeks depending on your sprint cadence, to where you are today with DevOps, where you can actually be doing multiple releases every single day.
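As a rough, hedged illustration of that release flow in shell, assuming Maven for the build and tests and a hypothetical deploy.sh script for the deployment step:

  #!/bin/bash
  # sketch of the flow above: build and test, deploy on success,
  # otherwise hand the defect information back to the developer
  git pull origin master            # always start from the latest code
  if mvn -B clean verify; then
    ./deploy.sh production          # hypothetical deployment script
    echo "Release deployed; monitoring picks up from here"
  else
    echo "Tests failed; defect report goes back to the developer"
  fi

A real pipeline would run this inside a tool like Jenkins rather than by hand, but the logic is the same.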
So there are some significant advantages, and there are companies out there that are really zeroing in on those advantages. If we take any one of these companies, such as Google: on any given day, Google will actually process 50 to 100 new releases on their website through their DevOps teams. In fact, they have some great videos on YouTube that you can find on how their DevOps teams work. Netflix is a similar environment. Now what's interesting with Netflix is that they have really fully embraced DevOps within their development team. So they have a DevOps team, and Netflix is a completely digital company: they have software on phones, on smart TVs, on computers, and on websites. Interestingly, though, the DevOps team for Netflix is only 70 people. And when you consider that a third of all internet traffic on any given day is from Netflix, it's really a reflection of how effective DevOps can be, when you can manage that entire business with just 70 people. So there are some key advantages that DevOps has. The actual time to create and deliver software is dramatically reduced, particularly compared to waterfall. Complexity of maintenance is also reduced, because you're automating and scripting out your entire environment. You're improving the communication between all your teams, so teams don't feel like they're in separate silos but are actually working cohesively together. And there is continuous integration and continuous delivery, so that your consumer, your customer, is constantly being delighted. Welcome to the ultimate guide to the future of tech. In the fast-paced world of DevOps, staying ahead is the game changer. Join us as we unlock the top DevOps skills needed in 2024. From mastering cloud architectures to building security fortresses, we are delving into the vital skills shaping the tech landscape. Get ready to unravel the road map to DevOps success and set your sights on the tech horizon. Let's get started. Number one: continuous integration and continuous deployment, CI/CD. CI/CD, the backbone of modern software delivery, makes integrating code changes and deploying them smooth and fast. Tools like Jenkins and GitLab take care of testing, version control, and deployment, cutting down manual work. Learning these tools might take a bit of time, focusing on version control, scripting, and how systems run. To get better at CI/CD, trying hands-on projects like setting up pipelines for web apps or automating testing can be a game-changer. Number two: cloud architecture and Kubernetes. Knowing about cloud architecture and mastering Kubernetes is a big deal today. Companies are all about cloud services and using Kubernetes to manage apps stored in containers. Learning these involves understanding various cloud services and how to use them to build strong and flexible applications. It also means knowing how to set up and manage containers in the cloud environment. Getting good at this might take some effort, especially learning about networks, containers, and cloud computing. Hands-on practice, like deploying small apps with Kubernetes or automating deployments, can be a solid way to level up. Number three: infrastructure as code (IaC) with Terraform. Terraform is a star in managing infrastructure by writing scripts. It helps set up and manage things like servers or databases without manual configurations. Mastering it means understanding Terraform's language and managing resources across different cloud providers. Getting good at Terraform might not be too hard if you get the basics of cloud architecture. Doing projects like automating cloud setups or managing resources across different cloud platforms can boost your skills in this area.
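For a feel of the day-to-day Terraform loop, here is a minimal command sketch; it assumes Terraform is installed and that you are in a directory containing a .tf configuration:

  terraform init                 # download the providers the configuration needs
  terraform validate             # catch configuration errors early
  terraform plan -out=tfplan     # preview exactly what would change
  terraform apply tfplan         # create or update the resources as planned

The plan-then-apply split is the habit worth building: you always review what will change before it changes.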
Number four: security automation and DevSecOps. Keeping systems secure is a top priority, and that's where DevSecOps shines. It's about integrating security into every step of the development process. This needs understanding security principles, spotting threats, and using tools within the development cycle to stay secure. Getting skilled at this might take some time, focusing on security practices and how they fit into development. Trying out projects like setting up security checks in your development process or making sure apps are encrypted can sharpen these skills. Number five: DataOps and AI/ML integration. DataOps mixed with AI and ML is the new thing for smarter decision making. It's about making data-related work smooth and automated, and then mixing that data with AI and ML to make awesome decisions. Learning this might need digging into data processing, machine learning, and programming languages like Python, R, or Scala. Projects like building models or setting up data pipelines can give hands-on experience in this fusion of data and smart tech. Number six: monitoring and observability tools. Monitoring tools keep systems healthy by finding problems before they cause trouble. Tools like Prometheus or Grafana help keep an eye on system performance and solve issues quickly. Learning these tools might need some time, especially getting used to metrics and logs. Projects like setting up performance dashboards or digging into system logs can really polish these skills. Number seven: microservices architecture. Breaking down big applications into smaller parts is what microservices are about. It helps in better scalability and flexibility. Getting good at this might take a bit of understanding how these smaller parts talk to each other, and using languages like Java or Python. Trying projects like breaking down big apps or putting these small services into containers can make you a microservices pro. Number eight: containerization beyond Kubernetes. Beyond Kubernetes, there are other cool tools like Docker or Podman that help manage containers, making life easier. Learning these tools might need a basic understanding of system administration and containers. Working on projects like creating custom container images or managing multi-container apps can really amp up your container game. Number nine: serverless computing and FaaS. Serverless platforms like AWS Lambda or Azure Functions let developers focus on writing code without handling the back-end stuff. Mastering this might need getting familiar with serverless architecture and programming in languages like Node.js, Python, or Java. Doing projects like building serverless apps or automating tasks with serverless functions can level up your serverless skills. Number ten: collaboration and soft skills. Apart from the tech stuff, being a team player and communicating well is super important. Working on open-source projects or joining diverse teams can really boost these skills. Projects like leading teams through DevOps changes or driving cultural shifts in an organization can improve these skills in a big way. Before we conclude this exhilarating expedition into the top 10 DevOps skills for 2024, envision this: the future is a canvas waiting for your innovation and expertise to paint upon. These skills aren't just a checklist; they are your toolkit for crafting the technological future. Embrace them, immerse yourself in their practice, and let them be the fuel propelling your journey toward mastery. In this rapidly evolving tech realm, remember: it's not just about knowing, it's about doing. Dive into projects, experiment fearlessly, and let these skills be the guiding stars illuminating your path to success. Thank you for joining us on this adventure. Make sure to like this video and share it with your friends. Do check out the link in the description and pinned comment if you are interested in making a career in DevOps.
Welcome to Simplilearn. Starting on the AWS DevOps journey is like setting sail on a high-tech adventure. In this tutorial we'll be your navigators through the vast seas of Amazon Web Services, helping you to harness the power of DevOps to streamline your software delivery and infrastructure management. From understanding DevOps principles to mastering AWS services, we will guide you through the transformative voyage. Whether you're a seasoned sailor or a novice explorer, our road map will unveil the treasures of continuous integration, containerization, automation, and beyond. So hoist the DevOps flag and get ready to chart a course towards efficiency, collaboration, and innovation in the AWS ecosystem. That said, if these are the type of videos you'd like to watch, then hit that subscribe button and the bell icon to get notified. As we speak, you might be wondering how to become a certified professional and bag your dream job in this domain. If you are a professional with a minimum of one year of experience and an aspiring DevOps engineer looking for online training and certification from prestigious universities, in collaboration with leading experts, then search no more: Simplilearn's Postgraduate Program in DevOps from Caltech, in collaboration with IBM, should be your right choice. For more details, head straight to our homepage and search for the Postgraduate Program in DevOps from Caltech, or simply click on the link in the description box below. Now, without further delay, over to our training. So without further delay, let's get started with the agenda for today's session. First, we will understand who exactly an AWS DevOps engineer is. Then, the skills required to become an AWS DevOps engineer. Followed by that, the important roles and responsibilities. And then the most important point of today's discussion, that is, the road map, or how to become an AWS DevOps engineer. Followed by that, we will also discuss the salary compensation being offered to a professional AWS DevOps engineer. And lastly, we will discuss the important companies hiring AWS DevOps engineers. So I hope I made myself clear with the agenda. Now let's get started with the first subheading: who exactly is an AWS DevOps engineer? The answer to this question: an AWS DevOps engineer is a professional who combines expertise in AWS, that is, Amazon Web Services, with DevOps principles to streamline software development and infrastructure management. They design, implement, and maintain cloud-based solutions leveraging AWS services like EC2, S3, and RDS. DevOps engineers automate processes using tools such as AWS CloudFormation and facilitate continuous integration and deployment pipelines. Their role focuses on improving collaboration between development and operations teams, ensuring efficient, reliable, and secure software delivery. With skills in infrastructure as code (IaC), containerization, scripting, and continuous integration, AWS DevOps engineers play a critical role in optimizing cloud-based applications and services. And that's exactly what an AWS DevOps engineer is. Now, moving ahead, we will discuss the important skills required to become an AWS DevOps engineer. The role of an AWS DevOps engineer requires a combination of technical and nontechnical skills. Here are the top five skills that are crucial for an AWS DevOps engineer. Starting with the first one, AWS expertise: proficiency in AWS is fundamental. DevOps engineers should have a deep understanding of AWS services, including EC2, S3, RDS, VPC, and much more. They should be able to design, implement, and
manage cloud infrastructure efficiently. The next one is IaC, or infrastructure as code. IaC tools like AWS CloudFormation or Terraform are essential for automating the provisioning and management of infrastructure. DevOps engineers should be skilled in writing infrastructure code and templates to maintain consistency and reliability. The third one is scripting and programming. Knowledge of scripting languages (for example, Python and Bash) and programming languages is important for automation and custom scripting. Python in particular is widely used for tasks like creating deployment scripts, automating AWS tasks, and developing custom solutions. The next one is containerization and orchestration. Skills in containerization technologies such as Docker, and container orchestration platforms like Amazon ECS or Amazon EKS, are vital. DevOps engineers should be able to build, deploy, and manage containerized applications. Now the fifth one is CI/CD pipelines, or continuous integration and continuous deployment. Proficiency in setting up and maintaining CI/CD pipelines using tools like AWS CodePipeline, Jenkins, or GitLab CI/CD is crucial. DevOps engineers should understand the principles of automated testing, integration, and continuous deployment to streamline software delivery. Effective communication and collaboration skills are essential, as DevOps engineers work closely with development and operations teams to bridge the gap between them and ensure smooth software delivery and infrastructure management. Problem-solving skills, the ability to troubleshoot issues, and a strong understanding of security best practices are also important for this role. DevOps engineers need to be adaptable and keep up with the evolving AWS ecosystem and DevOps practices to remain effective in their role. Moving ahead, we will discuss the roles and responsibilities of an AWS DevOps engineer. The roles and responsibilities of an AWS DevOps engineer typically revolve around managing and optimizing the infrastructure and development pipelines to ensure efficient, reliable, and scalable operations. Here are the top five roles and responsibilities of an AWS DevOps engineer. Starting with the first one, IaC management: DevOps engineers are responsible for defining and managing infrastructure using IaC tools like AWS CloudFormation or Terraform. They create and maintain templates to provision and configure AWS resources, ensuring consistency and repeatability. The next one is continuous integration and deployment. Continuous integration and continuous deployment, also known as CI/CD, is very critical. DevOps engineers establish and maintain CI/CD pipelines, automating the build, test, and deployment processes. They use AWS CodePipeline, Jenkins, or similar tools to streamline the delivery of software and updates to the production environment. Next is server and containerization management. DevOps engineers work with AWS EC2 instances, ECS, EKS, and other services to manage servers and containers. They monitor resource utilization, configure autoscaling, and ensure high availability and fault tolerance. Monitoring and logging is the fourth one. Monitoring is a critical responsibility: DevOps engineers set up monitoring and alerting systems using Amazon CloudWatch, analyze logs, and respond to incidents promptly. They aim to maintain high system availability. Security and compliance is the fifth one. Security is a priority: DevOps engineers implement and maintain security best practices, manage AWS Identity and Access Management (IAM) policies, and ensure compliance with regulatory requirements. They often work with AWS services like AWS Security Hub and AWS Config to assess and improve security.
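As one small, hedged example of that IAM work with the AWS CLI, here is how attaching a managed policy to a role looks; the role name app-server-role is hypothetical:

  # attach AWS's managed read-only policy to a hypothetical role
  aws iam attach-role-policy --role-name app-server-role --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
  # verify what the role now has attached
  aws iam list-attached-role-policies --role-name app-server-role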
Beyond that, AWS DevOps engineers are involved in optimizing costs, ensuring disaster recovery and backup strategies, and collaborating with development and operations teams to enhance communication and collaboration. They may also assist in automating routine tasks and promoting a culture of continuous improvement and innovation within the organization. Now, the most important aspect of today's session: how to become, or the road map to become, an AWS DevOps engineer. The AWS DevOps road map provides a high-level guide for individuals or teams looking to adopt DevOps practices in the context of Amazon Web Services. DevOps is a set of practices that combine software development (dev) and IT operations (ops) to enhance collaboration and automate the process of software delivery and infrastructure management. AWS offers a range of services and tools to support DevOps practices. Here is a road map to help you get started with AWS and DevOps; laying it out in ten steps can help guide your journey towards implementing DevOps practices on the Amazon Web Services platform. The first step is to understand DevOps principles. Start by gaining a solid understanding of DevOps principles and practices. DevOps is about collaboration between development and operations teams to automate and streamline the software delivery process. The second step is to learn AWS fundamentals. Get acquainted with AWS services and understand the basics of cloud computing, including compute, storage, and networking services. AWS offers a wide range of services that can be leveraged in your DevOps processes. The third step is to set up your AWS account. Sign up for an AWS account and configure billing and security settings. You may also want to consider using AWS Organizations for managing multiple accounts, and AWS Identity and Access Management for user access control. The fourth step is source code management. Implement source code management using a tool like Git, and host your code repositories on a platform like AWS CodeCommit or GitHub. Learn about version control best practices. The fifth step is continuous integration. Set up a CI/CD pipeline using services like AWS CodePipeline, AWS CodeBuild, or Jenkins. Automate the building, testing, and deployment of your code. The sixth step is infrastructure as code, or IaC. Embrace IaC principles to manage your AWS resources. Use tools like AWS CloudFormation, Terraform, or the AWS CDK to define and provision infrastructure as code (there's a short command sketch after these steps). The seventh step is deployment and orchestration. Use AWS services like AWS Elastic Beanstalk, AWS Elastic Container Service (ECS), or Kubernetes on AWS (also known as EKS) for deploying and managing your applications. Orchestrate these deployments using AWS Step Functions or other automation tools. Now, the eighth step is monitoring and logging. Implement robust monitoring and logging using services like Amazon CloudWatch and AWS CloudTrail. Create dashboards, set up alarms, and analyze logs to gain insights into your application's performance and security. The ninth step: security and compliance. Focus on security by following AWS best practices, using AWS Identity and Access Management (IAM) effectively, and automating security checks with AWS Config and AWS Security Hub. Ensure your infrastructure and applications are compliant with industry standards. And now the last step: continuous learning and improvement. DevOps is an ongoing journey of improvement. Continuously monitor and optimize your DevOps pipeline. Incorporate feedback and stay updated on new AWS services and best practices. Ensure a culture of learning and innovation within your team.
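And here is the short command sketch promised for the IaC step, assuming the AWS CLI is configured; the template file infra.yml and stack name demo-stack are hypothetical:

  # create or update a stack from a local CloudFormation template
  aws cloudformation deploy --template-file infra.yml --stack-name demo-stack --capabilities CAPABILITY_NAMED_IAM
  # follow the stack's events to see what changed
  aws cloudformation describe-stack-events --stack-name demo-stack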
Remember that this road map is a high-level guide, and the specific tools and services you choose may vary based on your project's requirements. DevOps is a culture of collaboration and automation, so adapt your DevOps practice to best suit your team's needs and the AWS services that you use. Now, moving ahead, we will discuss the salary compensation being offered to an AWS DevOps engineer. If you are in India and a beginner in the AWS DevOps domain, you can expect salaries ranging from 3 lakhs to 6 lakhs per annum. If you're an intermediate candidate with a minimum of two years of experience, then you can expect salaries ranging from 6 lakhs to 12 lakhs per annum. If you are an experienced candidate with more than four years of experience, the minimum salary you can expect is 12 lakhs, and it can go all the way up to 20 or more based on the project you're working on, the company you're working with, and the location. Now, if you are in America and a beginner in the AWS DevOps domain, you can expect an average salary of $80,000 to $120,000 per annum, and if you are an intermediate candidate with a minimum of two years of experience, you can expect salaries ranging from $120,000 to $150,000 per annum. If you are a highly experienced candidate, maybe with four years or more, you can expect salaries ranging from $150,000 to $200,000 per annum. And again, it might also go up based on the project you're working on, the company you're working with, and the location. Now, moving ahead, we will discuss the next important and also the last topic of today's discussion: the companies hiring AWS DevOps engineers. There are a lot of companies hiring AWS DevOps engineers. The prominent players in this particular field are Amazon Web Services, Google, Microsoft, IBM, Oracle, Netflix, Adobe, Cisco, Slack, Salesforce, Deloitte, and many more. Talking about the salary figures of a senior DevOps engineer: according to Glassdoor, a senior DevOps engineer working in the United States earns a whopping salary of $178,362. The same senior DevOps engineer in India earns 18 lakh rupees annually. To sum it up, as you progress from entry level to mid-level and eventually to experienced DevOps engineer, your roles and responsibilities evolve significantly. Each level presents unique challenges and opportunities for growth, all contributing to your journey as a successful DevOps professional. So, excited about the opportunities DevOps offers? Great. Now let's talk about the skills you will need to become a successful DevOps engineer. Coding and scripting: strong knowledge of programming languages like Python, Ruby, or JavaScript, and scripting skills, are essential for automation and tool development. System administration: familiarity with Linux, Unix, and Windows systems, including configuration and troubleshooting. Cloud computing: proficiency in cloud platforms like AWS, Azure, or Google Cloud to deploy and manage applications in the cloud. Containerization and orchestration: understanding container technologies like Docker and container orchestration tools like Kubernetes is a must. Continuous integration and deployment: experience with CI/CD tools such as Jenkins, GitLab CI, or CircleCI to automate the development workflow. Infrastructure as code: knowledge of IaC tools like Terraform or Ansible to manage infrastructure programmatically. Monitoring and logging: familiarity with monitoring tools like Prometheus and Grafana, and logging solutions like the ELK stack.
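To give one concrete taste of the monitoring side: Prometheus exposes an HTTP query API, so assuming a Prometheus server on its default local port, you could check which scrape targets are up like this (the metric in the second query is a conventional example and depends on what your applications actually export):

  # instant query against a local Prometheus server (default port 9090)
  curl -s 'http://localhost:9090/api/v1/query?query=up'
  # per-second request rate over the last five minutes, for a hypothetical metric
  curl -s -G 'http://localhost:9090/api/v1/query' --data-urlencode 'query=rate(http_requests_total[5m])'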
Acquiring these skills will not only make you a valuable DevOps engineer but will also open doors to exciting job opportunities. So to enroll in the Postgraduate Program in DevOps today, click the link mentioned in the description box below. Don't miss this fantastic opportunity to invest in your future. So let's take a minute to hear from our learners, who have experienced massive success in their careers through the Postgraduate Program in DevOps. So what are we going to cover today? We're going to introduce you to the concept of version control that you will use within your DevOps environment. Then we'll talk about the different tools that are available in a distributed version control system. We'll highlight a product called Git, which is typically used for version control today, and you'll also go through the differences between Git and GitHub. You may have used GitHub in the past, or other products like GitLab, and we'll explain the differences between Git and services such as GitHub and GitLab. We'll break out the architecture of what a Git process looks like: how do you go through and create forks and clones, how do you have collaborators added into your projects, how do you go through the process of branching, merging, and rebasing your project, and what is the list of commands available to you in Git? Finally, I'll take you through a demo of how you can run Git yourself, and in this instance use the Git software against a public service such as GitHub. All right, let's talk a little bit about version control systems. You may already be using a version control system within your environment today; you may have used tools such as Microsoft's Team Foundation Services. Essentially, the use of a version control system allows people to have files that are all stored in a single repository. So if you're working on developing a new program, such as a website or an application, you would store all of your version-controlled software in a single repository. Now what happens is that if somebody wants to make changes to the code, they would check out all of the code in the repository to make the changes, and then there would be an addendum added to that. So there would be the version one changes that you had; then the person would later on check out that code, and there would be a version two added to that code, and so you would keep adding on versions of that code. The bottom line is that eventually you'll have people being able to use your code, and your code will be stored in a centralized location. However, the challenge you run into is that it's very difficult for large groups to work simultaneously within a project. The benefit of a VCS, a version control system, is that you're able to store multiple versions of a solution in a single repository. Now let's take a look at some of the challenges that you have with traditional version control systems, and see how they can be addressed with distributed version control. In a distributed version control environment, what we're looking at is being able to have the code shared across a team of developers. If there are two or more people working on a software package, they need to be able to effectively share that code amongst themselves so that they are constantly working on the latest piece of code. So a key part of a distributed version control system that's different from a traditional version control system is that all developers have the entire code on
their local systems, and they try to keep it updated all the time. It is the role of the distributed VCS server to ensure that each client (we have a developer here, a developer here, and a developer here, and each of those is a client) has the latest version of the software, and that each person can then share the software in a peer-to-peer-like approach, so that as changes are being made on the server, those changes are then redistributed to all of the development team. The tool for an effective distributed VCS environment is Git. Now, you may remember that we actually covered Git in a previous video, and we'll reference that video for you. We start off with our remote Git repository, and people are making updates to the copy of their code in a local environment. That local environment can be updated manually and then periodically pushed out to the Git repository. So you're always pushing out the latest code changes you've made to the repository, and then from the repository you're able to pull back the latest updates. Your Git repository becomes kind of the center of the universe for you, and updates are able to be pushed up and pulled back from there. What this allows you to accomplish is that each person will always have the latest version of the code. So what is Git? Git is a distributed version control tool used for source code management. GitHub is the remote server for that source code management, and your development team can connect their Git client to that remote hub server. Now, Git is used to track the changes to the source code and allows large teams to work simultaneously with each other. It supports nonlinear development through thousands of parallel branches, and it has the ability to handle large projects efficiently. So let's talk a little bit about Git versus GitHub. Git is a software tool, whereas GitHub is a service, and I'll show you how those two look in a moment. You install the software tool for Git locally on your system, whereas GitHub, because it is a service, is actually hosted on a website. Git is the software used to manage different versions of source code, whereas GitHub is used to have a copy of the local repository stored on the service, on the website itself. Git provides command-line tools that allow you to interact with your files, whereas GitHub has a graphical interface that allows you to check in and check out files. So let me just show you the two tools here. Here I am at the Git website, and this is the website you would go to to download the latest version of Git. Again, Git is a software package that you install on your computer that allows you to do version control in a peer-to-peer environment. For that peer-to-peer environment to be successful, however, you need to be able to store your files on a server somewhere, and typically a lot of companies will use a service such as GitHub as a way to store those files. Git can communicate effectively with GitHub. There are actually many different companies that provide a similar service to GitHub; GitLab is another popular one. But you also find that development tools such as Microsoft Visual Studio are incorporating Git commands into their tools; the latest version of Visual Studio Team Services also provides this same ability. But GitHub, it has to be remembered, is a place where we store our files and can very easily create public, sharable
files and create public sharable projects You can come to GitHub and you can do a search on projects You can see at the moment I'm doing a lot of work on blockchain but you can actually search on the many hundreds of projects here In fact I think there's something like over a 100,000 projects being managed on GitHub at the moment That number is probably actually much larger than that And so if you are working on a project I would certainly encourage you to start at GitHub to see if somebody's already maybe done a prototype that they're sharing or they have an open-source project that they want to share that's already available um in GitHub Certainly if you're doing anything with um Azure you'll find that there are thousands 45,000 Azure projects currently being worked on Interestingly enough GitHub was recently acquired by Microsoft and Microsoft is fully embracing open-source technologies So that's essentially the difference between Git and GitHub One is a piece of software and that's Git and one is a service that supports the ability of using the software and that's GitHub So let's dig deeper into the actual git architecture itself So the working directory is the folder where you are currently working on your git project and we'll do a demo later on where you can actually see how we can actually simulate each of these steps So you start off with your working directory where you store your files and then you add your files to a staging area where you are getting ready to commit your files back to the main branch on your git project You will want to push out all of your changes to a local repository after you've made your changes And these will commit those files and get them ready for synchronization with the service and we'll then push your services out to the remote repository An example of a remote repository would be GitHub Later when you want to update your code before you write any more code you would pull the latest changes from the remote repository so that your copy of your local software is always the latest version of the software that the rest of the team is working on One of the things that you can do is as you're working on new features within your project you can create branches You can merge your branches with the mainline code You can do lots of really creative things that ensure the that a the code remains at very high quality and b that you're able to seamlessly add in new features without breaking the core code So let's step through some of the concepts that we have available in git So let's talk about forking and cloning in git So both of these terms are quite old terms when it comes to development But forking is certainly a term that goes way way way back um long before uh we had distributed CVS systems such as the ones that we're using with Git To fork a piece of software is a particular open- source project You would take the project and create a copy of that project and but then you would then associate a new team and new people around that project So it becomes a separate project in entirety A clone and this is important when it comes to working with git A clone is identical with the same teams and same structuring as the main project itself So when you download the code you're downloading exact copy of that code with all the same security and access rights as the main code And then you can then check that code back in and potentially your code because it is identical could potentially become the mainline code uh in the future Now that typically doesn't happen 
So let's step through some of the concepts that we have available in Git, starting with forking and cloning. Both of these are quite old terms when it comes to development. Forking is certainly a term that goes way back, long before we had distributed VCS systems such as the ones we're using with Git. To fork a piece of software, say a particular open-source project, you would take the project and create a copy of it, but you would then associate a new team and new people with that copy, so it becomes a separate project in its entirety. A clone, and this is important when it comes to working with Git, is identical, with the same teams and same structure as the main project itself. So when you download the code, you're downloading an exact copy of that code, with all the same security and access rights as the main code. You can then check that code back in, and potentially your code, because it is identical, could become the mainline code in the future. Now, that typically doesn't happen; your changes are the ones that merge into the main branch. But you do have that potential, where your code could become the main code. With Git you can also add collaborators that can work on the project, which is essential particularly for projects where you have large teams. This works really well when you have product teams where the teams themselves are self-empowered. You can use a concept called branching in Git: say, for instance, you are working on a new feature, and that new feature and the main version of the project still have to work simultaneously. What you can do is create a branch of your code, so you can work on the new feature while the rest of the team continues to work on the main branch of the project itself, and then later you can merge the two together. Pull from remote is the concept of being able to pull in the software the team is working on from a remote server. And git rebase is the concept of being able to take a project and establish a new start for it. You may be working on a project where there have been many branches, and the team has been working for quite some time on different areas, and maybe you're kind of losing control of what the true main branch is. You may choose to rebase your project. What that means, though, is that anybody who's working on a separate branch will not be able to merge their code back into the mainline branch. So going through the process of a git rebase essentially allows you to create a new start for where you're working on your project. Let's go through forks and clones. Say you want to go ahead and fork the code that you're working on; let's use the scenario that one of your team wants to add a new change to the project. The team member may say, yeah, go ahead and create a separate fork of the actual project. So what does that look like? When you create a fork of the repository, you take the version of the mainline branch, but then you take it completely offline into a local repository for you to work from, and you can then work on a local version of the code separate from the mainline branch. It's now a separate fork. Collaborators is the ability to have team members working on a project together. If someone is working on a piece of code and they see some errors in the code that you've created (none of us are perfect at writing code; I know I've certainly made errors in mine), it's great to have other team members that have your back and can come in and see what they can do to improve the code. To do that, you have to add them as a collaborator. You would do that in GitHub; you can give them permission within GitHub itself. It's really easy to do, with a super visual interface that allows you to do the work quickly and easily. And it depends on the type of permissions you want to give them: sometimes it could be very limited permissions, maybe just being able to read the files; sometimes it's being able to go in and make all the changes. You can go through all the different permission settings on GitHub to see what you can do. But you'll be able to make changes so that people can have access to your repository, and then you as a team can start working together on the same code.
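Here is a minimal sketch of that fork-and-clone setup from the command line; the GitHub URLs and account names are hypothetical:

  # clone your fork so you have the full repository locally
  git clone https://github.com/your-user/hello-world.git
  cd hello-world
  # track the original project so you can pull its updates later
  git remote add upstream https://github.com/original-owner/hello-world.git
  git fetch upstream
  git merge upstream/master   # fold the original project's changes into your copy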
Let's step through branching in Git. Suppose you're working on an application but you want to add in a new feature; this is very typical within a DevOps environment. To do that, you can create a new branch and build the new feature on that branch. So here you have your main application on what's known as the master branch, and then you can create a sub-branch that runs in parallel, which has your feature. You can then develop your feature and merge it back into the master branch at a later point in time. Now, the benefit you have here is that by default we're all working on the master branch, so we always have the latest code. The circles that we have here on the screen show the various commits that have been made, so that we can keep track of the master branch and then the branches that have come off it, which have the new features, and there can be many branches in Git. Git keeps the new features you're working on in separate branches until you're ready to merge them back in with the main branch. So let's talk a little bit about that merge process. You're starting with the master branch, which is the blue line here, and then here we have a separate parallel branch which has the new features. If we look at this process, the feature branch, from its base commit onward, is what's going to merge back into the master branch. And it has to be said, there can be many divergent branches, but eventually you want to have everything merge back into the master branch. Let's step through git rebase. Again, we have a similar situation where we have a branch being worked on in parallel to the master branch, and we want to do a git rebase. We're at stage C, and what we've decided is that we want to reset the project, so that everything from here on out along the master branch is the standard product. This means that any work done in parallel as a separate branch will be adding in new features along this new rebased environment. Now, a benefit you get from going through the rebase process is that you're reducing the amount of storage space that's required when you have so many branches; it's a great way to reduce the total footprint of your entire project. So git rebase is the process of combining a sequence of commits to form a new base commit, and the prime reason for rebasing is to maintain a linear project history. When you rebase, you unplug a branch and replug it in on the tip of another branch, and usually you do that on the master branch, and that will then become the new master branch. The goal of rebasing is to take all the commits from a feature branch and put them together on a single master branch. It makes the project itself much easier to manage. Let's talk a little bit about pull from remote. Suppose there are two developers working together on an application. The concept of having a remote repository means the two developers will be checking their code into a remote repository that becomes a centralized location for them to store their code. It enables them to stay updated on the recent changes to the repository, because they'll be able to pull the latest changes from that remote repository, ensuring that as developers they're always working on the latest code. So you can pull any changes that have been made to your remote repository into your local repository. The command to do that is written here, and we'll go through a demo of how to run that command in a little bit. The good news is, if there are no changes, you'll get a notification saying that you're already up to date, and if there is a change, it will merge those changes into your local repository and you'll get a list of the changes that have been made remotely.
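A hedged command sketch of those three ideas, using a hypothetical branch name and keeping the master naming used in this walkthrough:

  git checkout -b new-feature   # create and switch to a feature branch
  # ...edit files and commit your work on the branch...
  git checkout master
  git merge new-feature         # option 1: merge the feature into master
  # option 2, instead of merging: replay the feature on the tip of master
  git checkout new-feature
  git rebase master
  # and to stay current with the team's remote changes at any time:
  git pull origin master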
The good news is, if there are no changes, you'll get a notification saying that you're already up to date, and if there is a change, it will merge those changes into your local repository and give you a list of the changes that have been made remotely. So let's step through some of the commands that we have in Git. git init initializes a local Git repository on your hard drive. git add adds one or more files to your staging area. git commit -m with a commit message commits your staged changes to HEAD. git status checks the status of your current repository and lists the files you have changed. git log provides a list of all the commits made on your current branch. git diff views the changes you've made to a file, so you can actually see the differences between two versions side by side. git push origin followed by the name of your branch will push that branch to the remote repository so that others can use it, and this is what you would do at the end of your project. git config --global user.name will tell Git who you are by configuring the author name; we'll go through that in a moment. git config --global user.email tells Git the author's email ID. git clone creates a Git repository copy from a remote source. git remote add origin followed by the server address connects the local repository to the remote server and adds that server so you can push to it. git branch followed by a branch name will create a new branch for a new feature you may be working on. git checkout followed by a branch name lets you switch from one branch to another. git merge followed by a branch name will merge that branch into the active branch, so if you're working on a new feature you can then merge it into the main branch. And git rebase will reapply commits on top of another base tip. These are just some of the popular Git commands; there are more, but you can certainly dig into those as you work with Git. So let's go ahead and run a demo using Git. We are going to do a demo using Git on our local machine and GitHub as the remote repository. For this to work I'm going to be using a couple of tools. First, I'll have the deck open, as we've been using up to this point. Second, I'm going to have my terminal window available, so let me bring that over so you can see it. The terminal window is running Git Bash as the software in the background, which you'll need to download and install; you can run Git Bash locally on your Windows computer as well. In addition, I'll also have the GitHub repository that we're using for SimplyLearn already set up and ready to go. All right, so let's get started. The first thing we want to do is create a local repository, so let's go ahead and do exactly that. The local repository is going to reside in my development folder on my local computer, and for me to do that I need to create a directory in that folder. So I'm going to change directory so that I'm in the development folder before I make the new folder. Now I'm in the development directory, and I'm going to create a new folder, and that's gone ahead and created a new folder called Hello World. I'm going to move my cursor so that I'm in the Hello World folder, and now that I'm there, I can initialize this folder as a Git repository.
So I'm going to use the git init command, and it's gone ahead and initialized that folder. Let's see what's happened. Here I have the Hello World folder that I created, and you'll now see that we have a hidden folder in there called .git. If we expand that, we can see all of the different subfolders the Git repository creates. Let's move that over a little so we can see the rest of the work. And if we check on our folder here, we'll see this is Users/Matthew/development/Hello World/.git, and that matches up with the hidden folder here. Now we're going to create a file called readme.txt in our folder. So here is our Hello World folder, and using my text editor, which happens to be Sublime, I'm going to create a file containing the text hello world, and I'm going to call it readme.txt. If I go to my Hello World folder, you'll see the readme.txt file is actually in the folder. What's interesting is that if I run the git status command, it shows me that this file has not yet been added to the commits for this project. So even though the file is in the folder, that doesn't mean it's part of the project. For us to commit the file, we have to go into our terminal window, where we can use git status to see the files we have there. So let's run the git status command, and it tells us that this file has not been committed. You can use this in any folder to see which files and subfolders haven't been committed. What we can now do is add the readme file. So we're going to say git add readme.txt, and that adds the file into our main project. We then want to commit the file into the main repository's history, and to do that we'll use the git commit command with a message; this one will be "first commit", and it has committed that to the project. What's interesting is we can now go back into the readme file and change it. So we can make it say hello git, Git is a very popular version control solution, and save that. Now we can go and see whether we have made differences to the readme text. To do that we'll use the diff command for Git. So we run git diff, and it shows us two versions: the first is what the original text was, which is hello world, and then what we have afterwards, in green, is the new text which has replaced the original.
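Pulling the steps of the demo so far together, a rough sketch of the session looks like this; the folder name, file contents, and commit message follow the demo, with echo standing in for the text editor:

    # create and initialize the local repository
    $ mkdir Hello-World
    $ cd Hello-World
    $ git init
    # create readme.txt, then check what Git knows about it
    $ echo "hello world" > readme.txt
    $ git status            # shows readme.txt as untracked
    # stage and commit the file
    $ git add readme.txt
    $ git commit -m "first commit"
    # edit readme.txt, then compare against the last commit
    $ git diff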
So what we're going to do next is create an account on GitHub; we already have one. We're going to match the account from GitHub with our local account, and to do that we're going to say git config --global user.name and put in the username we use for GitHub; in this instance we're using the simplylearn-github account name. Under the GitHub account you can then create a new repository; in this instance we called the repository hello-world. What we want to do is connect the local repository with the remote hello-world.git repository, and we do that by using the git remote add origin command. So let's type that in, and let me open this up so we can see the whole thing: git remote add origin https://github.com/simplylearn-github/hello-world.git, and you have to get this typed in correctly when you're typing in the location. That creates the connection to your hello-world repository. Now we want to push the files to the remote location using the git push command: git push origin master. So we're going to connect to our remote GitHub; I'm just going to bring up my terminal window again. Let's run git remote add origin and connect to the remote location github.com/simplylearn-github/hello-world.git. Oh, we have actually already connected, so we're connected to that successfully. And now we're going to push the master branch: git push origin master. Everything is connected and successful, and if we go out to GitHub now, we can see that our file was updated just a few minutes ago. So what we can do now is fork a project from GitHub and clone it locally. We're going to use the fork tool that's available on GitHub; let me show you where that is located. Here is our forking tool; it's actually changed recently with a new UI. Once complete, we'll be able to pull a copy of the fork to our account using the fork's new HTTPS URL address. So let's go ahead and create a fork of our project. To do that, when you go into your project you'll see the fork options in the top right-hand corner of the screen. Right now I'm logged in with the default primary account for this project, so I can't fork the project, as I'm working on the main branch. However, if I come in with a separate ID, and here I am with a different ID, so I'm pretending I'm somebody else, I can select the fork option and create a fork of this project. This takes just a few seconds, and there we are, we have created the fork. You then want to select clone or download, and selecting that gives me the web address; I can show you what that looks like in my text editor. So I'm going to copy that, and I can clone the forked project locally. I can change the directory, create a new directory to put my files in, and paste that content into it, so I can now have multiple versions of the same code on my computer. I can then go into the forked content and create a copy of the code we've just forked; that's a clone, and we can create a new folder to put the work in. For whatever reason, we could call this folder patchwork, and that could be, say, a new feature. Then we paste in the URL of the repository that has the forked work in it. At this point we've pulled in and created a clone of the original content, and this allows us to fork out all of the work for our project onto our computer, so we can develop our work separately. So now we can create a branch of the fork we've pulled onto our computer, so we can write our own code that runs on that separate branch, and we want to check out the branch and then push the origin branch down to our computer.
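Condensed into commands, connecting to the remote and then cloning a fork looks roughly like this; the first URL follows the demo, while another-account is a placeholder for the second identity used to fork:

    # connect the local repository to the remote and push the master branch
    $ git remote add origin https://github.com/simplylearn-github/hello-world.git
    $ git push origin master
    # clone a fork of the project into a separate local folder called patchwork
    $ git clone https://github.com/another-account/hello-world.git patchwork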
This will give us the opportunity to add our collaborators. So we can go over to GitHub and add in our collaborators, and we'll do that under Settings, selecting Collaborators. Here we can see the different collaborators that have been added to the project, and you can request people to be added via their GitHub name, their email address, or their full name. One of the things you want to do is ensure that you're always keeping the code you're working on fully up to date by pulling in all the changes from your collaborators. You can create a new branch, make changes, and merge them into the master branch. To do that, we would create a folder, which in this instance will be called test; we would then move into the test folder and initialize it. So let's go ahead and do that. We're going to first change to our root folder, go to development, create a new folder called test, move into the test folder, and initialize it. Then we're going to move some files into that test folder; we'll call this one test1, then do File, Save As, and this one's going to be test2. Now we're going to commit those files: git add, using the dot to pull in all files, and then git commit -m "committed". Let me make sure I'm in the right folder here; I don't think I was. Now that I'm in the correct folder, let's go ahead and run git commit, and it's gone ahead and added those files. We can see the two files that were created have been added into the master branch. We can now create a new branch, and we'll call this one test-branch with git branch test-branch. Let's create a third file to go into that folder; this is file three, so we do File, Save As, call this one test3.txt, and add that file with git add test3.txt. Then we're going to move from the master branch to the test branch with git checkout test-branch, and it has switched to the test branch. We can list out all of the files that are in that branch now, and we want to merge the files into one area. So let's do git merge test-branch. Well, we've already updated everything, so that's good; otherwise it would tell us what we would be merging, and now all the files are merged successfully into the master branch. There we go, all merged together, fantastic. What we're going to do now is move from the master branch to the test branch, so git checkout test-branch, and we can modify the test3 file: pull that file up, modify it, and then commit the file back in. We've been able to commit the file with one change, and we can see it's the text change that was made. We can now go through the process of checking the file back in, switching back to the master branch, and ensuring that everything is in sync correctly.
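As a recap, here is a rough sketch of that multi-file branch-and-merge sequence, mirroring the folder, file, and branch names from the demo, including its slightly unusual ordering:

    # create a local repository with a couple of files in it
    $ mkdir test && cd test
    $ git init
    $ git add .
    $ git commit -m "committed"
    # create a branch and add a third file
    $ git branch test-branch
    $ git add test3.txt
    $ git checkout test-branch
    $ ls                      # list the files on this branch
    $ git merge test-branch   # reports already up to date, as in the demo
    # modify a file on the branch and commit the change
    $ git commit -am "modified test3"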
We may at one point want to rebase all of the work. It's kind of a hard thing to want to do, but it will allow you to manage changes in the future. So let's switch back to our test branch, which I think we're actually on, and we're going to create two more files. Let's go to our folder here, copy those, and rename them test4 and test5. So we now have additional files, and we're going to add those into the branch we're working on. We're going to run git add -A, and we're going to commit those files with git commit -a -m "adding two new files", and it has added in the two new files. So we have all of our files now; we can list them out, and we have all the files that are in the branch. We'll switch then to our master branch. We want to rebase onto the master, so we run git rebase master, and that gives us the message that everything is now completely up to date. We can run git checkout master to switch to the master branch. This allows us to continue through, rebase the test branch, and then list all the files so they're all in the same area. So we run git rebase test-branch, and now we can list, and there we have all of our files listed correctly. If you are here, you're probably wondering how to become a DevOps engineer. Well, you are in the right place; today we are diving into the ultimate DevOps engineer road map. DevOps is all about blending development and operations to streamline and speed up the entire software development process. DevOps engineers are in hot demand, and the salaries are pretty amazing too: depending on your experience and where you are, you could be making anywhere from $90,000 to over $150,000 a year. Craving a career upgrade? Subscribe, like, and comment below, and dive into the link in the description to fast-track your ambitions. Whether you're making a switch or aiming higher, SimplyLearn has your back, so stick around. In this video we'll walk you through the ultimate road map to becoming a DevOps engineer. We'll cover everything you need to know, step by step, to help you succeed in this fantastic field. These are the contents that you must learn to become a DevOps engineer, so you'd better take a screenshot of this. Also, if you're looking forward to making a career in DevOps, definitely check out SimplyLearn's professional certificate program in cloud computing and DevOps. This comprehensive course offers in-depth learning with a thorough understanding of cloud computing principles and DevOps practices, guided by expert instructors with real-world experience. You'll engage in hands-on projects and real-world scenarios, building a robust portfolio that showcases your skills. Plus, the program is designed to help you gain industry-recognized certifications, making you a standout candidate in the job market. Don't miss this opportunity to advance your career and stay ahead in the ever-evolving tech landscape; check out the course link in the description box and pinned comments. So let's get started. First up we have the software development life cycle, or SDLC. The software development life cycle is a process used by software developers to design, develop, and test high-quality software. It consists of several stages, and each stage helps ensure the software is reliable, functional, and meets user needs. Understanding SDLC is crucial because it gives you a holistic view of software development; it's like knowing the recipe before you start cooking. The different phases of SDLC are requirements gathering, which is understanding what the stakeholders need; design, which is planning the solution's architecture; implementation, which is writing the code; testing, which is ensuring the code works as intended; deployment, which is releasing the software to users; and finally maintenance, which is updating and fixing the software as needed. Each phase has its own importance, and knowing these phases helps you understand how DevOps practices integrate to make the development and deployment processes more efficient and reliable.
So next let's talk about Linux. Linux is a type of operating system, like Windows or macOS, that runs on many servers, computers, and devices around the world. It's known for being stable, secure, and free to use. But why Linux? Because it's the backbone of most server environments you'll work with. Here are the essentials you should focus on: command-line operations; shell scripting, where you learn Bash to automate repetitive tasks; system administration, like understanding how to manage users, permissions, and processes; and package management. Linux is used everywhere in the server world, and knowing it well will help you fix problems, automate tasks, and manage servers easily. Now the next one is learning a scripting or programming language. Knowing a scripting language like Python, Ruby, or even Bash is essential. These languages help you automate tasks, write scripts, and manage infrastructure. Here's why you should learn scripting: automation, writing scripts to automate repetitive tasks such as backups, deployments, and monitoring; configuration management, since tools like Ansible use Python for automation; and infrastructure management, using scripts to manage cloud resources, databases, and more. So choose a language and start building small projects to get hands-on experience; I highly recommend Python due to its simplicity and extensive libraries. Now Git is next on our list. Git is the most popular version control system out there. It allows you to track changes, collaborate with others, and maintain a history of your code. Key concepts to learn include repositories, how to create and manage them; commits, recording changes to the repository; branches, working on different features simultaneously; and merging, integrating changes from different branches. Familiarize yourself with platforms like GitHub, GitLab, and Bitbucket; these platforms facilitate collaboration and code management in a team environment. Now, networking and security are critical components of a DevOps engineer's skill set. You'll need to understand how data flows through networks, how to set up firewalls, and how to secure your applications. So focus on these areas: basic networking, understanding IP addresses, DNS, HTTP, HTTPS, and TCP/IP protocols; network security, learning about firewalls, VPNs, and encryption techniques; and application security, implementing security best practices such as input validation, authentication, and authorization. This knowledge will help you build secure and reliable systems, ensuring data integrity and confidentiality. Now let's move on to cloud providers. AWS, Azure, and Google Cloud Platform are the big players here, so start with one and learn the basics. Number one, compute services, like EC2 in AWS, virtual machines in Azure, and Compute Engine in GCP. Then come storage services, like S3 in AWS, Blob Storage in Azure, and Cloud Storage in GCP. And then database services, like RDS in AWS, SQL Database in Azure, and Cloud SQL in GCP. Understanding cloud services is crucial, as most modern applications run on cloud infrastructure. Also learn about IAM, which is identity and access management, for security, and explore the cloud-specific services and tools offered by these providers. Next you need infrastructure as code, or IaC, which is a game-changer. Infrastructure as code is a way to set up and manage computer resources, like servers and networks, using code instead of doing it by hand. You write scripts that describe what you need, and then tools like Terraform or Ansible will read these scripts and set everything up for you automatically.
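To give a feel for that workflow, this is roughly what driving Terraform from the command line looks like once you have written a configuration file; the configuration itself is omitted here, and this is just the typical plan-and-apply loop rather than a complete setup:

    # download the providers the configuration needs
    $ terraform init
    # preview what would be created, changed, or destroyed
    $ terraform plan
    # create the resources described in the configuration
    $ terraform apply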
This makes it easy to create, update, and keep everything consistent every time, and it means you can version control your infrastructure just like your application code. The key benefits include consistency, which ensures that environments are identical; scalability, since it easily replicates environments across multiple regions; and version control, which tracks changes to your infrastructure over time. You can start by writing simple Terraform scripts to provision resources, or use Ansible to automate configuration management. Next up we have microservices and containers. Microservices architecture allows you to break down your application into smaller, independent services, and containers, with tools like Docker, package these services and their dependencies, ensuring they run consistently across environments. So you should definitely focus on microservices, understanding the principles of designing and building them; Docker, learning how to create Dockerfiles, build images, and run containers; and container registries, using Docker Hub or private registries to store and share images. These concepts will help you build scalable and efficient applications that are easy to deploy and manage. Now, following containers, we have container orchestration. Kubernetes is the go-to tool here; it manages the deployment, scaling, and operations of containerized applications. The key components of Kubernetes you need to learn are, number one, pods, the smallest deployable units, which can contain one or more containers; next, services, networking components that define a set of pods and a policy by which to access them; and then deployments, which are controllers that manage the desired state of pods. Learning Kubernetes can be challenging, but it's incredibly powerful; it automates many operational tasks, allowing you to focus on building great applications. Moving on to the next topic, continuous integration and continuous deployment, or CI/CD, are at the heart of DevOps. Tools like Jenkins, CircleCI, and GitLab CI help automate the process of testing and deploying code. Here's why CI/CD is crucial: continuous integration automatically tests your code to catch issues earlier; continuous deployment automatically deploys your code to production, reducing time to market; and pipelines define the steps to build, test, and deploy your application. Mastering CI/CD will make your development process more efficient and reliable, allowing for faster and more frequent releases. Next, monitoring and logging. Monitoring and logging are essential for maintaining and troubleshooting your applications. Tools like Prometheus, Grafana, and the ELK stack, which is Elasticsearch, Logstash, and Kibana, provide insights into your system's performance and help you diagnose issues. You must focus on metrics, tracking performance figures like CPU, memory, and network usage; logging, collecting and analyzing log data to troubleshoot issues; and alerting, setting up alerts to notify you of potential issues before they become critical. By setting up proper monitoring and logging, you ensure your systems run smoothly and can quickly respond to any problems. Now, DevOps is not just about tools and technologies; it's also about people. Collaboration and communication are crucial, since you'll be working closely with developers, operations teams, and other stakeholders, which means you must definitely focus on communication tools: start using Slack, Microsoft Teams, or other tools for effective communication.
Then comes project management: utilize tools like Jira or Trello to manage tasks and projects. And then you must develop soft skills: develop empathy, active listening, and clear communication to work effectively in a team. Being able to convey ideas clearly and work effectively in a team is key to your success in DevOps. So finally, let's talk about leadership and strategy. As you grow in your career, you may take on more responsibilities and lead teams, so understanding the strategic aspects of DevOps, such as implementing best practices, driving cultural change, and aligning DevOps initiatives with business goals, is crucial. Focus on best practices, implementing and advocating for DevOps best practices within your team; cultural change, fostering a culture of collaboration, continuous improvement, and learning; and strategic alignment, ensuring DevOps initiatives align with business objectives and deliver value. Leadership skills will help you inspire and guide your team towards success, making a significant impact on your organization. Do you know, friends, that Kubernetes is also called K8s or kube? It is an incredibly powerful platform that helps you manage and scale applications automatically, but it can feel complex and overwhelming at the same time. Many people find Kubernetes a bit tricky when they read through the documentation, especially when they are trying to understand how all the pieces fit together to manage containers. In this video we are going to break it down for you in easy terms. We will explore the two types of nodes in Kubernetes, the master node and the worker node, and we will talk about how these nodes work together inside the cluster to manage and orchestrate your applications. So guys, without further ado, let's get started. But before that, just a quick info, guys: SimplyLearn has got a DevOps Engineer master's program. You can become a DevOps expert by joining this course, in which you are going to learn about Docker, Kubernetes, Ansible, Terraform, Prometheus, Jenkins, Microsoft Azure, and many more technologies. So guys, hurry up and join the course; the course link is mentioned in the description box. So guys, let's start by understanding first what a pod is. A pod is the smallest unit in Kubernetes. It is like a wrapper around your application, and inside a pod there's usually one or more containers. Now you'll be wondering, what is a container? A container is where your actual application runs. It includes everything that the app needs to function, like code, system libraries, and dependencies. Containers are lightweight and can be easily moved across different environments, making them very popular in modern software development. You can think of a container like a box that has your app and everything it needs to run; whether you run it on your laptop, on a cloud server, or inside a Kubernetes pod, the container will always behave the same way. Let me give you one example. Suppose you run an online e-commerce store. You have a front-end web app that the customers see, and the back end, the database that stores the product information and orders. In Kubernetes you might choose to package the front end and back end as two separate containers, and you could run both the web app and the database inside the same pod. In this case both containers, front end and back end, share the same resources, such as memory and network. This might be useful if they need to be closely coupled and always run together. Now, pods are basically responsible for managing resources for the containers inside them, like memory, CPU, and storage, and each pod runs on a node; Kubernetes decides which node will run each pod.
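As an illustration of that two-containers-in-one-pod idea, a minimal pod manifest could look something like the sketch below, applied with kubectl; the pod and image names are just placeholders for the hypothetical store's front end and database, not anything from an actual deployment:

    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: ecommerce-pod
    spec:
      containers:
      - name: frontend            # the web app the customers see
        image: example/store-frontend:1.0
      - name: backend             # the database holding products and orders
        image: example/store-db:1.0
    EOF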
Now let's understand the Kubernetes architecture. So guys, as we all know, Kubernetes is an open-source platform designed to automate the deployment, scaling, and management of containerized applications. It provides a powerful way to ensure that your applications are running efficiently, can easily scale across multiple machines, and can recover if something goes wrong. At the heart of the Kubernetes architecture are worker nodes and master nodes. These two components work together to ensure your apps are always running smoothly, and in this video we will take a closer look at each of these components and how they interact with each other. So let's understand the worker nodes first. Worker nodes are the machines, which can be either physical computers or virtual machines, where your applications actually run. Think of them as the workers of your Kubernetes cluster: they execute your app workloads and handle the tasks required to run them. Each worker node in Kubernetes runs three main processes: the container runtime, the kubelet, and kube-proxy. Let's understand each of them one by one. The first process is the container runtime. The container runtime is like the engine of your worker node. It is responsible for running your applications, which are packaged into containers. Containers are basically lightweight, standalone units that contain everything your app needs to run, including code, system libraries, and dependencies. The container runtime is software that ensures these containers are properly managed and executed on each worker node. One of the most popular container runtimes is Docker, as you can see over here. So there are two instances here: first there is my-app, which can be a front end, and then you can consider this one as the back end. So guys, you can consider these two as your two containers; this can be a front end, and this can be a back end or a database. Now there's a container runtime over here, which can be Docker in this case. So the container runtime here is Docker, and it is ensuring that the container for your web app is running on the worker node. If you have multiple applications, they will be packaged into separate containers, and the container runtime will manage them, making sure they are running as expected. The next process that we are going to discuss is called the kubelet. The kubelet is like a manager that oversees everything happening on a worker node. It talks to the master nodes, which are responsible for managing the entire cluster. The kubelet gets instructions from the master node detailing which applications or pods need to run on the node, and it ensures that these applications are running by managing the containers inside the pods. Unlike the container runtime, which is specific to managing containers, the kubelet handles the interaction between Kubernetes and the worker node. It is responsible for making sure that the right number of containers are running and that resources like CPU, memory, and storage are allocated properly to those containers. So say, for example, the master node sends a request to the kubelet saying, run two containers for the web app: one container is there for the web app and one is for the database. The kubelet will check the available resources on the worker node and ensure that the containers are up and running. It also continuously monitors the health of these containers to make sure they don't crash or run into problems.
If a container fails, the kubelet can restart it based on the policies defined in Kubernetes, ensuring that the application remains highly available. I hope you've got an idea of the kubelet now, so let's move ahead and understand kube-proxy. Think of kube-proxy as a traffic director for your Kubernetes cluster. In a distributed system like Kubernetes, your applications are running on different nodes, and kube-proxy is responsible for managing network traffic and ensuring that data is routed correctly between different services and pods. When applications need to talk to each other, kube-proxy sets up the necessary network rules and ensures that the traffic flows smoothly between the different services and nodes. It manages the internal networking of the cluster and ensures that each pod has a unique IP address. Now let's move ahead and understand the working of the master nodes. While worker nodes handle the execution of the applications, master nodes are the brain of the Kubernetes system. The master node manages the overall state of the cluster and makes decisions about which applications should run and where they should run, and it constantly monitors the cluster to ensure everything is working as expected. There are four key components that make up the master node. The first one is the API server. The API server is like the front desk of the Kubernetes control plane. It acts as the entry point for all the requests you send to Kubernetes: whether you are creating a new application, checking the status of your pods, or scaling your app, you communicate with Kubernetes through the API server. The API server handles all these requests and ensures that they are passed on to the correct components within Kubernetes. For example, let's say you want to deploy a new web application in your Kubernetes cluster. You would send a request to the API server, which would receive the request, validate it, and pass it to the appropriate component, which can be the scheduler or the controller manager. Now let's move ahead and understand the second component, the scheduler. If I talk about the scheduler, guys, the scheduler is like a smart planner for the cluster. It is responsible for deciding which worker node should run a new application. When you create a new app in Kubernetes, the scheduler looks at all the available worker nodes and determines the best node for the app to run on. Based on available resources like CPU, memory, and network, the scheduler ensures that your apps are distributed efficiently across the cluster, so that no single worker node is overloaded. Now let's move ahead and understand the controller manager. The controller manager is like the quality control department of Kubernetes. It constantly monitors the state of the cluster and ensures that everything is running as it should; if something goes wrong, like a pod crashing or a node going offline, the controller manager steps in to fix it. The controller manager is responsible for ensuring that the desired state of the cluster matches the actual state. If you define that you want three replicas of an app running and one of them crashes, the controller manager will automatically create a new replica to maintain the desired state.
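That desired-state idea is easy to see from the command line. As a small sketch, assuming a cluster you can reach with kubectl and using nginx purely as a stand-in image, a deployment asks for three replicas, and the controller manager keeps that count true even if a pod dies:

    # ask for three replicas of a simple app
    $ kubectl create deployment web --image=nginx --replicas=3
    # watch the pods come up
    $ kubectl get pods
    # delete one pod; a replacement is created automatically to restore the count
    $ kubectl delete pod <one-of-the-web-pods>
    $ kubectl get pods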
Now the final component is etcd, which is also called the cluster brain. etcd is a database that stores all the data about the Kubernetes cluster. It is often referred to as the brain of the cluster because it keeps track of everything, including which apps are running, where they are running, and the overall state of the cluster. etcd is a distributed key-value store, meaning it can store data across multiple machines and ensure that it is highly available and fault tolerant. This is crucial for Kubernetes, because the entire system relies on etcd to know what the current state of the cluster is. For example, if you want to deploy a new app, Kubernetes stores information about the app, like its configuration, location, and state, in etcd, and if something happens to the cluster, Kubernetes can recover the current state from etcd. Now let us look at an example of setting up a cluster. Now that we understand how worker nodes and master nodes work, let us go through a simple example of a Kubernetes cluster setup. In this example you have a basic cluster with two master nodes and four worker nodes running on it, and let us say the pods contain a web app and a database. You start by creating the pods, and each pod contains one or more microservices for your web app. Then the scheduler steps in: once you submit the request to Kubernetes through the API server, the scheduler looks at the available worker nodes and assigns both pods to your worker nodes. Then comes the kubelet, which manages the pods. The kubelet on the worker nodes receives the instruction from the master node to run the two pods; it starts a container inside each pod using Docker or another container runtime and ensures they are running smoothly. Then we have kube-proxy, which handles the communication. The web app pod needs to communicate with the database pod, so kube-proxy sets up the network routes and ensures that the two applications can exchange data securely and efficiently. Then we have the controller manager, which ensures stability: if one of the pods crashes or fails to start, the controller manager detects the issue and creates a new instance of the pod, ensuring that both your web app and the database stay online. And finally we have etcd, which keeps track of everything: all the information about the state of the cluster, including the running pods, their location, and their status, is stored in etcd. This ensures that the cluster can recover from any issues and always knows what is happening. So this was a simple example illustrating a cluster setup. Kubernetes is a powerful platform for managing containerized applications across a cluster of machines, and by understanding the roles of worker nodes and master nodes, you can see how Kubernetes automates the deployment, scaling, and management of your apps. Hello, and in this video we're going to cover a common conversation, which is Kubernetes versus Docker. But before we jump into that, I want you to hit the subscribe button so you get notified about new content as it gets made available, and if you hit the notification button, that notification will pop up on your desktop as a video is published from SimplyLearn. In addition, if you have any questions on the topic, please post them in the comments below; we read them and we reply to them as often as we can. So with that said, let's jump into Kubernetes versus Docker. Let's go through a couple of scenarios, one for Kubernetes and then one for Docker, and we can go through and understand the problems specific companies have actually had and how they were able to use the two different tools to solve them. Our first one is with Bose. Bose had a large catalog of products that kept growing, and their infrastructure had to change. The way they looked at that was by establishing two primary goals, to allow their product groups to more easily catch up to the scale of their business.
So after going through a number of solutions, they ended up with a solution of having Kubernetes running their IoT platform as a service inside Amazon's AWS cloud. And what you'll see with both of these products is that they're very cloud friendly. Here we have Bose and Kubernetes working together with AWS to scale up and meet the demands of their product catalog. The result is that they were able to increase the number of non-production deployments significantly by taking their services from being large, bulky services down to small microservices, handling as many as 1,250-plus deployments every year. An incredible amount of time and value has been opened up through the use of Kubernetes. Now let's have a look at Docker and a similar problem. This one is with PayPal, and PayPal processes something in the region of over 200 payments per second across all of their products; and PayPal doesn't just have PayPal, they have Braintree and Venmo. The challenge PayPal was really facing was that they had different architectures, which resulted in different maintenance cycles, different deployment times, and overall complexity, from a decades-old architecture with PayPal through to a modern architecture with Venmo. Through the use of Docker, PayPal was able to unify application delivery and centralize the management of all of the containers with one existing group. The net result is that PayPal was able to migrate over 700 applications into Docker Enterprise, which consists of over 200,000 containers. This ultimately opened up a 50% increase in availability and additional time for building, testing, and deploying applications; just a huge win for PayPal. Now let's dig into Kubernetes and Docker. Kubernetes is an open-source platform, and it's designed for maintaining a large number of containers. And what you're going to find is that the argument of Kubernetes versus Docker isn't a real argument; it's Kubernetes and Docker working together. Kubernetes is able to manage the infrastructure of a containerized environment, and Docker is the number one container management solution. With Docker you're able to automate the deployment of your applications, keep them in a very lightweight environment, and create a nice, consistent experience, so that your developers are working in the same containers that are then also pushed out to production. With Docker you're able to manage multiple containers running on the same hardware much more efficiently than you can in a VM environment. The productivity around Docker is extremely high, you're able to keep your applications very isolated, and the configuration for Docker is really quick and easy; you can be up and running in minutes with Docker once you have it installed on your development machine or inside your DevOps environment. So if we look at the deployment differences between the two, Kubernetes is really designed around a combination of pods and services in its deployment, whereas with Docker it's about deploying services in containers. The difference here is that Kubernetes is going to manage the entire environment, and that environment consists of pods; inside a pod you're going to have all of the containers that you're working on, and those containers control the services that actually power the applications that are being deployed.
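Earlier I mentioned you can be up and running with Docker in minutes; as a minimal sketch, assuming a project folder that already contains a Dockerfile, and with myapp as a placeholder image name, that looks like this:

    # build an image from the Dockerfile in the current folder
    $ docker build -t myapp .
    # run it in the background, mapping port 8080 on the host to port 80 in the container
    $ docker run -d -p 8080:80 myapp
    # confirm the container is up
    $ docker ps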
Kubernetes is by default an autoscaling solution; it has autoscaling turned on and always available, whereas Docker does not, and that's not surprising, because Docker is a tool for building out solutions, whereas Kubernetes is about managing your infrastructure. Kubernetes is going to run health checks on the liveness and readiness of your entire environment, so not just one container but tens of thousands of containers, whereas Docker is going to limit the health checks to the services it's managing within its own containers. Now, I'm not going to kid you: Kubernetes is quite hard to set up. Of the tools you're going to be using in your DevOps environment, it's not an easy setup, and for this reason you want to really take advantage of the services within Azure and other similar cloud environments, where they will actually do the setup for you. Docker, in contrast, is really easy to set up; as I mentioned earlier, you can be up and running in a few minutes. As you would expect, the fault tolerance within Kubernetes is very high, and this is by design, because the architecture of Kubernetes is built on the same architecture that Google uses for managing its entire cloud infrastructure. In contrast, Docker has lower fault tolerance, but that's because it's just managing the services within its own containers. What you'll find is that most public cloud providers will provide support for both Kubernetes and Docker. Here we've highlighted Microsoft Azure because they were very quick to jump on and support Kubernetes, but the reality is that today Google, Amazon, and many other providers offer first-level support for Kubernetes; it's just become extremely popular in a very short time frame. The list of companies using both Kubernetes and Docker is vast, and every single day there are more companies using them; you should be able to look and see whether you can add your own company to this list. In today's rapidly evolving digital landscape, the importance of mastering cloud computing cannot be overstated. To thrive in modern IT environments, professionals must possess a robust set of crucial skills. These include managing and maintaining infrastructures that are increasingly complex and distributed, ensuring robust security measures to protect sensitive data against emerging threats, optimizing cost to make the most of IT investments, and mastering effective deployment strategies that ensure scalability and reliability across various platforms. At SimplyLearn we understand that these skills form the backbone of successful cloud computing practices. With over a decade of leadership in cloud solutions, our company has been at the forefront of providing top-tier education and services to a wide range of clients. We are committed to making high-level cloud skills accessible to everyone, from aspiring IT professionals to seasoned experts. That's why our courses are designed not just to inform but to transform your approach to cloud technology. Whether you are just starting out or looking to deepen your expertise, we provide the tools and training you need to navigate and excel within the cloud domain. With our guidance you will not only keep pace with current technologies but also stay ahead of the curve in this dynamic field. So let's start with the courses. Number one on our list is the post-graduate program in cloud computing, a comprehensive course designed in collaboration with Caltech CTME. Let's explore what makes this program a standout choice.
This post-graduate program is your gateway to becoming a cloud expert. It's structured to take you through a deep dive into the world of cloud computing, covering crucial areas from basic concepts to advanced applications. By joining this program, you're setting yourself up for success in a field that's at the forefront of technological advancement; it's designed to arm you with the skills needed to design, implement, and manage cloud solutions that are innovative and efficient. Moreover, graduates of this program find themselves well prepared for competitive roles worldwide, such as cloud architects, cloud project managers, or cloud service developers, working for top tech companies across the globe. You will gain mastery of essential skills like managing cloud infrastructures, cloud security, data compliance, and disaster recovery planning. Completing this program earns you a post-graduate certificate from Caltech CTME, recognized globally and respected in the tech industry. And this course offers hands-on experience with leading tools such as AWS, Microsoft Azure, Google Cloud, Docker, and Kubernetes; you will tackle real-world projects that challenge you to apply your learning to build scalable, secure cloud architectures and manage extensive cloud deployments. If you're ready to start your journey in cloud computing with a top-rated program, visit the description box or pinned comment to enroll. Next on our list, at number two, is the cloud solutions architect master's program. Let's delve into why this course is crucial for those aspiring to become cloud architects. This master's program is designed to turn you into a solutions architect capable of designing and implementing robust and scalable cloud solutions, and it ensures you gain comprehensive knowledge and skills. Choosing this program means selecting a path to mastering cloud architecture; it's perfect for IT professionals who want to excel in creating high-performing cloud solutions that meet critical business needs. This program opens doors to roles like cloud solutions architect and enterprise architect, among others; graduates often step into significant positions in multinational corporations and tech firms around the world, where they strategize and oversee cloud infrastructure deployments. Throughout this program, you will master skills in cloud deployment, migration, and infrastructure management, and you will also learn about securing cloud environments and optimizing them for cost and performance. Completing this course will provide you with a holistic view of cloud architecture, backed by a master's certification recognized in the industry. You'll get hands-on experience with essential tools used by cloud architects, including AWS, Microsoft Azure, and Google Cloud Platform, and the course also includes real-life projects that challenge you to solve problems and design solutions that are not only efficient but also scalable and secure. If designing top-tier cloud solutions is your goal, then the cloud solutions architect master's program is your stepping stone; visit the description box or pinned comment to check out the course link. Moving on to the third course in our series, we have the AWS cloud architect certification training. This program is essential for those looking to specialize in Amazon Web Services, the leading cloud services platform. This certification course is tailored to develop your skills in designing and managing scalable, reliable applications on AWS. It's ideal for solutions architects, programmers, and anyone interested in building robust cloud solutions using AWS technology.
Enrolling in this course will elevate your technical understanding and capabilities, preparing you to lead cloud initiatives using AWS's powerful platform. You will gain in-depth knowledge of AWS architectural principles and services, learn to design and deploy scalable systems, and understand elasticity and scalability concepts. This course culminates in earning an AWS certification, adding significant value to your professional credentials. Practical hands-on labs and project work will let you work directly with AWS technologies, ensuring you can apply what you learn immediately in any cloud environment. So are you ready to become an AWS cloud architect? Sign up for this course today to start your specialized training, with the link mentioned in the description box and pinned comment. Next up, at number four in our series, is the Azure cloud architect certification training. This course is designed for those who aim to excel in designing and implementing Microsoft Azure solutions. Azure is one of the leading cloud platforms, and this training equips you to master its complex systems and services, preparing you to tackle real-world challenges with confidence. With this program you will transform into a skilled cloud architect capable of managing Azure's extensive features and services; it's an ideal path for IT professionals looking to specialize in Azure to advance their careers. The course covers a range of key areas, including Azure administration, Azure networking, security, and identity management. You will also prepare for the Microsoft Certified Azure Architect Technologies exam, a highly respected credential in the industry. Moreover, hands-on labs and projects throughout the course ensure you gain practical experience with Azure; this training includes simulations and real-life scenarios to provide you with the skills needed to succeed in any Azure environment. So are you ready to harness the power of Microsoft Azure? Enroll in our Azure cloud certification training today with the link in the description box and pinned comment. Rounding out our top five cloud computing courses, at number five we have the Azure DevOps solution expert master's program. This advanced training is tailored for those who wish to blend cloud technology with DevOps practices using the Microsoft Azure platform. This master's program is designed to empower IT professionals with the skills to implement DevOps practices effectively in Azure environments. It's perfect for those aiming to specialize in building, testing, and maintaining cloud solutions that are efficient and scalable by integrating DevOps with cloud innovations. This course ensures you're adept at speeding up IT operations and enhancing collaboration across teams, an essential skill set for increasing business agility and IT efficiency. The program covers a comprehensive range of topics, including continuous integration and delivery (CI/CD), infrastructure as code, and monitoring and feedback mechanisms, and graduates are well prepared to lead DevOps initiatives and handle complex cloud infrastructures. Moreover, you'll work with popular tools like Jenkins, Docker, Ansible, and Kubernetes, alongside Azure-specific technologies, and real-world projects are integrated throughout the course to provide hands-on experience and insights into actual DevOps challenges. If you are ready to advance your career by mastering Azure and DevOps, enroll in our Azure DevOps solution expert master's program today. Check out the link in the pinned comment and description box.
So thank you for joining us as we explored these top five cloud computing courses. Each program is designed to not only meet but exceed the demands of today's digital landscape, preparing you for a future where cloud technology is ubiquitous. Jenkins is the powerhouse behind modern software development, streamlining the entire build and deployment process. In this comprehensive course we will unlock the potential of Jenkins, teaching you how to automate tasks, integrate diverse tools, and orchestrate the software delivery pipeline like a pro. From setting up Jenkins pipelines to managing configurations and scaling for large projects, we will cover it all. Whether you are a seasoned developer looking to boost productivity or a beginner eager to dive into DevOps, this course will empower you to harness the full potential of Jenkins for efficient and error-free software development. If these are the type of videos you'd like to watch, then hit that subscribe button and the bell icon to get notified about new videos. You might wonder what it takes to become an expert DevOps engineer. If you are a professional with at least one year of domain experience and are looking for online training and certification from prestigious universities, in collaboration with leading experts, then search no more: the post-graduate program in DevOps by Caltech, offered by SimplyLearn in collaboration with IBM, should be the right choice for you. For more details head straight to our homepage and search for the DevOps post-graduate program, or simply click on the link in the description box below. With that in mind, over to our training experts. Jenkins is a web application that is written in Java, and there are various ways in which you can use and install Jenkins. I have listed the three popular mechanisms by which Jenkins is usually installed on any system. The topmost one is as a Windows or Linux service. I have Windows, and I'm going to use this mechanism for this demo, so I would download an MSI installer that is specific to Jenkins and install the service. Whenever I install it as a service, it goes ahead and nicely installs everything required for my Jenkins, and I have a service that can be started or stopped based upon my need; the same goes for any flavor of Linux as well. One other way of running Jenkins is downloading the generic WAR file; as long as you have the JDK installed, you can launch this WAR file by opening up a command prompt, or a shell prompt if you're on a Linux box, and specifying java -jar and the name of the WAR file. It typically brings up your web application, and you can continue with your installation. The only thing is, if you want to stop using Jenkins, you just close this prompt: you do a Control-C and bring down the prompt, and your Jenkins server is down. Older versions of Jenkins were popularly run a third way, in which you already have a Java-based web server up and running, so you drop the WAR file into the root folder, or the HTTP root folder, of your web server. Jenkins would then unpack and bring up your application, and all user credentials and user administration are taken care of by the Apache or Tomcat server, the web server on which Jenkins is running. This is an older way of running it, but some people still use it, because if they don't want to maintain two servers, and they already have a Java web server which is being nicely maintained and backed up, Jenkins can run attached to it.
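For the WAR-file route, the launch just described is essentially a one-liner; a minimal sketch, assuming the JDK is on your path and jenkins.war is in the current folder:

    # start Jenkins from the generic WAR file (Ctrl+C shuts it down again)
    $ java -jar jenkins.war
    # the web UI then comes up on the default port, http://localhost:8080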
All right, so either way, it doesn't matter how you bring up your Jenkins instance; the way we're going to operate Jenkins is going to be very much the same, with subtle changes in terms of user administration if you're launching it through another web server, which then takes care of the user administration. Otherwise, all the commands and all the configuration, the way in which I'm going to run this demo, will be the same across any of these installations. Now, the prerequisites for running Jenkins: as I mentioned earlier, Jenkins is nothing but a simple web application written in Java, so all that it needs is Java, preferably JDK 1.7 or 1.8, and 2 GB of RAM is the recommended RAM for running Jenkins. Also, like with any other open-source tool set, when you install the JDK, ensure that you set the environment variable JAVA_HOME to point to the right directory. This one is specific to the JDK, but for any other open-source tool you install, there's always a preferred environment variable you've got to set that is specific to that particular tool. This is a generic thing for open-source projects, because the way open-source projects discover each other is by using these environment variables, so as a general good practice, always set these environment variables accordingly. I already have JDK 1.8 installed on my system, but in case you do not, what I would recommend is to navigate in your browser to the Oracle homepage and search for the JDK 1.8 installer. You'll have to accept the license agreement, and there are a bunch of installers you can pick from based upon the operating system you're running. I have the Windows 64-bit installer already installed and running on my system, so I will not get into the details of downloading or installing it. Let me show you what I've done with regard to my path once I installed it. If you get into the environment variables, you'll see I have set a JAVA_HOME variable: C:\Program Files\Java\jdk1.8.0, the update release. This is where my Java is located, the home directory of my JDK, and that is what I've set up here in my environment variable; if you see here, this is my JAVA_HOME. One other thing to do, in case you want to run java or javac from your command prompt, is to ensure that you also add that path to the PATH variable. If you look through it, yes, there you go: C:\Program Files\Java\jdk1.8.0\bin. With these two, I'll ensure that my Java installation is good enough. To verify that, let me open up a simple command prompt: I type java -version, and javac -version, and the compiler is on the path, java is on the path, and the environment variable specific to my Java is set correctly, so I am good to go ahead with my Jenkins installation. Now that I have my prerequisites all set for installing Jenkins, let me go ahead and download Jenkins. Let me open up a browser and search for download Jenkins. LTS is nothing but long-term support; these are all stable versions. As for the weekly releases, I would not recommend you try those unless and until you have a real need for them; long-term support is good enough.
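Going back to that verification step for a second, here is what it looks like on a Windows command prompt; the JDK path shown in the comment is just an example, and yours will match wherever you installed it:

    > java -version
    > javac -version
    # confirm JAVA_HOME points at the JDK's home directory
    > echo %JAVA_HOME%
    # prints something like C:\Program Files\Java\jdk1.8.0_xx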
Now that I have my prerequisites all set, let me go ahead and download Jenkins. I open a browser and search for 'download Jenkins'. LTS is nothing but long-term support; these are the stable versions. The weekly builds I would not recommend you try unless you have a real need for them; the long-term support version is good enough. And as I mentioned, there are many flavors of Jenkins available for download. What I want is, yes, this WAR file, the generic WAR file I was telling you about earlier, and this Windows MSI installer. Go ahead and download the MSI installer; I already have it downloaded, so let me just open it up. This downloaded Jenkins installer of mine is maybe a few months old, but it's good enough for me. Before you start the Jenkins installation, be aware of one fact: there is a variable called JENKINS_HOME. This is where Jenkins stores all its configuration data, jobs, workspaces, and everything else specific to Jenkins. By default, if you don't set it to any particular directory, an MSI installation puts everything into C:\Program Files (x86)\Jenkins; if you run the WAR file, then depending on the user ID with which you run it, a Jenkins folder gets created inside that user's home directory. So in case you want to back up your Jenkins, or you want the Jenkins installation to go into some specific directory, set this JENKINS_HOME variable accordingly before you even begin the installation. For now I don't need any of that, so let me just go ahead with the default installation. All right, so this is my Jenkins MSI installer. I don't want to make any changes to the Jenkins configuration: C:\Program Files is good for me as the destination folder, and I'm happy with all the configuration that goes with it, so I just go ahead and click install. Once the Jenkins installation gets through, it starts up, and there are some small checks to be done. By default Jenkins launches on port 8080, so let me open up localhost:8080. As part of the installation process there's a small check wherein I need to type in a hash key. A very simple hash key gets stored in a file, so I just copy it from that path; if you're running the WAR file, you'd see the key in your logs instead. This hash key is created fresh every time you do a Jenkins installation, and the installer just asks you to paste it in; if it's not correct, it'll crib about it. But this looks good, so it's going ahead.
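In case you need to dig out that hash key yourself rather than from the setup screen, it lives in a file under the Jenkins home directory; the path below assumes the default MSI install location described above:

    rem Initial admin password (assuming the default MSI install)
    type "C:\Program Files (x86)\Jenkins\secrets\initialAdminPassword"

For a WAR-based install it sits under the Jenkins folder in your user home, and it is also printed in the startup logs.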
All right, one important part during the installation: you need to install some recommended plugins. What happens is that the plugins are all related to each other, so it's like the typical RPM kind of problem where you try to install some plugin, it has a dependency that isn't installed, and you get into all those issues. To get rid of that, Jenkins recommends a bunch of plugins, so just go ahead and click 'Install recommended plugins'. You'll see a whole list of bare-essential plugins that Jenkins needs in order to run properly; Jenkins fetches and installs all of them for you as part of the installation, and it's a good combination to begin with. Mind you, at this moment Jenkins needs a lot of network bandwidth, so if your network is not so good, a few of these plugins may fail to install. These plugins are hosted on openly available or mirrored sites, and sometimes some of them may be down, so do not worry if some plugins fail; you'll get an option to retry installing them. Just ensure that eventually all, or very nearly all, of these plugins install without problems. Let me pause the video here for a minute and get back once all the plugins are installed. My plugin installation is all good; there were no failures. After that, I get to create the first admin user. This is one important point to remember: key in any username and password, but make sure you remember them, because it's very hard to get your username and password back if you forget. So I'm going to create a very simple username and password, something I can remember. The email ID looks optional, but it doesn't let me go ahead if I leave it out, so I fill that in too. I've got an admin username, a password I'll remember, and my full name; I say save and finish, and that completes my Jenkins installation. It was not that tough, was it? Now that I have Jenkins installed correctly, let me quickly walk you through some bare-minimum, first-time configuration. And let me warn you: the UI is a little hard for many people to wrap their head around, especially the Windows folks; but if you're a Java person, you know how painful it is to write UIs in Java, and you'd appreciate all the effort that has gone into this one. Bottom line, the UI is a little hard to get used to, but once you start using it you'll possibly start liking it. All right, let me get into something called Manage Jenkins. This can be viewed as the main menu for all the Jenkins configuration, and I'll get into some of the important entries. First, Configure System: this is where you put in the configuration for your complete Jenkins instance. A few things to look out for. This is the home directory, the Jenkins home, where all the configuration, all the workspaces, anything and everything regarding Jenkins is stored. System message: if you want a message on the system, type in whatever you want and it shows up at the top of the UI. Number of executors: a very important configuration. This lets Jenkins know how many jobs, how many threads, can run at any point in time; you can visualize an executor like a thread running on this instance. As a thumb rule, if you're on a single-core system, two executors should be good enough. If multiple jobs get triggered at the same time and the number of executors is less than the number of jobs that have woken up, no need to panic: the jobs get queued up and eventually Jenkins gets to running them. Just bear in mind that whenever a new job gets triggered, the CPU usage, memory usage, and disk writes on the Jenkins instance are quite high, so keep that in mind. Anyway, two executors is good for my system. Labels for my Jenkins: I don't want any of those. Usage, meaning how you want to use this Jenkins: since I only have this one primary server running, I want to use this node as much as possible, so the default is good for me.
Quiet period: each of these options has some bare-minimal help available; by clicking the question marks you get to know what each configuration is for. All of this looks good. There are also sections here for things like Docker, timestamps, the Git plugin, and SVN; email notifications from those I don't need. What I do want is this SMTP server configuration. Remember, I mentioned earlier that I want Jenkins to send out some emails, and what I've done here is configure the SMTP details of my personal email ID. If you're in an organization, you'd have some email ID set up for the Jenkins server, so you'd specify your company's SMTP server details and thereby authorize Jenkins to send out emails. But if you want to try it out like me, configure a personal Gmail ID for sending out notifications. The SMTP server would be smtp.gmail.com; I'm using SMTP authentication; I've provided my email ID and my password; I'm using SMTP port 465; and the reply-to address is the same as mine. I can send out a test email and see whether this configuration works. Note that Gmail by default does not allow just anybody to send notifications on your behalf, so you'll have to lower the security level of your Gmail ID to allow a program to send email notifications for you. I've already done that, and I'm just checking whether a test email goes through with the configuration I've set. Yes, the email configuration looks good. So this is how you configure your Gmail account if you want to go that way; if not, put in your organization's SMTP server details with a valid username and password and you should be all set.
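To summarize, the mail settings keyed in for this demo look roughly like this; the address is a placeholder, and as noted above, a personal Gmail account needs its security level lowered (or an app-specific password) before Jenkins can send through it:

    SMTP server:              smtp.gmail.com
    Use SMTP authentication:  yes
    User name:                your.name@gmail.com   (placeholder)
    Password:                 ********
    SMTP port:                465
    Reply-to address:         your.name@gmail.com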
All right, no other configuration that I'm going to change here; all of this looks good, so I come back to Manage Jenkins. One other thing I want to go over is the Global Tool Configuration. Look at it this way: Jenkins is a continuous integration server. It doesn't know what kind of code base it's going to pull in, what tool set is required, or how that code is going to be built. So you have to register all the tools required for building whatever code you're going to pull in from your source code repositories. To give an example, suppose your source code is Java. In this demo, this laptop carries all the configuration, the JDK and everything, because I'm a developer working on it; but a real continuous integration server would be a separate machine with nothing installed on it. So if I want Jenkins to build Java code, I'd need to install a JDK on it and specify the JDK location here. Since I already have the JDK installed and the JAVA_HOME environment variable set correctly, I don't need to. Git: if I want the Jenkins server to use Git, the git command-line tool is what actually runs git and connects to any Git server, so you'd need Git installed on that system and the path set accordingly. Gradle and Maven: if you have Maven builds as well, you do the same for those. Any other tool you install on your continuous integration server, you have to come in here and configure; if you don't, then when Jenkins runs it won't be able to find these tools for building your task, and it'll crib about it. That's good; I don't want to save anything here. Back to Manage Jenkins; let me see what else is required. Yes, Configure Global Security. Security is enabled, and you'll see that by default the access control is set to Jenkins' own user database. What does this mean? By default Jenkins uses the file system to store all the usernames, hashed; so as of now Jenkins is configured to use its own database. If you're running in an organization, you'd probably have some AD or LDAP server with which you'd want to control access to your Jenkins instance. In that case you'd specify your LDAP server details, the root DN, the manager DN and manager password, and so on, to connect your Jenkins instance to your LDAP or AD or whatever authentication server your organization has. Since I don't have any of those, I'm going to use Jenkins' own database; that's good enough. I'll set up some authorization methods and such once I've put in a few jobs, so for now let me not get into those details; just be aware that Jenkins can be connected to an LDAP server for authorization, or Jenkins can manage its own users, which is what's happening now. I save all this. Enough of all this configuration; let me put in a very simple job. So: New Item; a little difficult to find, but that's the one. I'll just give my job a name, 'first job', and say it's a freestyle project; that's good enough for me. Note that unless and until you choose one of these project types, the OK button does not become active, so choose the freestyle project and say OK. At a very high level you see the General, Source Code Management, Build Triggers, Build Environment, Build, and Post-build sections. As you install more plugins you'll see many more options, but for now this is what you get. So what am I doing at the moment? Putting up a very simple job; a job could be anything and everything, but I don't want a complicated one for now. I give a description, which is optional: 'This is my first Jenkins job'. I don't choose any of these other options (again, there is help available here); I don't connect it to any source code for now, I don't want any triggers for now (I'll come back to those in a while), and I don't want any build environment settings. As the build step, I just want to run a little something so that I can complete this job. Since I'm on a Windows box, I say Execute Windows batch command. And what do I want to do? Let me just echo something: hello, this is my first Jenkins job, plus the date and the time at which this job was run.
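As a sketch, that build step is a single batch line along these lines; %date% and %time% are the standard Windows batch variables:

    echo Hello, this is my first Jenkins job - run on %date% at %time%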
A very simple command that prints a message along with the date and a time; I don't want to do anything else, I want to keep this job as simple as that, so let me save it. Once I save the job, the job name comes up here, and then I need to build it. You'd also see some build history out here; nothing is there as of now, because I've just put in the job and have not run it yet. So let me try to build it now. You see a build number, a date, and a time stamp, and if I click on this, you see the console output; as simple as that. And where do all the job details go? If I navigate to this particular directory, this is the JENKINS_HOME I was mentioning earlier: all the job-related data specific to this Jenkins installation is here. All the installed plugins, and the details of each of them, can be found here too. The workspace folder is where all the jobs I've created and run get individual folders, one per job. So: one job, one quick run; that's what it looks like, pretty simple. Okay, let me do one thing: let me put up a second job. I'll say 'second job', freestyle project again. This is my second job, and with it I want to demonstrate the power of the automation server and how simple it is to automate a job on Jenkins so that it gets triggered automatically. Remember what I said earlier: at the core of Jenkins is a very powerful automation server. So what am I going to do? I'll keep everything else the same and put in a build script pretty much like before, a second job that prints the date and the time, but this one gets triggered automatically every minute. If you look here, there are build triggers: a build can be triggered in various ways. We'll get into GitHub triggering, hooks, and webhook-style triggering later on; for now, I want this job to be triggered on its own, say every minute, so 'Build periodically' is my setting. There's a bunch of help available here: for those of you who have written cron jobs on Linux boxes, you'll find this very simple; for the others, don't panic, let me just put in a very simple schedule expression that runs this job every minute. So that's one, two, three, four, five: five stars is all I'm going to put in. And Jenkins gets a little worried and asks me, do you really mean every minute? Oh yeah, I want to do this every minute. Let me save this.
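For reference, that schedule field uses the familiar five-field cron syntax (minute, hour, day of month, month, day of week), so 'every minute' is simply five stars:

    # MIN  HOUR  DOM  MONTH  DOW
    *    *     *    *      *

Jenkins also accepts an H token to spread load across machines, for example 'H/15 * * * *' for roughly every fifteen minutes.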
And how do I check whether it gets triggered every minute or not? I just don't do anything: I'll wait for a minute, and if everything goes well, Jenkins will automatically trigger my second job in a minute's time from now. This time around I'm not going to trigger anything. Look there, you see? It got triggered automatically. If I go in here: yep, the second job triggered on its own; it was triggered at 16:42, which is 4:42 PM my time. That looks good, and if everything keeps going well, this job will be triggered automatically every minute from now on. Now that I have my Jenkins up and running and a few jobs on my Jenkins instance, I need a way of controlling access to my Jenkins server. This is where I use a plugin called the role-based access plugin and create a few roles. The roles come in flavors like a global role and a project-specific role: I can define different roles and assign the users who have signed up, or the users I create, to these roles, so that each user falls into some category. This is my way of controlling access to my Jenkins instance and ensuring that people don't do something unwarranted. So, first things first, let me install a plugin for this. I get into Manage Jenkins and then Manage Plugins. It's a bit of a confusing screen in my opinion: there's Updates, Available, Installed, and Advanced. As of now we don't have the role-based plugin, so let me go to Available; it takes some time to refresh. These are the available plugins and those are the installed ones. Back on Available, I search for the role-based access plugin: I type 'role' and hit enter. There it is: Role-based Authorization Strategy, which enables user authorization using a role-based strategy; roles can be defined globally or for particular jobs or nodes, and so on. Exactly the plugin I want; I install it without a restart. Looks good so far; go back to the top of the page. Remember, Jenkins is running on a Java instance, so typically most things keep working without a restart. But as a good practice, whenever you do a big installation or big patch on your Jenkins instance, restart it; otherwise there can be a difference between what is loaded in the running system and what is on the file system, and you'd need to flush out some of those settings later. These are all very small plugins, so they'll run without problems; but if a plugin ever needs a restart, kindly go ahead and restart your Jenkins instance. For now I don't need that; the plugin is installed. So where do I see my plugin? I installed the plugin that handles user access control, so let me go into, yes, Configure Global Security, and I now see this Role-Based Strategy option showing up. It appears because of my installation of the role-based plugin, and it's what I want to enable: I already have Jenkins' own database set up for authentication, and for the authorization part, in the sense of who can do what, I enable the Role-Based Strategy. I say save. Now that the role-based access plugin is installed and enabled, I need to set it up: I'll create some roles and assign users to those roles. So let me go to Manage Jenkins.
Let me see, where is this? Configure Global Security, is that where I create my roles? Nope, not here. Yes: Manage and Assign Roles. Again, you only see these options after you install the plugin. So far I've just enabled role-based access control; now I'll create some roles for this particular Jenkins instance. First, Manage Roles. At a very high level there are global roles, project roles, and slave roles; I won't get into the details of all of these, so let me just create a global role. A role can be visualized like a group. I'll create a role called 'developer'. Typically the Jenkins instance, the CI instance, is owned and controlled by the QA folks, and the QA folks need to give some sort of limited access to developers; that's why I'm creating a role called developer, and I'm adding it at the global-role level. I say add, and the developer role appears; and for each of these permission checkboxes, if you hover over it, you see some help on what that permission covers. It may sound a little odd, but I want to give the developer very few permissions. From the administration perspective, just a read kind of role. Credentials: again, just a view kind of role; I don't want them creating any agents or anything like that. For jobs, I want them only to read: I don't want them to build, cancel, configure, or even create any job, and I'd also not give them access to the workspace; just read-only access to jobs. Run: no, nothing that would allow them to run any jobs. View: configure, yes; create, yes; delete, no; read, yes, definitely. And that's this specific role. So what am I doing? I'm creating a global role called developer with very limited permissions: this developer cannot run any agents, nor create, build, cancel, or configure jobs; at most, they can read a job that is already put up there. I save. Now, I've created a role, but I still don't have any users on the system, so let me create one. That's not here... Manage Jenkins, Manage Users. Let me create a new user. I'll call this user, yeah, 'developer one' sounds good, with some password I can remember; the full name is developer one, and an email ID, developer one at some dot-com address, something like that. So this admin is the account with which I configured and brought up the system, and developer one is a user I've just created. But I have not set any role for this user yet, so I go to Manage and Assign Roles and say Assign Roles. What I'm going to do now is find that particular user and assign them the developer role that I have already configured.
The role shows up here; I need to find the user I created and assign them to it. The user I created was developer one, so I add that user, and, since the role I created was a global role, I assign developer one to the developer global role and save my changes. Now let me check the permissions of this user by logging out of my admin account and logging back in as developer one. If you remember, this role was created with very few privileges, and there you go: I have Jenkins, but I don't see a New Item; I can't create a new job, I can't do much of anything. I see these jobs; however, I don't think I'll be able to start them, since I don't have the permissions for that. The maximum I can do is look at a job, see its console output, and things like that. So this is the limited role that was created, and I added this developer to that developer role so that developers don't get to configure any of the jobs: the Jenkins instance is owned by a QA person, and he doesn't want to give a developer any administrative rights. The rights he set out by creating a developer role apply to anybody tagged with it: any user assigned to this developer role gets the same permissions. These permissions can be fine-grained, even project-specific, but for now I've just demonstrated the high-level permissions that I set. Let me quickly log out of this user and get back in as the admin user, because I need to continue with my demo, and with the developer role I've just shown, I'd have very few privileges. One of the reasons Jenkins is so popular, as I mentioned earlier, is the abundance of plugins provided by community users, who don't charge any money for them; Jenkins has plugins for connecting to anything and everything. If you search for 'Jenkins plugins', you'll find an index of ever so many plugins, and all of these are wonderful. Whatever connector you need, whether you want to connect Jenkins to an AWS instance, to a Docker instance, or to any of those containers, there is a plugin. You can go and search: if I want to connect Jenkins to Bitbucket (Bitbucket is one of the Git servers), there are plenty of plugins available for that too. So, bottom line: Jenkins without plugins is nothing; plugins are the heart of Jenkins. To connect Jenkins with any container or any other tool set, you need the plugins. If you want to build a repository that uses Java and Maven, you need Maven and a JDK installed on your Jenkins instance. If you're looking at a .NET or Microsoft build, you need MSBuild installed on your Jenkins instance and the plugins that trigger MSBuild. If you want to listen to server-side webhooks from GitHub, you need the GitHub-specific plugins. If you want to connect Jenkins to AWS, you need those plugins. If you want to connect to a Docker instance running anywhere in the world, as long as you have a publicly reachable URL, you just need the Docker plugin installed on your Jenkins instance. SonarQube is one of the popular static code analyzers: you can build a job on Jenkins, push the code to SonarQube, have it run its analysis, and get the results back in Jenkins. All of this works so well because of the plugins.
Now, with that, let me connect our Jenkins instance to GitHub. I already have a very simple Java repository up on my GitHub account, so let me connect Jenkins to that repository and pull in the code that is put up there. This is my very simple repository, called hello java, and this is what's in it: a hello java application, a simple class file with just one line of System.out. It is already present on github.com at this location, and this would be the URL for the repository; I pick up the HTTPS URL. So what I'm going to do is connect my Jenkins instance to GitHub, provide my credentials, pull this repository from the cloud-hosted github.com down to my Jenkins instance, and then build this particular Java file. I'm keeping the source code very simple; it's just one Java file. How do I compile it? I just say javac and the name of my source file. And how do I run it? I say java and the class name. Remember, I don't need to install any plugins now, because what this needs is the Git plugin, and if you recall, that was among the recommended plugins during installation, so Git support is already on my system. Let me put up a new job here; I'll call it 'git job', let it be a freestyle project, and say OK. Now, Source Code Management: remember, in the earlier examples we did not use any source code, because we were just putting up echo-style jobs and didn't need integration with any source code system. Now let me connect it: I choose Git, which shows up because the plugin is already there. SVN, Perforce, and other source code management tools: if you need them, just install the corresponding plugins, and Jenkins connects wonderfully well to all of those source control tools. So I copy the HTTPS URL from GitHub and say: this is the URL from which to grab my source code. That sounds good, but what about the username and password? I have to specify credentials, so I enter my username and my HTTPS password for this job, save, say add, and then tell Jenkins: use these credentials to go to GitHub and pull the repository on my behalf. If at this stage Jenkins cannot find git (the git.exe), or if my credentials are wrong, somewhere down here you'd see a red message saying something is not right, and you can go fix it. For now this looks good; I'm going to grab this URL. So this step pulls the source code from GitHub; and what goes into my build step? Since this repository has just the one Java file, I say Execute Windows batch command, and the command is javac followed by the file name; that is how I build my Java code. And to run it, I just say java and the class name. Pretty simple, two steps, and they run after the repository contents are fetched from GitHub.
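Pulling that together, the repository content and the two build steps amount to something like this; the class and file names are illustrative, assuming the single-file layout described above:

    // Hello.java - the entire repository content (illustrative)
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello from Jenkins!");
        }
    }

    rem The two Windows batch build steps in the Jenkins job
    javac Hello.java
    java Hello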
javac then java: that sounds good. I save this and try to run it. If you watch the console, there's a lot going on: it executes git on my behalf, goes out to GitHub, presents my credentials, and pulls my repository (by default it pulls the master branch); then it builds the whole thing with javac and runs it with java, and there you see the output. If you want to look at the contents of the repository: if I go here, this is the workspace on my system... hang on, this is not right... okay, 'git job'. If you see here, this is my hello java file, the same program that was on my GitHub repository. So Jenkins, on our behalf, went all the way to GitHub, pulled this repository down to my local system, my Jenkins instance, compiled it, and ran the application. Now that I've integrated Jenkins successfully with GitHub for a simple Java application, let me build a little on top of it. I have a Maven-based web application up as a repository in my GitHub; this is the one I'm talking about, called mvn web app, a Maven-based repository. As we know, Maven is a very simple Java-based build tool that lets you run various targets: based on the goals you specify, it can compile, run some tests, build a WAR file, and even deploy it to some other server. For now we're going to use Maven just for building and creating a package out of this web application. It contains a bunch of things, but what matters is the index.jsp: just an HTML page that is part of this web application. From a requirements perspective, since I'm going to connect Jenkins with this Git repository: Git we already have set up, so we need only two other things. One is Maven, because Jenkins will use Maven, and for that the Jenkins box needs a Maven installation; in this case the Jenkins box is this laptop. The other is a Tomcat server; Tomcat is a very simple web server that you can download for free, and I'll show you how to quickly download and install it. All right, download Maven first. There are various ways to get it: there are binary zip files and archive files. What I've done is already download Maven, and you can see I've unzipped it here; this is the folder into which I unzipped it. As you know, Maven is another open-source build tool, so you have to set a few configurations and set up the path: after I set my path, 'mvn -version' should work, and if I echo M2_HOME, which is the environment variable for the Maven home, it is already set here. So once you unzip Maven, just set the M2_HOME variable to the directory where you unzipped it, and also add that directory's /bin to the path, because that is where the Maven executables are found.
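That Maven setup mirrors the JDK one from earlier; roughly like this, with the unzip location being illustrative:

    rem Illustrative Maven environment setup
    set M2_HOME=C:\tools\apache-maven-3.6.3
    set PATH=%PATH%;%M2_HOME%\bin

    rem Verify
    mvn -version
    echo %M2_HOME%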
All right, so that's it for Maven: since I've set the path and the environment variable, Maven runs perfectly fine on my system; I've verified it. Next up is the Tomcat server. Search for the Apache Tomcat download; 8.5 is what I have on my system, and this is where you download it from. I already have the server downloaded. Again, this doesn't need any installation: I just unzip it, and it has a bin folder and a conf folder. I have made some subtle changes in the configuration. First and foremost, Tomcat by default also runs on port 8080. Since we already have our Jenkins server running on port 8080, we cannot let Tomcat run on the same port; there would be a port clash. So I've configured Tomcat to use a different port: in the conf folder there is a server.xml, and if I open it up, this is the port; by default it would be 8080, and I've modified it to 8081. So I've changed the port on which my Tomcat server runs; that is one change. Second change: when Jenkins tries to get into my Tomcat and deploy something, it needs some authentication so that Tomcat will allow the deployment. For that I need to create a user on Tomcat and provide this user's credentials to my Jenkins instance. So I go to the tomcat-users.xml file; here I've already created a username called deployer with the password deployer, and I've added a role called manager-script. The manager-script role allows programmatic access to the Tomcat server. Using these credentials, I empower Jenkins to get into my Tomcat server and deploy my application. Only these two changes are required.
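The two Tomcat edits look roughly like this; these are snippets, not complete files, and the other connector attributes in server.xml are left as shipped:

    <!-- conf/server.xml: change the HTTP connector port from 8080 to 8081 -->
    <Connector port="8081" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />

    <!-- conf/tomcat-users.xml: a user Jenkins can deploy with -->
    <role rolename="manager-script"/>
    <user username="deployer" password="deployer" roles="manager-script"/>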
Let me just start my Tomcat server first: I get into the bin folder, open a command prompt there, and run startup.bat. It's pretty fast, just takes a few seconds; yes, there you go, the Tomcat server is up and running, now on port 8081. Let me check that it looks good: localhost:8081, and my Tomcat server is up. The user is already configured on it, so that's fine, and Maven is installed on my system, so I'm good to use Maven from Jenkins. Now I put up the job: new job, 'mvn web app', freestyle project, OK. This will be a Git repository, and the URL of my Git repository is this HTTPS URL. I'll reuse the old credentials I set up; they'll work fine, because it's the same Git user I'm connecting as. Now the change comes here: since this is a Maven repository, I'll have some Maven targets to run. The simple first target: let me run mvn package, which creates a WAR file. 'Package' is the goal: whenever I run package, it builds, it tests, and then it creates the package. That's all that's required. Let me save this and run it first, to see whether it connects properly and the WAR file gets created. Okay, wonderful: it built a WAR file, and the log shows the location where it was generated, inside the workspace; the WAR file was successfully built. Now I need to grab this WAR file and deploy it into the Tomcat server. Again, I need a small plugin for this, because I need to connect Tomcat with my Jenkins server, so let me install the plugin for container deployment. I go to Manage Plugins, Available, and type in 'container': Deploy to container, okay, that's the plugin I need, and I install it without a restart. Seems very fast... nope, sorry, it's still installing... okay, the plugin is installed. If you go to my workspace, in the target folder you'll see the web application WAR file that was built. So I need to configure this plugin to pick up this WAR file and deploy it onto the Tomcat server, using the credentials of the user I created. Let me configure this project again. All this is good; the package goal stays as it is. Now I add a post-build step: after the WAR file is built by the package goal, deploy it to a container. This option shows up only after you install the plugin. What are you supposed to specify? First, the location of the WAR: this is a pattern resolved from the project root, so **/*.war is good for me. Next, the context path: that's just the name of the application under which it gets deployed on the Tomcat server; I'll say 'mvn web app', the name of my project. Then the kind of container: the deployment is to Tomcat 8.5, because that's the server I have. Then the credentials: if you remember, I had created a deployment user on my Tomcat, username deployer, password deployer; so I add a new Jenkins credential with that username and password and use it for this deployment. And what is the URL of my Tomcat instance? Localhost on port 8081, the Tomcat server running on my system. So: take the WAR file found in this folder, use the context path 'mvn web app', use the deployer credentials, get into localhost:8081, and deploy. That is all that's required. I save this and run it. Okay: it built the WAR file successfully, it's trying to deploy it, and it looks like the deployment went through perfectly well. The context path was 'mvn web app', so if I type that in... and if I go into my Tomcat server, there's a webapps folder where you can see, from the date and time stamp, the file that just got copied, along with the exploded version of our application. So the application was built: the source code was pulled from the GitHub server, it was built locally on the Jenkins instance, and then it was pushed into a Tomcat server running on a different port, 8081.
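Pulled together, the post-build deployment settings from this job were roughly as follows; the context path simply echoes the job's name as dictated in the demo, and any URL-friendly name would do:

    WAR/EAR files:   **/*.war
    Context path:    mvn web app
    Container:       Tomcat 8.5.x
    Credentials:     deployer / deployer
    Tomcat URL:      http://localhost:8081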
Now, for this demo I'm running everything locally on my system. But suppose this Tomcat instance were running on some other server with a different IP address: all you'd have to change is the URL of the server. If you have a Tomcat server running on some other machine with a different IP, that's all good enough; the whole bundle, the WAR file built as part of this Jenkins job, gets transferred onto the other server and deployed. That's the beauty of automated deployments with Jenkins and Maven. Next: distributed builds, or the master-slave configuration, in Jenkins. As you've seen, we have just the one instance of the Jenkins server up and running all the time, and I also told you that whenever a job starts on the Jenkins server, it's a little heavy in terms of disk and CPU utilization. If you're in an organization that is heavily reliant on the Jenkins server, you don't want that server to go down. That's where you start distributing the load: you primarily have one server that acts as a placeholder, like a master, who takes in all the jobs, and based on the trigger that fired, or whichever job needs to be built, he delegates these jobs onto other machines, other slaves, and collects the results. That's a wonderful thing to have; that's use case one. Use case two: suppose your Jenkins server runs on a Windows box or a Linux box, and you need to build per operating system; you have multiple build configurations to support. Maybe you need to build Windows-based .NET projects, for which you need a Windows machine; you also have a requirement to build Linux-based systems; and you support some apps built on macOS, so you need to build on Mac machines as well. How are you going to support all of those needs? That's where the beautiful concept of master-slave, or primary and delegates, or master and agents, comes into play. Typically you have one Jenkins server, fully configured with all the proper authorizations, users, and settings, whose work is purely delegation: he listens for triggers, and based on the job coming in, he has a nice way of delegating jobs to somebody else and taking back the results. He can control lots of other systems, and those systems don't need a complete Jenkins installation: all you've got to do is run a very simple runner, a slave, which is a simple jar file run as a low-priority thread or process on those systems. With that you can have a wonderful distributed build setup. And in case one of the servers goes down, your master knows what went down and delegates the task to somebody else. That is the distributed build, or master-slave, configuration. So what I'll do in this exercise is set up a simple slave.
But since I don't have too many machines to play around with, I'll set up the slave in another folder, on another drive of my hard disk. My Jenkins is on my C drive, so I'll just use my E drive and set up a very simple slave out there. I'll show you how to provision a slave, how to connect to it, and how to delegate a job to it. Let me go back to my Jenkins master and configure it to talk to an agent. There are various ways in which this client and server can talk to each other; what I'm going to choose is something called JNLP, the Java Network Launch Protocol. Using this, the client and the server will talk to each other, and for that I need to enable the JNLP port. Let me find where that is... okay, yes: agents. By default, this JNLP agents setting is disabled; there's a small help text on it here. Since I'm going to use JNLP to have the master and the agent talk to each other, I need to enable this: instead of the default 'disabled' setting, I set the port to 'random', which enables it, and I save this configuration. So now I've made the setting on the master so that the JNLP port is opened up. Next, let me create an agent: I go to Manage Nodes. There's only one node here, the master, so let me provision a new one. This is the way you bring up a new node: you configure it on the server, Jenkins puts some security around that agent, and it tells you how to launch the agent so that it can connect to your Jenkins master. I say New Node and give it a name, 'Windows node', because both of these machines are Windows; that identifier is fine. I say it's a permanent agent, and OK. Let me just copy this name into the description as well. Number of executors: since it's a slave node, and both master and slave are running on my one system, I'll keep the number of executors at one. Remote root directory: let me clarify this one. My master runs on my C drive, under C:\Program Files... hang on, is it Program Files or Program Files (x86)? It is indeed C:\Program Files (x86)\Jenkins; that is where my master lives. I don't want the C drive for the agent, so I use the other drive: I create a folder there called 'Jenkins node', and this is where I'm going to provision my slave and run it from. So I copy that folder's path in as the remote root directory of this agent. The label: what's suggested is fine for me. And usage, meaning how you want to use this node: I don't want it running all kinds of jobs, so I choose 'only build jobs with label expressions matching this node'. So this is the label of this node, and for anybody to delegate a task to this node, they will have to specify this particular label.
Think of it this way: if I have a bunch of Windows systems, I name them all windows-something; then I can give a label expression saying that anything matching windows runs this particular task there. If I have some Mac machines, I name all those agents mac-something, and I can delegate all the Mac jobs with an expression that matches whatever starts with mac. So you identify a node using the label and then delegate the task there. All right, launch method: we'll use Java Web Start, because we've got to use the JNLP protocol; that sounds good. Directory: I think nothing else is required there. Availability: yes, keep this agent online as much as possible. Let me save this. Now I'm provisioning this node: if I click on the node, I get a set of launch commands along with an agent.jar. This agent.jar is what has to be taken over to the other machine, the slave node, and run from there along with a small security credential. Let me copy this whole command text into my notepad; Notepad++ is good for me. I also download the agent.jar; this agent.jar is configured by our server, so all the details required for launching this agent are found inside it. Typically you'd carry this jar file over to the other system and run it from there. So I take the agent.jar, cut it, come back to my 'Jenkins node' folder, and paste it there. Now, with the agent.jar provisioned, I use that whole copied command to launch the agent. Let me bring up a command prompt right in the same folder where the agent.jar is, and launch it: java -jar agent.jar, then the JNLP URL of my server (if the server and the client are on different machines or IPs, the server's address goes in here), then the secret, and the root folder of the slave node. Okay, something ran, and it says it's connected; it seems to have connected very well. Let me come back to my Jenkins instance and check: earlier this node showed as not connected; let me refresh... okay, now the two are connected. So: I provisioned a Jenkins node, copied the launch command with the credentials for the agent.jar, took it to the other system, and ran it from there; since I don't have another system, I just used a separate directory on another drive and launched the agent from there. As long as this agent, this command prompt, is up and running, the agent stays connected; once I close it, the connection goes down. All right, so we've successfully launched this agent.
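The launch command Jenkins hands you follows this general shape; the node name, secret, and work directory below are placeholders, so copy the exact command from your node's page:

    rem Run from the agent's root directory (values are placeholders)
    java -jar agent.jar -jnlpUrl http://localhost:8080/computer/WindowsNode/slave-agent.jnlp -secret 1a2b3c4d... -workDir "E:\Jenkins node"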
Now that E-drive folder is the home directory of this Jenkins node, the Jenkins slave: any task I delegate to this slave will run there, and it'll create a workspace right there. Good. So let me come back and put up a new task. I'll call it 'delegate job'; freestyle project. I'm going to create a very simple job here; I don't want it to connect to Git or anything like that, just a simple echo: 'delegated to the slave'... actually, I don't like the word slave; 'delegated to agent', let me put it that way. All right, 'delegated to agent' sounds good. Now, how do I ensure that this particular job runs on the agent, on the slave, that I configured? You remember how we provisioned the slave: we gave it a label. So I'm going to restrict this job to that label: whatever matches this Windows label, run this job on that node. We have only one node matching the Windows label, so this job will be delegated out there. I save this and build it. Again, a very simple job with nothing in it; I just want to demonstrate how to delegate to an agent. And if you see this, it ran successfully. And where is the workspace? Right inside our 'Jenkins node' folder: it created a new workspace there, 'delegated job', and put everything in it. My primary master's jobs live under C:\Program Files (x86)\Jenkins, and this is the slave job that was successfully run on the other drive. A very simple but very powerful concept: master-slave configuration, or distributed builds, in Jenkins. Okay, approaching the final section. We've done all this hard work in bringing up our Jenkins server, configuring it, putting up some jobs, creating users and all of that; now we don't want this configuration to go away. We want a nice way of backing it all up so that, in case of any failure, a hardware crash or a machine crash, we can restore from the configuration we backed up. One quick way, a dirty way, would be to just take a complete backup of our C:\Program Files (x86)\Jenkins directory, because that's where our whole Jenkins configuration is present; but we don't want to do that. Let's use a plugin for taking the backup. I go to Manage Jenkins, Manage Plugins, click on Available, and search for 'backup'. There's a bunch of backup plugins; I'd recommend this one, which I use myself: the Backup plugin. Let me install it. All right, the plugin is installed; let me come back to Manage Jenkins, and, hang on, there: Backup Manager. You'll see this option once you install the plugin. First time in, I can do a setup. I give a folder, the folder where I want Jenkins to back up the data, and I say the format should be zip; zip is good enough. I give a file-name template for my backup, and I want verbose mode. Should it shut down Jenkins during the backup, or not? One thing to remember here: if a backup happens while too many jobs are running on the server, it can slow down your Jenkins instance, because it's busy copying things, and if files are being changed at that moment it's a little problematic for Jenkins. So typically you back up your server only when there is very little load, or you bring it to a shutdown kind of state first and then take the backup. All right, I'm going to back up all these categories of things; I don't want to exclude anything else.
I want the history; I want the Maven artifacts; possibly not this last one. I just say save, and then I say back it up. This runs through a bunch of steps and copies all the files that belong in the backup. It's pretty fast here, because we didn't have too many things on our server, but if you have a lot to back up, this may take a while; let me just pause the recording and get back to you once the backup is complete. There you go, the backup was successful: it created a backup of all our workspaces, the configurations, the users, and all of that, all tucked into this particular zip file. So if at some point my system crashes, say a hard disk failure, and I bring up a new instance of Jenkins, I can use the backup plugin to restore this configuration. How do I do that? I just come back to Manage Jenkins, go to Backup Manager, and say 'Restore Hudson configuration', which restores the Jenkins configuration. Now, the first question here is: what exactly is DevOps? DevOps is basically a combination of two practices, development and operations. Development has its own tasks, doing the development and preparing the source code, and operations is responsible for deploying the source code to a specific environment, whether that's production or any other environment. Operations takes care of all those tasks: creating the virtual machines, performing the patching, any number of tasks from the operations side. Development keeps working on the source code and is responsible for keeping a particular product up and running: they look after performance, they do the coding, and they interact with testing to validate their source code. A huge number of activities is done by the development team, and they use many tools, scripting tools, coding tools, development tools, to support their work, because they perform different kinds of programming; it could well be that more than one programming language is used on your project. So quite a wide scope is present on the development side of DevOps. From the operations perspective, it's the team responsible for managing the workflows and seeing that all the daily activities and operations are managed effectively and efficiently. That's the important point: operations keeps the environment up and running and carries out whatever maintenance work we need on it. Now, DevOps really helps us achieve a lot of milestones; let's talk about them one by one. The very first is that it helps us get frequent releases. We were doing releases before DevOps too, but not that frequently: people were releasing maybe every quarter, every three or four months; that kind of time duration was what the team needed to deliver the source code to a specific environment.
That cadence was the norm for delivering source code to a specific environment But the moment DevOps comes into the picture, the release frequency increases a lot — some organizations are in fact releasing every month, or even twice a month That's the kind of frequency we get when we move to DevOps, and things got genuinely efficient with its introduction The second milestone is team collaboration, which has also improved drastically, because earlier the operations and development teams were not really collaborating — each was involved in its own tasks With DevOps they come together, and that collaboration really helps increase the overall productivity and the quality of the product Another milestone is better management: effective and efficient management is what we get with DevOps, because you have redefined your processes and implemented development tools and automation, and that improves how you manage all your unplanned work So planning is something that really improved with DevOps And then there is faster resolution of issues: because you are delivering your source code to the production environment in much less time, more bugs get resolved quickly, and there is another benefit — the number of bugs reaching production drops drastically with DevOps Since we get fewer issues and bugs, it's very easy for us to resolve them quickly and push the fixes to the production environment Right, so DevOps today is being implemented by most major organizations — financial organizations, service organizations, every organization is somehow looking at implementing and adopting DevOps — because it completely redefines and automates the whole development process, and whatever manual effort you were putting in earlier simply gets automated with the help of these tools One of the important features that drives this is the CI/CD pipeline, because the CI/CD pipeline is responsible for delivering your source code into the production environment in less time; the CI/CD pipeline is ultimately what helps us deliver more to production Now let's talk about what exactly a CI/CD pipeline is CI/CD stands for continuous integration and continuous delivery, a concept that is considered the backbone of the overall DevOps approach It's the prime thing we implement when we go for a DevOps implementation on a project: if I have to do a DevOps implementation, the very first and minimum automation I look for is the CI/CD pipeline
CI/CD pipelines are really a wonderful option when we talk about DevOps So what is the pipeline term all about A pipeline is a series of events or steps connected to each other — a sequence of the various steps in a typical deployment: we compile the source code, we generate the artifacts, we do the testing and then we deploy to a specific environment All these steps that we used to do manually can be put into a pipeline, so a pipeline is nothing but those steps interconnected with each other and executed one by one in a particular sequence The pipeline is responsible for a variety of tasks: building the source code, running the test cases, and the deployment can also be added when we go for continuous integration and continuous delivery All these steps run in a sequence, and the sequence is very important: the order in which you work in the real world is the same order you put into the pipeline, so that's a key aspect to consider Now let's talk about what continuous integration is Continuous integration, also known as CI — you'll see a lot of tools labelled as CI tools, and they are referring to continuous integration — is a practice that integrates the source code into a shared repository and automates the verification of that code It involves build automation and test-case automation, which helps us detect issues and bugs early and quickly Continuous integration does not eliminate the bugs, but it definitely helps us find them easily: because we have an automated process and automated test cases, bugs surface quickly, and development can then pick them up and resolve them one by one So it's not an automated process that removes bugs — a bug is something you have to fix by recoding, following the development practice — but CI really helps us find those bugs early and get them removed Now what is continuous delivery Continuous delivery, also known as CD, is the phase in which changes are made to the code before the deployment: here we are discussing and validating what exactly we want to deliver to the customer And the ultimate goal of the pipeline is to make the deployment — that's the end result, because coding is not the only thing: you write the programs, you do the development, and after that it's all about how the deployment is going to be performed
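To make that idea concrete, here is a minimal sketch of such a pipeline expressed as a Jenkins declarative Jenkinsfile; the echo steps are placeholders standing in for the real compile, test and deploy commands of an actual project:

```groovy
// A minimal sketch of a pipeline as a connected sequence of steps.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { echo 'Compile the source code and generate the artifacts' }
        }
        stage('Test') {
            steps { echo 'Run the automated test cases' }
        }
        stage('Deploy') {
            steps { echo 'Deploy the artifact to the target environment' }
        }
    }
}
```

Each stage only starts once the previous one has passed, which is exactly the "interconnected steps executed one by one" behaviour described above.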
The real beauty of continuous delivery is that it gives us a way to identify how the deployments can be done and executed Right, so the ultimate goal of the pipeline is nothing but to do the deployments and proceed from there When both these practices are placed together in order, all the steps can be referred to as one complete automated process, and this process is known as CI/CD When we work on this automation, the end result is build and deployment automation: you take care of the build, the test-case execution and the deployment together The implementation of CI/CD also enables the team to build and deploy quickly and efficiently, because these things happen automatically — there is no manual effort involved and no scope for human error either We have frequently seen that while doing deployments we may miss some binaries or make some mistake; that is something which is completely removed here The process makes the teams more agile, productive and confident, because the automation gives a real boost of confidence that things are going to work perfectly fine and there are no issues Now, why exactly Jenkins We typically hear here and there that Jenkins is a CI tool, a CD tool — so what exactly is Jenkins all about Jenkins is known as a kind of orchestration tool; it's an automation tool, and the best part is that it's completely open source Yes, there are paid enterprise offerings such as CloudBees, but there is no major difference between what CloudBees and core Jenkins offer, so Jenkins is an open-source tool that a lot of organizations implement as it is — we have seen plenty of big organizations that don't go for an enterprise product like CloudBees and instead run the core Jenkins software This tool makes it easy for the developers to integrate their changes into the project, which is very important because it really helps teams work out how things can be done; that ease of integration is the biggest benefit we get, so Jenkins is a very important tool when we talk about all these automations Now, Jenkins achieves continuous integration with the help of plugins, which is another feature and benefit we get, because there are so many plugins available For example, if you want an integration with Kubernetes or Docker, maybe those plugins are not installed by default, but you have the provision to install them, and those features become embedded and integrated within your Jenkins This is the main benefit we get when
we talk about a Jenkins implementation So Jenkins is one of the best fits for building a CI/CD pipeline because of its flexibility, its open-source nature, its plugin capabilities and its ease of use: it has a very simple, straightforward GUI, so you can easily get through Jenkins, build your understanding, and as an end result you have a very robust tool With it you can implement CI/CD for pretty much any codebase or programming language — whether it's Android, .NET, Java or NodeJS, all these languages have support in Jenkins So let's talk about the CI/CD pipeline with Jenkins To automate the entire development process, a CI/CD pipeline is the ultimate solution we are looking for, and to build such a pipeline Jenkins is our best fit Now there are essentially six steps involved in any generic, minimal pipeline; you may add other steps or install additional plugins, but if you want to design a minimum pipeline, these are the basics The first is that we require a Java JDK to be available on the system Most operating systems already come with a JRE, but the problem with a JRE is that it can only run things — you can run the artifacts, the jar files, the application, the codebase — while compilation requires javac, which comes with the Java JDK kit installed onto the system; that's the reason we also require the JDK We also need some understanding of Linux command execution, because we are going to run some installation steps and processes, so that's pretty much required Now let's see how to do a CI/CD pipeline with Jenkins First of all you download and install the JDK After that you can go for the Jenkins download: jenkins.io/download is the official Jenkins website, and the best part is that it has support for different operating systems and platforms From there you can choose the generic Java package (a WAR file), Docker, Ubuntu, Debian, CentOS, Fedora, Red Hat, Windows, openSUSE, FreeBSD, Gentoo or macOS — whatever kind of artifact or environment you want to download, you'll be able to do that So the very first thing is to download the generic Java package, the WAR file, into a specific folder structure Say you have created a folder called jenkins: you go into that jenkins folder with the cd command and there you run the command java -jar jenkins.war These WAR artifacts are directly executable — jar files and war files like this can be run with the java command alone — so you don't require any kind of web
container or application container So here you can see that we run the java command and it brings the application up Once the installation is done, you can open the web browser and go to localhost:8080 — Jenkins listens on port 8080 by default, much like Tomcat If you want to get Jenkins up and running in the browser from elsewhere, you can also go through the public IP address: put the public IP address with the same port, and that will let you start accessing the Jenkins application Now in there you will have an option called create new jobs — you may also see it as new item or new job; these are just different naming conventions for the same thing You need to click on that, and what you're going to do is create a pipeline job: there is a Pipeline job option there, so just select that and provide whatever custom pipeline or job name you want Once the pipeline type is selected and the name is given, you say OK Then you scroll down and find the Pipeline section, and go to the pipeline script When you select that option, there are different choices for how you want to manage these pipelines You have direct access, so if you want to create the pipeline script directly, you can do that; or if you feel you want to manage it properly and retrieve the Jenkinsfile from somewhere, a source code management tool can also be used So there is a variety of ways to set up how the pipeline job is created: either you fetch it from a source code management tool — a Git repository or something like that — or you put the pipeline code directly into the box So the next thing is that we can configure and execute a pipeline job with a script Once the pipeline type is selected, you can keep your script — the Jenkinsfile — in your GitHub repository; you may already have a GitHub link where the Jenkinsfile lives, so you can make use of that Once you set the GitHub link, you save and keep the changes, and the job will pick up the pipeline script from the GitHub repository, because you have already specified to go ahead with the Jenkinsfile from that repo Once that is done, the next step is the Build Now process: you click on
the Build Now button, and once that is done you will be able to see how the build process runs and how the build is performed You can click on the console output and get all the logs of whatever pipeline steps are being executed inside So these are the different steps involved And the sixth one is that when you run Build Now, the source code is checked out and downloaded before the build, and you can proceed from there Later, if you want to change the GitHub URL, you can configure the existing job again and change that GitHub link whenever you require; you can also clone the job whenever you work on it, which is another handy feature In the advanced settings you can put your GitHub repository URL — you just say, OK, the GitHub repository is here, I'm going to put this URL — and with that the settings are in place and the Jenkinsfile will be downloaded; when you run Build Now you'll see a lot of steps and configuration going on Then there is the checkout scm declaration: when checkout scm is present, it will check out the specified source code, and after that you can go to the log and see each and every stage being built and executed OK, so now let's do a demo of this pipeline This is the Jenkins portal You can see there's an option to create a job: you can either click on New Item or click on Create a job Here I'm going to name it pipeline, and then select the Pipeline job type You have Freestyle, Pipeline, GitHub Organization and Multibranch Pipeline — these are the different options available — but I'm going to continue with Pipeline here When I select Pipeline and say OK, I see a configuration page related to the pipeline Now the important part here is that you still have all the general and build-trigger options, which are similar to a freestyle job, but the build step and the post-build step are completely removed because of the pipeline introduction Here you either have the option to put the pipeline script in altogether — there are even some samples, for example a GitHub plus Maven example, where you get some ready-made steps; you run it and it works smoothly, checking out some source code — but how are we going to integrate the Jenkinsfile into the version control system That's the ideal approach we should follow when we create a CI/CD pipeline So I'm going to select Pipeline script from SCM here, then go with Git In there, Jenkinsfile is the name of the file containing the pipeline script, and I'm
going to put my repository here This repository of mine has a Maven build pipeline with some steps related to CI — the build and the deployments — and that's what we can follow here Now if it is a private repository you would definitely add your credentials, but this is a public, personal repository, so I don't have to put in any credentials; you can always add credentials with the Add button, and that lets you configure whatever private repositories you want Once you save the configuration, it gives you a page with Build Now, plus options to run, delete or reconfigure the pipeline We click on Build Now, and immediately the pipeline is downloaded and processed You may not get the complete stage view yet because it's still running, but you can see that the checkout code stage is done and it's moving on to the build — that's one of the steps Once the build is done, it continues with the next steps You can also go to the console output to check the complete log, or you can look at the stage-wise logs, which is also very important: when you go for the complete logs there may be a lot of steps involved and a lot of output, and if you want to see the specific log of a specific stage, that's where the stage view comes into the picture As you can see, all the different steps — the test-case executions, the SonarQube analysis, the archiving of artifacts, the deployment and even the notification — are all part of this complete pipeline The whole pipeline is done here, you get the stage view showing success, and the artifact is also available to download: you can download this WAR file, which is the web application So this is what a typical pipeline looks like — what the complete automation really looks like This is a very important exercise because it helps us understand how pipelines can be configured, and with pretty much the same steps you'll be able to automate any kind of pipeline So that was the demo of building a simple pipeline with Jenkins, and in it we understood how exactly CI/CD pipelines can be configured and used
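For reference, a Jenkinsfile driving a run like the one in this demo might be laid out roughly as follows. This is a hedged sketch, not the actual file from the demo repository: the repository URL, the 'Maven 3' tool name, the SonarQube server name and the mail address are all illustrative assumptions, and the SonarQube and mail steps require the corresponding plugins and server configuration to exist on your Jenkins:

```groovy
pipeline {
    agent any
    tools { maven 'Maven 3' }   // assumes a Maven installation named 'Maven 3' in Global Tool Configuration
    stages {
        stage('Checkout Code') {
            steps { git url: 'https://github.com/your-account/your-maven-app.git', branch: 'master' }
        }
        stage('Build') {
            steps { sh 'mvn -B -DskipTests clean package' }
        }
        stage('Test') {
            steps { sh 'mvn -B test' }
        }
        stage('SonarQube Analysis') {
            steps {
                // requires the SonarQube Scanner plugin and a server configured as 'sonar'
                withSonarQubeEnv('sonar') { sh 'mvn -B sonar:sonar' }
            }
        }
        stage('Archive Artifacts') {
            steps { archiveArtifacts artifacts: 'target/*.war', fingerprint: true }
        }
        stage('Deploy') {
            steps { echo 'Copy the WAR to the target environment here' }
        }
    }
    post {
        success {
            // the notification stage seen at the end of the demo's stage view
            mail to: 'team@example.com', subject: 'Pipeline succeeded', body: 'Build and deployment completed.'
        }
    }
}
```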
Now in this part we are going to integrate Maven and Jenkins to implement the CI process So what is the purpose of Jenkins here Jenkins is a CI tool that we use for build automation and test-case automation; it's open source and one of the most popular CI tools in the market, and it makes it easier for developers to integrate their changes into the project — whatever modifications we want to manage, we can do that with the help of Jenkins Now Jenkins achieves continuous integration with the help of plugins, and each and every tool you want to integrate has its own plugin For example, to integrate Maven we have a Maven plugin in Jenkins that you can install and configure, and then you'll be able to use Maven there: you can deploy the Maven build tool onto the Jenkins server and then prepare or configure any number of Maven jobs in Jenkins What the integration really does is connect Maven to Jenkins through the plugin, so you are able to automate the builds — automating the build requires an integration with Maven, and that integration is what we get from the Maven plugin So in Jenkins you install the Maven plugin, and once the plugin is installed you proceed with the configuration and setup; this plugin helps you build the Java-based projects that live in your Git repositories, and once that is done you have a complete integration of Maven within Jenkins Right, so let's see how to do the integration I have already installed Maven onto the Linux virtual machine we are using — with the apt utility or the yum utility you can download both the Jenkins package and the Maven package onto the server, onto the virtual machine Now I'm going to proceed with the plugin installation and the configuration of a Maven project I have a GitHub repository containing a Maven project: the Maven source code and the Mavenized test cases So let's log into Jenkins and see how it works This is the Jenkins interface we have here In it we can create some Maven jobs, and once those jobs are created we can do a custom build on this Jenkins First of all we have to install the plugin For that we go to Manage Jenkins, where you have the Manage Plugins option — click on that Here you will have different tabs: Updates, Available, Installed, Advanced Click on the Available tab; there you can search for whatever plugin you want to fetch I type in Maven, and you can see the very first one, the Maven Integration plugin, is available: I select that plugin and click on download now and install after restart The plugin will be downloaded, but in order to reflect the changes Jenkins has to restart For that you don't have to go to the virtual machine — there's an option right here: you check the box that says restart Jenkins when the installation is complete, and the restart will be attempted automatically once the plugin installation finishes Then you just refresh the page and you'll see that Jenkins is being
restarted Right, so you can see the screen coming up saying Jenkins is restarting It takes five or six seconds for the restart to finish and the login screen to come back You can also do a refresh — it should reload automatically once Jenkins is ready, but sometimes we have to refresh to get the screen Once the login is back, my Maven integration is in place, so the next thing I'll do is create a Maven-related project I'm going to put in the admin user and the password — whatever user and password you created, you use that to log into the Jenkins portal So this is Jenkins here All you have to do is click on create a new job or New Item — both options are pretty much the same Here you will now see a Maven project type I'm going to name it Maven build, select the Maven project type, and press OK First of all you will provide the repository from which the source code will be checked out I can also enable Discard old builds here for log rotation: if I want all the older builds cleaned up, I can say days to keep builds should be 10, and the number of builds to keep should be 20 You can adjust these settings according to your requirement; both values — how many days and how many builds to keep — are part of the log rotation we are configuring here Then I'm going to put in the Git integration, the repo URL: I have this repository containing the Java source code, some JUnit test cases, and it's a Maven project That's what I'm trying to clone here with the help of this plugin — it will download and clone the repository onto the Jenkins server, and then, depending on our integration with Maven, the Maven build will be triggered Now I'm going to proceed with the Maven part You can see it's saying that Jenkins needs to know where Maven is installed, because it needs to configure that Maven version to work with So I'll do the save or apply, and click through to the tool configuration Here you also have options for the JDK installation, but since Jenkins itself is running on Java, the JDK is automatically there — in the tools configuration you don't have to put in the JDK, but for Maven you do have to tell Jenkins where Maven is available I'm naming it Maven 3 and choosing the latest Apache Maven to be installed automatically, so I just save these settings: it will automatically download version 3.6.3, and that same installation will be utilized here
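As a side note, the tool name set up here is reusable from pipeline jobs as well. A small hedged sketch, assuming a Maven installation configured under the exact name 'Maven 3' in Global Tool Configuration:

```groovy
// Jenkins auto-provisions the named Maven tool on the node before the steps run,
// so no physical Maven install is needed on the machine.
pipeline {
    agent any
    tools { maven 'Maven 3' }
    stages {
        stage('Verify Maven') {
            steps { sh 'mvn -version' }   // confirms the auto-installed Maven is on PATH
        }
    }
}
```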
Now I go back to the Maven build job configuration and click on Configure The Git repository is in place, and in the build step it picks up which Maven environment to use You saw that previously, since I had not configured my Maven environment, it was throwing an error; but now I have configured that the utility should be downloaded during — or before — the build So instead of doing a physical installation of Maven on the server, what I have chosen here is that version 3.6.3 should be installed for Maven's purposes Once that is done, I put in the goals You can have clean install, clean compile test, clean test, or just test — it's simply the set of goals you want to configure By default it says the pom.xml file in the current directory is the one to refer to and pick up, but it's up to you how you configure this information; according to your requirement you say, OK, I just want these particular goals, and then you save, and the configuration is stored Now you just click on Build Now, and you'll see that first of all the git clone happens, then the desired Maven executable — the build tool — is configured, and the build proceeds accordingly You can see Maven getting downloaded and configured here: because I specified that version 3.6.3 should be selected, that specific version is configured and picked up Even if you don't have Maven installed on the physical machine on which Jenkins is running, you can still do the processing using this component You can see here that some test cases were executed, and in the end no artifact was generated: since I did not call the package or install goal, the artifact — the WAR or JAR file, whatever packaging mode is set at the pom level — was not produced, but my test cases did get executed, and that's what we got in this case So this is the mechanism: you configure a Git repository, you integrate the Maven plugin, in the tools configuration you specify which version should run your build, and then you just trigger the build with Build Now Once that is done you get a full-fledged build and compilation on Jenkins, and the log gives you the complete details of all the different steps that happened
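Tying the goals discussion together: in pipeline form, the choice of goals is simply the Maven command line you run. A sketch, reusing the hypothetical 'Maven 3' tool name from above, that illustrates why the demo produced no artifact:

```groovy
pipeline {
    agent any
    tools { maven 'Maven 3' }
    stages {
        stage('Test Only') {
            // 'clean test' compiles and runs the JUnit tests but, as in the demo,
            // produces no WAR/JAR because the package/install goals are not invoked
            steps { sh 'mvn -B clean test' }
        }
        stage('Package') {
            // adding 'package' is what actually produces target/*.war or target/*.jar
            steps { sh 'mvn -B package' }
        }
    }
}
```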
So what exactly is Jenkins Jenkins is nothing but a powerful automation server, written in Java, and it's a web application that can run on any web server But what makes Jenkins an ideal choice for a continuous integration server Jenkins has wonderful plugins that allow it to connect to all kinds of tools — software development, deployment, coding, build and source-code tools — and that is what makes Jenkins so powerful from a continuous integration perspective Jenkins can connect to various source-code servers, and it has plugins that allow it to build, deploy and test all kinds of software artifacts That's what makes it an ideal CI server — though mind you, for me Jenkins is above all a very powerful automation server: at the heart of it there's a lot of automation, and its power comes mostly from the tools it integrates with and the kinds of plugins it has So what is continuous integration Picture a software development life cycle where delivery happens in very small sprints — maybe three to four weeks is your delivery cycle — and a bunch of developers located in different places are working on the same codebase, on the same branch If code check-ins do not happen quickly — as in every day — and developers stagger their check-ins into the repository, finding problems at a later stage becomes very costly for the whole project; early detection of such issues is quick to resolve and would not affect your delivery schedules So as part of continuous integration, what is requested or demanded is that every developer checks in code pretty much every day, as long as it doesn't break the build Then at the end of the day you have an automated server that wakes up and pulls the latest code — code that now has the integrated check-ins of all the developers It builds it on a completely different server, the CI server, which has all the tools required to compile, build and test it And assuming you have a good percentage of test-case automation, with most of your regression test suites automated, there is a way in which, in a couple of hours while the team is out — or rather sleeping — you get verification at a very crucial level Any breakage is then notified even before the team arrives the next day: an email goes out saying something got broken Most code is usually fine from the perspective of compilation or build errors; it is the functionality and the regressions the team is worried about, and if those can be tested automatically, very quickly and very fast, breakages are detected early By the time people come in the next day, they know what is broken — and possibly which check-in broke it — so they can have a quick stand-up meeting, discuss what broke the code and fix it This way, any problem that could arise at a later point is moved to the initial phase of the project, where early detection doesn't really hurt the team That is what continuous integration is about, and Jenkins plays an important role as the continuous integration server because it has connections to anything and everything — all kinds of tools — and it also has various ways of triggering the job, which is part of its automation strategy
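That "wakes up at the end of the day" behaviour maps naturally onto a scheduled trigger. A hedged sketch of what such a nightly CI job might look like in pipeline form — the cron spec, repository URL and mailing list are illustrative assumptions:

```groovy
pipeline {
    agent any
    // 'H 2 * * *' schedules one run nightly around 2 AM (H spreads load across minutes)
    triggers { cron('H 2 * * *') }
    stages {
        stage('Pull Latest Code') {
            steps { git url: 'https://github.com/your-account/your-app.git', branch: 'master' }
        }
        stage('Build and Test') {
            steps { sh 'mvn -B clean verify' }   // compile, unit tests and regression suites
        }
    }
    post {
        failure {
            // the early-warning mail the team reads before the next work day
            mail to: 'team@example.com', subject: 'Nightly build broke', body: 'Check the latest check-ins.'
        }
    }
}
```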
Now that we know what continuous integration is and where Jenkins comes into the picture, let's look at the rest of the tasks of our software development life cycle If I were to visualize the steps involved in delivering my software, the continuous integration phase would sit somewhere early on, where multiple developers are working; out of that we get reasonably stable code that can be moved along — I take the build I have and propagate it across various environments If you consider the standard software delivery approach, in the first cycle you do some minimal testing and then move the build to one of the environments, and from there you kick off more and more tests: integration tests, acceptance tests, functionality checks, stress tests, load tests, system integration tests — all kinds of tests you can think about — all the while propagating the build across environments If all of this is considered as a series of steps, the workflow is such that as the build moves across the phases, any failure of course stops the build propagation and everyone gets notified; but if everything goes well, your workflow keeps progressing, and at the end of it you eventually have code that is pretty much good to release Now mind you, I make an assumption here that most of your test cases are automated and you have a good percentage of coverage; if that is not the scenario, then some manual tests or checks may be required in between, but if the workflow can accommodate that as well, you can visualize this as the set of steps required for your software delivery life cycle In Jenkins, the way this translates is that each of these tasks can be put up as a job So let me quickly demo what existed pre-Jenkins 2.0, where I could put up a couple of jobs and connect them using the upstream/downstream linking mechanism If job one is the build plus unit test cases and it passes successfully, job two gets triggered; job two might be about running some more automated tests, or deploying to an environment and then kicking off further test cases But if the deployment fails, or some of those other tests fail, it will not propagate to the third job All right, so let me bring up my Jenkins instance and put up some sample jobs to show how one would connect them using the pre-Jenkins-2.0 release I have now brought up my Jenkins instance — and in case some of you don't know how to install Jenkins or bring up an instance, I strongly recommend you watch our previous videos on the Simplilearn YouTube channel, where I've detailed the steps required So let me put up my first job I hope I don't have that job already — it's a freestyle project, I don't want to change anything else, and it's a very simple job: in a Windows batch command build step I say echo "first job triggered at" followed by the date and time All right, that's my first job Now let me put up
my second job, another freestyle project, and then my third job the same way Each has very simple echo statements in it that just print out which job ran along with the system date and time I could run these jobs individually if I want, so let me check by running my third job This is what I get in the console output: "third job triggered at" with the date and time — oops, let me fix that small typo in the echo; right, that fixes it Let me check my second job too All right, so I've got three jobs Now if I want to link them together — a scenario where, after the first job runs successfully, my second job is triggered — I make a small configuration change in the first job There is something called a post-build action, from which you can trigger other jobs: you'll see options like publish, record, deploy and various triggers, but the one I want is Build other projects So after the first job is done, I want to trigger my second job; I save that Then I go back to my second job and set it to trigger the third job after the second is done, adding "third job" under build-other-projects in its post-build actions I'm not sure if you noticed, but there are various configurations for when exactly to trigger the other job, and the default is "trigger only if the build is stable" Typically that is the configuration you need — we definitely don't want the third job triggered in case the second job fails So that's the combination I want, and I save it Now I have three jobs, and if you look at the second job, it shows the first job as its upstream job Let me check this very simple pipeline I've set up: if I build the first job, the second job gets triggered after the first is built, and if I click on the second job I can see it was triggered by the first and that it in turn triggers the third So this is how the first, second and third jobs were linked But it's pretty hard to visualize: if I need one holistic picture of the flow — after the first job, after the second, after the third — it's not possible to see that from here That's where I install a plugin Let me go to Manage Plugins — I think I already have it installed, but for those of you who don't, you go to the Available tab, search for the Delivery Pipeline plugin, click on it and say install without restart Now that we have the plugin installed, you'll see a new option: this is where I create a new visualization view for the pipeline I've built I create the view, say yes to "based on upstream/downstream dependencies" — exactly what I want; there are a bunch of other settings I won't look at — and give it a name: I'll call it simply learn pipeline And
what's important is that I specify which job should be picked up as the first job of this pipeline The final job is optional, because once it knows the first job and what that job triggers, it knows where the whole chain ends So I define the pipeline, give the component a name, initialize it with the first job, and say OK — and there you go, this is much better: a beautiful visualization of what happened after the first job, then the second, and if I click on any of these it takes me to that job There is also one other option that is pretty good in my opinion, under Edit View: Enable start of a new pipeline build Let me apply that and click OK What it gives me is a way to trigger my whole pipeline from this view: if I click it, you see the first job getting triggered, then the second — green means it all ran properly — and the third is still running All right, so this is the pipelining that existed prior to Jenkins 2.0 It's pretty decent, and here there's a one-to-one mapping, but if you remember, we could add multiple dependencies: if I go to my first job's configuration, nothing stops me from triggering multiple jobs after it, separated by commas, in case I have to run a few things in parallel Still, this was the most primitive way in which jobs were visualized and chained prior to Jenkins 2.0 Now this feature became so important that users wanted more, because pipelines grew complicated: it was not just one job after the other — there were multiple jobs to run, and with the introduction of Jenkins agents, multiple tasks could run in parallel on different agents Users wanted to club all of that together so the pipeline could hold all that complexity That's where, in Jenkins 2.0, Jenkins released the pipeline feature, where pipelines can be written as Groovy scripts Groovy is a wonderful, very powerful scripting language: anybody can read or write your pipeline as a programming language And that is the point of everything-as-code, where the whole Groovy script goes into your source code repository Instead of putting up jobs in the UI — where, if my Jenkins crashes, I don't get those jobs back, and how do I restore them all — everything is versioned and safe That's the DevOps principle: the pipelines will be written as scripts, and that is what I'm going to do in my next exercise In my previous example I showed you what is, in my opinion, the crude way of putting up a Jenkins pipeline — what existed prior to Jenkins 2.0 Now I have a post-2.0 Jenkins — my version is 2.107 — and it supports what's called a pipeline as code: you can write your pipeline as a Groovy script, with no need to put up individual jobs and remember how you configured each one Let me quickly show you a very simple and elementary pipeline that I have; this is what the script looks like
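Reconstructed as a sketch, the kind of script being walked through next looks roughly like this. The stage names follow the walkthrough; note that although it is introduced under the scripted-pipeline banner, the syntax described is the declarative pipeline form:

```groovy
pipeline {
    agent any            // any available agent can run this
    stages {
        stage('Compile')      { steps { echo 'Compile stage completed successfully' } }
        stage('JUnit')        { steps { echo 'Unit tests passed' } }
        stage('Quality Gate') { steps { echo 'Quality gate passed' } }
        stage('Deploy')       { steps { echo 'Deploy stage completed successfully' } }
    }
    post {
        always   { echo 'Runs every time, like the finally of a try/catch' }
        success  { echo 'All stages passed - a build-success mail could go out here' }
        failure  { echo 'Some step failed - notify the team' }
        unstable { echo 'Build marked unstable, e.g. a few tests failed' }
        changed  { echo 'Result differs from the previous run (pass->fail or fail->pass)' }
    }
}
```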
At the top is pipeline, and agent any means any agent can run this Then there is a stages block, with individual stages defined as a subset of it, and each stage has steps — you can have multiple steps, and only after all those steps complete successfully does the stage pass So there's a Compile stage, a JUnit stage, a Quality Gate stage and a Deploy stage, and I'm really not doing much within them other than echoing some text in each What's interesting is that at the end there is something called post, which you can loosely equate to a try/catch kind of block always means that part runs all the time success runs only if all the steps in the stages above completed successfully without any failures — typically this is where your email would go out saying the build is successful, and so on failure runs if something went bad: if any step resulted in a failure, that block gets executed unstable runs whenever a build is marked unstable — if only a few things failed within your test run and you want to mark the build unstable rather than failed And changed is an interesting option: it compares the present run with the previous run, and if there's any change — the previous run was a failure and the present one a success, or vice versa — this block gets triggered So that is what a simple pipeline script looks like Let me copy this pipeline and put up a simple job to run it I open up Jenkins, create a new item — I'll call it scripted pipeline — and this time I don't choose a freestyle project: this is going to be a pipeline project, so I select Pipeline and say OK This job type has far fewer options than the others we put up: under General I don't want anything, and I don't want any build trigger Down here is where I paste in what I copied There's also something called Pipeline Syntax, a snippet generator: it's like a lookup where you choose what you want to do, pick the options specific to that step, and it generates the pipeline script — the Groovy — for you Jenkins knows you may not be fluent in these pipelines yet, so it gives you this sandbox-style environment where you can work out whatever you want as part of your pipeline and get the equivalent Groovy script from it I'll look at it in a bit; for now I already have my pipeline script copied, so I paste it in This looks good — I'm not connecting to any GitHub repository or anything; I'm just running a very simple pipeline with some steps that put out messages saying each stage completed successfully So let me save this and try to run the pipeline There you can see each of the stages going through, and if I look at the console output: compile successful, unit tests passed, all the stages passed Since it was a pass, the failure messages don't show up; instead you see the messages from the post section — the try/catch-like block I mentioned earlier So this is how you put up a pipeline as code You also get to see the visualized view of your pipeline, which
shows which stage ran after which, how much time each took, and you can click on any of them to look at the logs from that particular pipeline run That was pretty easy, wasn't it Now let me give you another scenario for a pipeline, where the source for my pipeline steps lives in a GitHub repository and I'll write a script that grabs that code and runs what's in the repository Let me show you the repository I have one on the Simplilearn GitHub account called pipeline script, and in it there are a bunch of batch files The first is build.bat — there's nothing much in it except that it pretends to build a particular project — and you can visualize these as the individual batch files that actually contain the scripts for building, running, deploying and checking the quality gate of your project So I have a handful of batch files sitting in this GitHub repository, and I need to write a Jenkins job that connects to my GitHub account, checks out this particular repository, and then runs those batch files as the individual steps within a scripted pipeline Let me see how to do that I put up a new project for this — let me call it scripted pipeline from GitHub — and again it's a pipeline project, which is good enough for me Now to the script: this is where I need to put in the steps for pulling the code repository from my GitHub server and running those batch files that are part of the repository I already have the skeleton of my pipeline written, very similar to the pipeline syntax I showed you in the previous step, so I just copy it in What I have is the high-level skeleton without the actual commands yet: a git checkout step, a build step, a unit test step, a quality gate step and the deploy step So I need the actual script for first checking out the repository from my GitHub server, and this is where I make use of the Pipeline Syntax generator — as I mentioned earlier, there's a bunch of help available for figuring out the actual script you need to write in your pipeline I want to check something out from Git, so I search on git and find the option; I have to specify my Git repository URL and my credentials I copy the HTTPS URL of my repository, and the branch setting is fine One thing to note: this repository is a public one on GitHub, so it would work even without credentials; but in case you have a repository that strictly needs a username and password, you can add them here using the Add button in Jenkins For now I don't need any of that, so I just give the URL of my repository, and what we want is the master branch — I have only one branch on my GitHub server, so that is all I need
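Put together, the finished job ends up looking something like this sketch. The repository URL is illustrative, and the bat steps assume the batch files sit at the root of the checked-out workspace, as in the demo:

```groovy
pipeline {
    agent any
    stages {
        stage('Git Checkout') {
            // the kind of snippet the Pipeline Syntax generator produces for Git
            steps { git url: 'https://github.com/your-account/pipeline-script.git', branch: 'master' }
        }
        stage('Build')        { steps { bat 'build.bat' } }
        stage('Unit Test')    { steps { bat 'unit.bat' } }
        stage('Quality Gate') { steps { bat 'Quality.bat' } }
        stage('Deploy')       { steps { bat 'deploy.bat' } }
    }
}
```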
That's exactly what I want this step to do, so I click Generate Pipeline Script, and that gives me the snippet to put into my checkout step I paste it in, and this is what will check out the code from my repository Once the code is pulled from my repository, all those batch files land in my Jenkins workspace, and then I have to run them, one per step So what's the syntax The first one I want to run is build.bat: I want to run a batch file, the name of the batch file is build.bat, so I generate the pipeline script — and that's all I need for my build step Then for unit test I just change it to unit.bat (I think that's what I have in my repository), for quality it's Quality.bat with a capital Q, and the last one is deploy.bat So this piece of code will get into my repository, check out my source code, bring it to the Jenkins workspace, and since all the files are in the root directory of that workspace, it will run the batch files one after the other Let me save this and try to run my pipeline It runs a lot of things in the background to get the source code from my repository — whoa, that was fast It pulled all the source code; the last commit message it pulled from was "create deploy.bat", which looks right Then "building checked out project" — that's what I had in build.bat, if I'm not mistaken, printed with the timestamp — then "running unit test cases" from unit.bat with its date and time stamp, and so on All of these passed, and if I go back to the project I also see that nice view of exactly how much time was taken for checking out the repository, running build.bat, the unit test cases, the quality gate and the rest Isn't that pretty simple Now let me modify my previous job — or rather put up a new job — to make use of an agent, where I delegate a job to an agent Typically agents are brought up on remote machines, separate from where your primary Jenkins server is running In case you don't know how to start up these agents, I strongly recommend you refer to our previous Jenkins video on the Simplilearn YouTube channel All right, so let me check the status of my agent: it's offline, so let me start it, because the agent is not running I have the agent set-up files in my agents folder, so let me copy the script file required for starting the agent, go to the agents folder, open up a command prompt, and bring my agent up All right, the agent is up and running For now I don't have the luxury of starting the agent on a different machine, so my agent runs on the same box — but the agent's workspace is C:\agent, while my primary Jenkins server has its workspace under C:\Program Files (x86), and that is the workspace of my Jenkins I hope you can differentiate those two So now what I want is to take the same job I put up earlier — or rather modify it Let the steps be the same, but I don't want to run it on my master server Let me try
Now let me modify my previous job, or rather put up a new job, that makes use of an agent, so I can delegate the job to an agent. Typically agents are brought up on remote machines, separate from where your primary Jenkins server is running. If you don't know how to start these agents, I strongly recommend our previous Jenkins video on the Simplilearn YouTube channel. Let me check the status of my agent: it is offline, so let me start it. I have the agent set up in my agents folder, so let me copy the script required to start it, go to that folder, open a command prompt and bring the agent up. The agent is now up and running. For now I don't have the luxury of starting the agent on a different machine, so it runs on the same box, but the agent's workspace is C:\agent while my primary Jenkins server has its workspace under C:\Program Files (x86). I hope you can tell those two apart. Now I want the same steps as before, but I don't want them to run on my master server; let me delegate them using the script. So let me put up an agent scripted job; it will be a pipeline job. Let me copy the steps from my previous job. It says agent any, but I don't want this to run on just any node: I want it to run on the agent with a particular label. Let me check the label of my running agent... there it is, windows node. So with one very subtle change, instead of saying agent any, I specify that the agent running this job is the one labelled windows node. The agent I brought up on my system has that label and is configured to pick up any job that is delegated to a matching label. Let me get back to my jobs; where is my agent scripted job? I've got too many jobs running. Here it is, the one I left halfway through. In the pipeline section, this is all I need: the job stays the same, the checkout stage pulls from the same repository, I run the batch files as before, and the only change is to ensure the job is delegated to the agent. Let me save it, go back to the dashboard and run it from there. You can see the master and the agent are both idle right now. Let me run the agent scripted job. The agent kicked in and the job was delegated to it. If I look at the console output, it does exactly what the job defines, but the interesting thing to notice is the workspace: the job was delegated to the agent, and the agent's workspace is this particular folder. That is where it checks everything out and runs the whole flow; the flow itself is unchanged, it just ran on the agent. If I check my agent, I see the agent scripted job workspace with all the batch files in it, which is where the delegated job ran. So with a very subtle change in the scripting, I can make sure jobs, pipeline jobs specifically, are delegated to an agent. As I mentioned earlier, Jenkins gives you two different ways of writing pipelines, called scripted and declarative. Scripted pipeline came first: it is heavily based on Groovy scripting, since Jenkins ships with a Groovy engine, and it was the first pipeline support provided in Jenkins 2.0. It needs a bit of a learning curve, because while Groovy is a wonderful scripting language, getting comfortable with it can be cumbersome; once you master it, though, you can write really powerful Groovy-based scripts. At a very high level, a typical scripted pipeline has something called a node, where the node represents the agent, the actual box on which your job runs, and inside it a series of stages, each listing the steps to be covered, one below the other. If all the stages run cleanly, the whole task is marked as successful.
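As a rough sketch, a scripted pipeline for the same kind of job might look like this; the node label and repository URL are placeholders from this demo, and the label string must match whatever label your agent is actually configured with.

    node('windows-node') {
        stage('Checkout') {
            // placeholder URL -- same demo repository as before
            git 'https://github.com/<your-account>/<your-repo>.git'
        }
        stage('Build') {
            bat 'Build.bat'
        }
        stage('Unit Test') {
            bat 'Unit.bat'
        }
    }

Passing the label to node() is what pins the job to that particular agent instead of letting it run anywhere.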
Since understanding or learning Groovy was a little tough for many people, Jenkins came up with a newer option that gives you a much simpler, friendlier syntax for writing pipelines without really needing to learn Groovy scripting. There is only a very subtle difference between the two, and you will find plenty of discussion about which style of pipeline is better to write, but if you can find the piece of code that does what your pipeline needs, there is not really any difference in delivering your pipeline using either of the two methods. A declarative pipeline looks like this: you have an agent, where you can specify an agent label, or say agent any to pick up whatever agent is available and run the job, and then you have something called stages. Stages is nothing but a collection of stage entries, and each stage can have multiple steps defined inside it. If any step in any stage fails, that whole stage, and the build, is marked as a failure. So there is a very subtle difference between the two syntaxes, but you can write powerful pipeline scripts using either. Now let me come up with an example demoing one feature: running jobs on the master and an agent in parallel. Let me put up a new job for my parallel agent pipeline; it is a pipeline project, and I don't need anything else here. Let me look at the pipeline script I have: pipeline, agent none, stages, and a first stage that is a non-parallel stage. That is where you would typically pull the source code from a repository, unit test it, and, if all the unit tests pass, deploy to one of the test environments. After that you may have a bunch of tests that can run in parallel: assuming you have a Windows node, a Linux node, maybe nodes on other operating systems, you can run those stages in parallel. Just for demonstration I have put in two parallel stages; parallel is the keyword you use for running stages in parallel. So I say parallel, then stage 'test on Windows' running on my Windows node, where I could run whatever steps I want, and in the other stage I run something else on my master. When Jenkins encounters the parallel keyword, it ensures those two stages are run in parallel. For now both of them run on my same machine, but if they were running on different boxes, you could visualize the two steps starting at the same time without any dependency on each other; you then wait for the test results, and based on whether both steps passed or one of them failed, you mark the build accordingly. Let me copy this pretty simple script, paste it in, save it and try to build it. There you go: the first stage executes first, the non-parallel one, and then task one on the agent and task one on the master follow. As I said, since I have only one node, or rather one system, both of these run simultaneously on the same machine, so you won't really see the benefit here.
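Here is roughly the parallel script used in this demo, written as a declarative pipeline; the node labels are placeholders for whatever labels your agents carry, and the echo steps stand in for real test commands.

    pipeline {
        agent none
        stages {
            stage('Non-Parallel Stage') {
                agent any
                steps {
                    echo 'checkout, unit test and deploy to a test environment here'
                }
            }
            stage('Run Tests') {
                parallel {
                    stage('Test on Windows') {
                        agent { label 'windows-node' }
                        steps {
                            echo 'UI or Selenium-style tests could run here'
                        }
                    }
                    stage('Test on Master') {
                        agent { label 'master' }
                        steps {
                            echo 'regression tests could run here'
                        }
                    }
                }
            }
        }
    }

Everything under the parallel block is scheduled at the same time, which is what lets the two test stages land on different machines.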
But assuming you had a couple of boxes with multiple agents running, you might want to run your Selenium tests on the Windows box, because Selenium brings up a UI that needs a browser, run your regression tests on Linux boxes or Linux agents, and in general break your tasks down into multiple pieces running on multiple systems at the same time, then collate all the results. Okay, one final thing. Right now I have all the steps required for my pipeline written as a script that is saved inside this particular job. That is not a good or recommended approach. So what I'll do is copy all of these steps, and then, going back to the repository, the most preferred approach is to create something called a Jenkinsfile and paste into it all the script required for your pipeline. This is, in a true sense, the DevOps approach. If you have a pipeline defined for your project, the best place to keep the pipeline's configuration is inside your repository. Instead of keeping the pipeline as a job on Jenkins, and fearing that if Jenkins fails or the job crashes you lose your job configuration, the best approach is to use a Jenkinsfile: put all the tried and tested steps into the Jenkinsfile, and then create a job that pulls the source code from the repository and uses the steps defined in that Jenkinsfile. So let me finish by putting up one more job, which is a true DevOps kind of job. I'll call it DevOps pipeline, it is a pipeline job, and instead of typing in any script I choose 'Pipeline script from SCM', because my pipeline script is already defined and present in SCM. What is my source code repository? This one, where I already have the Jenkinsfile, so let me copy the URL. I don't need any credentials, because it is a public repository. That is all that is required, and for the script path it automatically picks up 'Jenkinsfile'. Let me save this and build it. That's the beauty of the DevOps way of working: I have a pipeline defined, and since a pipeline is nothing but configuration, the configuration is also checked into the source code repository. Any changes to the pipeline, instead of being modifications to the job, are captured as part of the repository, so we always know who changed what. Right, so let's move to the next demo now and see how we can perform various kinds of automation with Maven. This is the virtual machine, which already has Maven installed; if I run mvn, I can see it is available as version 3.6.3. Now I'm going to run a command called mvn archetype:generate. Let me create a temp directory and perform this activity in there.
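As a side note, archetype:generate can also be run non-interactively by passing the coordinates on the command line; this is just a sketch, and the group and artifact values below simply mirror what gets typed into the prompts in this demo.

    mvn archetype:generate -B \
        -DarchetypeGroupId=org.apache.maven.archetypes \
        -DarchetypeArtifactId=maven-archetype-quickstart \
        -DgroupId=com.simplilearn \
        -DartifactId=sample-project \
        -Dversion=1.0-SNAPSHOT

The -B flag puts Maven in batch mode, so it takes everything from the flags instead of prompting.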
So, mvn archetype:generate. Once we run it, Maven downloads some binaries, because what we're trying to do is generate a new Maven project, and a couple of plugins have to be downloaded by the Maven executable so it can achieve that execution. We just have to wait for those downloads. Then it prompts for different attributes: whatever you want to configure you can provide here, otherwise you accept the defaults. It asks for the version, so I'll press 5. Then the group ID, which is basically a grouping mechanism; I'll say com.simplilearn. For the artifact ID I can make it something like sample-project, the version I'm keeping the same, the package likewise, and then I confirm with yes and press Enter. With this, a sample project is created: according to the artifact ID you provided, the project is created in a directory of that name. You go into that directory and see exactly which files were created. You have the pom.xml file, and when I open this pom.xml you can see attributes like the group ID, the artifact ID, the packaging, which is jar by default but can be changed according to your requirement, the version, and the name, which you can also change here. The JUnit dependency is added by default, but you can keep adding your own custom dependencies. Now, in this directory, if you run mvn clean install, it is treated as a Maven project, since a pom.xml is already present locally; the steps execute according to it, and you get the desired outputs. Ultimately, in the target directory, you can see that a jar file, the artifact, has been generated. So that is how we can start with a generic new project, and later, depending on your understanding, you can keep adding or modifying the dependencies to get to the final result. That's it for this demo, in which we found out how to prepare a project with the help of the mvn executable.
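For reference, the generated pom.xml boils down to something like the following; this is a sketch based on the values entered above, and the JUnit version the archetype pins may differ on your machine.

    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>

      <!-- coordinates as entered at the prompts -->
      <groupId>com.simplilearn</groupId>
      <artifactId>sample-project</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>jar</packaging>

      <dependencies>
        <!-- the quickstart archetype adds JUnit by default -->
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>4.11</version>
          <scope>test</scope>
        </dependency>
      </dependencies>
    </project>

Running mvn clean install against this produces sample-project-1.0-SNAPSHOT.jar under the target directory.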
Welcome everyone to this topic, in which we're going to go through a number of Maven interview questions and try to understand what the answers are. Let's start with the first question: what exactly is Maven? Maven is a popular open-source build tool. Before Maven there were a couple of build tools, like Ant and a lot of other legacy tools, but Maven was released as an open-source tool that really helps organizations automate their build processes: build, publish and deploy several different projects at once. It is a very powerful tool that helps with build automation; we can integrate it with other tools like Jenkins, automate the builds, schedule the builds, and get a lot of other advantages. It is primarily written in Java, and it can be used to build various other kinds of projects too, like C#, Scala and Ruby. But this tool is primarily used for the development and management of artifacts in Java-based projects: for most Java-based projects these days, it is the default tool, and it is already integrated with Eclipse, so when you create a new Java project, it can be set up for you automatically. You can use it for other languages as well, but the default choice for the Java programming language is the Maven build tool. Next question: what does Maven help with? Apache Maven helps manage processes such as the build process, documentation, the release process, distribution, deployment and preparing artifacts; all these tasks are primarily taken care of by Apache Maven. The tool simplifies the process of project building, and it improves the performance of the project and the overall build process. It also downloads the jar files of your different dependencies: for example, if your source code depends on some Apache web service jars or some other third-party jars, you don't have to download those jars and keep them in some repository or lib directory. You just mention the dependency in the pom, and the jar file is downloaded during the build process and cached locally. That's the biggest advantage we get with Maven: you don't have to take care of those dependencies anywhere in your source code system. Maven provides easy access to all the required project information and helps developers build their projects without worrying about dependencies, processes or environments; it is a tool that can be used on any platform, Linux or Windows, so no conversions are needed. All they have to do is add new dependencies to the pom file, and the source code is built against them; they don't have to reference third-party jars by hand or play with the classpath during the build, so no customization is required.
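To make that concrete, declaring a third-party dependency is a few lines in the pom; the artifact below is only an example of the pattern, not something this project specifically needs.

    <dependencies>
      <!-- declared once here; Maven downloads the jar and caches it in the local repository -->
      <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
        <version>3.12.0</version>
      </dependency>
    </dependencies>

On the next build, Maven fetches the jar from the remote repository, caches it under ~/.m2, and puts it on the classpath for you.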
The next question: what are the different elements that Maven takes care of? The elements are builds, dependencies, reports, distribution, releases and mailing lists. These are the typical elements Maven takes care of during the build process and the preparation of builds; you can explore and dig into each of them to fully understand how these different processes work. The next question: what is the primary difference between Ant and Maven? First of all, both of them are primarily used for Java projects. Ant is the older tool, and Maven is something that was launched after Ant. Ant has no formal conventions, so everything has to be coded into the build.xml file, whereas Maven has conventions, so that information is not required as such in the pom.xml. Ant is procedural, whereas Maven is declarative. Ant does not have any kind of life cycle; it depends entirely on how you program it, whereas Maven has life cycles that we can configure and utilize. Ant scripts are not reusable: you cannot reuse them, and you have to do some customization each time to make them work. Maven, on the other hand, does not carry much project-specific baggage in the pom.xml; it is just the artifact name and the dependencies that you override or change, and the same pom.xml can be reused as-is for a new project, which is where the reusability comes into the picture. Also, Ant is a very bare build tool: there are no plugins as such available, and you have to code the whole build process you want. In Maven we have the concept of plugins, which give us that reusability out of the box. Those are some of the differences between Ant and Maven. Next: what is the pom file all about? The pom file is an XML file that holds all the information regarding the project and its configuration details; it primarily defines how the configuration and the setup should be performed. The pom.xml is the build script we prepare, and build tools like this are what let us automate build processes simply through the pom.xml. Developers usually put everything, including the dependencies, inside the pom.xml. The file is usually present in the project's root, the current directory, so that once a build is triggered, it is picked up from there, and the build is processed according to its content. Now, what is included in the pom file?
The different components included in the pom.xml are dependencies, developers and contributors, plugins, plugin configuration and resources. These are the typical components of a pom.xml, and they can stay largely the same across projects; with some customization, the same pom file can be reused for other projects too. Now, what are the minimum required elements for a pom.xml file, without which the pom.xml will not be validated and we will get validation errors? The minimum required elements are the project root, the modelVersion, which should be 4.0.0, the groupId of the project, the artifactId of the project, and the version of the artifact. These are the minimum things we must define so we can understand what kind of artifact we are trying to prepare or create; without them, validation of the pom file will fail and the build will fail too. Next: what is meant by the term build tool? A build tool is an essential tool, a process for building or compiling source code. It is needed for tasks like generating source code, generating documentation from the source code, compiling the source code, and packaging it, whether into a jar file, a war file or an ear file; whatever packaging mode you select, you can do it with the build tool. And if you want to upload those artifacts to an artifact repository, whether on a remote machine or locally, you can do that with the build tool as well. So build tools can be helpful in doing a lot of activities for developers. Now, what are the different steps involved in installing Maven on Windows? All you have to do is first download the archive from the Apache Maven site. Once that is done, you set up a couple of environment variables. If you installed the Java JDK using the exe installer, JAVA_HOME is configured automatically; but if it is not, and you are not able to run java on the command line, you have to set up JAVA_HOME yourself. Similarly, for Maven you have to configure the MAVEN_HOME variable. Once that is done, you edit the PATH variable and add the bin directory of the extracted Maven folder to it. With that in place, you can check the version of Maven; if it shows some old version, extract the latest version and do all the steps again. Those are the steps for installing and configuring Maven on the Windows platform. Now, what are the different steps involved in installing Maven on Ubuntu? On Ubuntu, you just download the Java JDK package, and once the JDK is installed, you can simply go ahead and search for the Maven package that is available there.
After that, all you have to do is configure JAVA_HOME, M3_HOME or MAVEN_HOME, and the PATH variable. Once all of those variables are configured, you can check the version and confirm that you are getting the expected, current version. That is the mechanism for configuring Maven on Ubuntu. Now, what is the command to install a jar into the local repository? Sometimes we are not able to fetch a dependency because it is not present in the central repository, the Maven repository or your artifact repository, and you have some third-party jar you want to install locally into your repository. In that case, you download the jar file and then run the command mvn install:install-file, passing -Dfile with the path of the file. Once that is done, the artifact is installed into the local .m2 directory. That is the mechanism by which you can configure or set up an artifact, a jar file, locally in the local repository.
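A full invocation usually carries the coordinates as well; this is a sketch with made-up coordinates that you would replace with whatever the jar should be known as in your repository.

    mvn install:install-file \
        -Dfile=/path/to/some-library.jar \
        -DgroupId=com.thirdparty \
        -DartifactId=some-library \
        -Dversion=1.0 \
        -Dpackaging=jar

After this, the jar can be referenced from any pom on that machine using those groupId/artifactId/version values.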
Next question: how do you know the version of Maven being used? The version of Maven is pretty easy to find out: you just run mvn -version. The moment you do that, it tells you which JDK or Java version you are using, and it also shows you which Maven version you are on; all those details come from that one command. Now, what are clean, default and site? These are the built-in build life cycles available within Maven. The clean life cycle helps you perform project clean-up: usually during a build, some files are created in the target directory, and the clean life cycle essentially helps us clean up that whole target directory. The default life cycle handles the project deployment, and site is the life cycle that helps create the site documentation. So clean, default and site are the different life cycles, each performing different kinds of tasks. Next question: what is a Maven repository? A Maven repository refers to directories of packaged jar files that contain metadata; the metadata refers to the pom files relevant to each project. This is where your artifacts get stored, and from where artifacts are downloaded during a Maven build when you declare a dependency. There are different kinds of repositories available: a local repository, remote repositories and the central repository. These are the typical types of repositories where we can store artifacts and from which we can download artifacts whenever required. The first one is the local repository, which refers to the developer's own machine, where all the project-related files are stored. Whenever we work with Maven, an .m2 folder is created in the home directory: whatever artifacts are downloaded from Artifactory or from the Maven repository get cached locally there, and once something has been downloaded, the same artifact or dependency is not downloaded all over again next time. This local repository is available only on the developer's machine, and it contains all the dependent jars that the developer downloads during Maven builds. Remote repositories refer to repositories present on a server, from which we download our dependencies. When we run a Maven build on a fresh machine, the local repository does not exist yet: the .m2 directory is empty, but the moment you run the build, the artifacts and dependencies are downloaded from the remote repository, cached locally, and reused on future runs; from then on they are served from the local repository. And the central repository is what is known as the Maven community repository, where the open-source artifacts live. Usually we cache or mirror the central repository as our own remote repository, because remote repositories are typically hosted inside our organization, while the central repository is available centrally for everyone to use. Every open-source artifact is published there, and anyone can access those artifacts from the central repository.
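If your organization does host such a mirror, pointing Maven at it is a small settings.xml entry; the URL here is purely a placeholder for an internal repository manager.

    <settings>
      <mirrors>
        <mirror>
          <id>internal-mirror</id>
          <name>Organization mirror of central</name>
          <!-- placeholder URL for an internal repository manager -->
          <url>https://repo.example.com/maven2</url>
          <mirrorOf>central</mirrorOf>
        </mirror>
      </mirrors>
    </settings>

With this in place, requests that would have gone to central are routed to the internal mirror instead.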
Now, how does the Maven architecture work? The Maven architecture works in three steps. The very first step is that it reads the pom.xml file. Second, it downloads the dependencies defined in the pom.xml into the local repository, from the central or a remote repository. Once that is done, it executes the life cycles you have configured and generates the reports: whether it is clean, install, site, deploy, package or whatever life cycle you want to trigger, you trigger it, and the corresponding build or task is performed. Those are the three steps in which the overall build, or any execution of a pom.xml, really happens. Next: what is the Maven build life cycle? A Maven life cycle is nothing but a collection of steps that need to be followed to do a proper build of a project. There are three primary built-in life cycles available: default, which handles project deployment; clean, which handles project clean-up; and site, which handles the creation of the project's site documentation. Now, a build life cycle is made up of different phases or stages, the step-by-step executions that sit deeper inside a specific build life cycle: compile, then test-compile and test execution, then package, integration-test, verify, install and lastly deploy. Those are the different build phases available. What is the command used to generate a Maven site? mvn site is what is used to create a Maven site. Usually whatever artifacts are prepared end up in the target directory, and here too you will see a site directory inside target that you can refer to for the site documentation. What are the different conventions used while naming a project in Maven? The full name of a project in Maven involves three components: first the group ID, for example com.apache or com.example; then the artifact ID, which can be the actual project name, like sample-project or example-project; and lastly the version, that is, which version of the artifact you want to prepare, like 1.0.0-SNAPSHOT or 2.0.0. Now let's move on to the intermediate level, where we'll have slightly more complex questions related to Maven. What is a Maven artifact? Usually, when we run a build process, we get some artifacts at the end of it. For example, when we build a .NET project we get exe or dll files as artifacts; similarly, when we do a Maven build we get different kinds of artifacts depending on the packaging mode, like jar files, war files or ear files. These are generated during the build process, and whether you store them in your local repository or push them to a remote repository is entirely up to you. So Maven is a tool that can help you create all these artifacts, and every artifact has three attributes: the group ID, the artifact ID and the version; that is how you identify a full-fledged artifact in Maven. An artifact is not only the name of the jar file; it is the combination of the group ID, the artifact ID and the version. Now, what are the different phases of the clean life cycle? Clean is used to wipe the target directory so that a fresh build can be triggered. There are three phases: pre-clean, clean and post-clean. If you wish to override the life cycle configuration and run some particular steps before the clean activity, you can do that in pre-clean, and if you want to run steps after cleaning, post-clean can be used. And what are the different phases of the site life cycle? They are pre-site, site, post-site and site-deploy.
Those are the phases available in the site life cycle. What do we mean by a Maven plug-in? This is a huge difference between Ant and Maven, because in Ant we did not have that much plug-in support, which is why we had to deal with all the build configuration ourselves and simply spell out the overall build process, how the build should be triggered. That is not the case in Maven. In Maven we have a lot of flexibility, because important features come packaged as plugins that we can utilize. For example, say I want to perform compilation. I don't really want to write any configuration for that, so I can simply use the compiler plug-in in Maven, and that really helps me, because I don't have to unnecessarily write, or rewrite, the configuration for how compilation should be done. It is preconfigured and pre-written in the plug-in: I simply import the plug-in, and the build or compilation process runs in a pretty standard way. I don't have to do any workarounds; with a small amount of wiring I can integrate Maven plugins into my pom.xml and have the procedures and steps I need executed. That is the biggest benefit we get from Maven plugins. Now, why are Maven plugins used? To create jar files, to create war files, to compile code files, to perform unit testing, to create project documentation and to create project reports. There is a whole variety of things for which we use Maven plugins through integrations in the pom.xml: it's all about the plugins, you just import the plug-in, and the desired activity is performed. What are the different types of plugins? You can have build plugins, for performing the build activities, and reporting plugins, which are used only to generate reports, process them, and do any kind of formatting or processing on them. That is where the reporting plugins are used.
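As an illustration of that wiring, pinning the compiler plug-in in the pom looks roughly like this; the Java level shown is just an example choice.

    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>3.8.1</version>
          <configuration>
            <!-- compile for Java 8; pick the level your project targets -->
            <source>1.8</source>
            <target>1.8</target>
          </configuration>
        </plugin>
      </plugins>
    </build>

Everything about how compilation actually runs lives inside the plug-in; the pom only carries this small configuration block.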
Now, what exactly is the difference between convention and configuration in Maven? Convention is when developers are not required to spell out the build process: users do not have to specify the configuration in detail, and once the project is created, the structure is created automatically. Configuration is when developers are supposed to create the build processes themselves and must specify every detail in the pom.xml for things to work. That is the big difference between convention and configuration. So why is it said that Maven uses convention over configuration? Maven pretty much does not put the effort on developers of having to write every configuration: there are ready-made plugins available, and we largely make use of those, so we don't have to worry about the execution details. The developers just have to create the Maven project, and the rest of the structure is taken care of automatically; Maven does not expect developers to do the configuration work. Because of its plugins, Maven is responsible for setting up the default architecture and the default folder structures, and all you have to do as a developer is place your source code in the desired folder structure. Next, what is the Maven order of inheritance? The order of inheritance is: settings, CLI parameters, parent pom, and then the project pom. That means configuration in settings has the highest precedence, then the CLI parameters, then the parent pom, and then the project pom; that is the order in which parameters and configuration are picked up by Maven. What do build life cycles and phases imply in the basic concepts of Maven? A build life cycle consists of a sequence of build phases, and each build phase consists of a sequence of goals. When a phase is run, all the goals related to that phase and its plugins are run as well. So you can have a lot of goals residing inside a phase, and similarly a life cycle is nothing but a sequence of phases: the life cycle sits at the top, then come the phases, and then the goals. What is the terminology 'goal' in Maven? The term goal refers to a specific task that makes it possible for the project to be built and organized; it is something we can run, the actual implementation being executed. For example, within the build I have different goals like clean, install, package and deploy that I can execute in Maven; those are the kinds of goals we can run during a Maven build. Next question: what is meant by the terms dependencies and repositories in Maven? Dependencies refer to the Java libraries we usually declare in the pom.xml. Sometimes our source code requires some secondary jar files to perform the build; instead of downloading them and storing them on the classpath for the build process, we just specify the dependency for that artifact, and once that dependency is declared, the jar file is downloaded and cached in the local repository during the Maven build. Repositories refer to the directories of packaged jar files. If a dependency is not present in your local repository, Maven will try to download it from the central repository, and once it has been downloaded from there, it is cached locally in the local repository. That is the cycle that is implemented and used during this process.
Now, what is a snapshot in Maven? A snapshot refers to a version available in the Maven repository that signifies the latest development copy. Maven checks the remote repository for a new version of a snapshot on every build: during the build process, the latest snapshot version is downloaded, and the snapshot is updated by the development team, with updated source code pushed to the repository, for each Maven build. A snapshot is something we update very frequently: we keep bumping the version as we explore and make modifications. What are the different types of projects available in Maven? There are thousands of Java project templates that can be used with Maven, which helps users because they no longer have to remember every configuration needed to set up a particular kind of project: Spring Boot, Spring MVC and so on are different projects already available in Maven. As we have already discussed, for Java-based projects Maven is considered the default, and a lot of organizations use it for their Maven projects. Now, what is a Maven archetype? A Maven archetype refers to a Maven plug-in that is entitled to create a project structure as per its template. Archetypes are just project templates from which Maven generates when any new project is created; we use them so we can create fresh new projects. Right, let's move on to the advanced level of these Maven questions. What is the command to create a new project based on an archetype? mvn archetype:generate is used to create a new Java project based on an archetype. It takes some parameters from you as the end user, and depending on those parameters it creates the pom file and the source directories, src/main/java, test and all those different directory structures, automatically. Why do we need this command? If you are going to create a project from scratch, from day one, this command gives you all the folder structures, and then you can put your source code and files into that structure. That is the mechanism by which the setup can be performed. Now, what does maven clean imply? maven clean is a plug-in that, as the name suggests, cleans files and directories. Whenever we do a build process, the target directory usually contains some class files, jar files, or whatever generated output there is. The maven clean plug-in cleans out those directories, and the reason for doing this clean-up is so that we can do a fresh build process without any leftover issues.
So, what is a build profile all about? A build profile refers to a set of configuration values, which effectively lets you have different build processes from the same pom. If you want to use the same pom.xml for different configurations, a build profile is what lets you do that: build profiles are used to customize the build process, so you can have different configurations and setups side by side. Whenever you feel you want to customize the build for a particular setup, that is where a profile is used. Next: what are the different types of build profiles? Build profiles can be defined per project, in the pom.xml itself; per user, in the user's settings.xml file; and globally, in the global settings.xml file. So there are different places in which you can do the customization, and once the customization is done, you have different ways of driving your setups and configurations.
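A per-project profile, for instance, is a small block in the pom; the profile id and the property here are hypothetical, just to show the shape.

    <profiles>
      <profile>
        <id>test-env</id>
        <properties>
          <!-- hypothetical property that other parts of the pom could consume -->
          <deploy.target>test-server</deploy.target>
        </properties>
      </profile>
    </profiles>

It can then be switched on explicitly from the command line with mvn clean install -Ptest-env.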
Now, what is meant by system dependencies? System dependencies are dependencies with a scope of system. These dependencies are commonly used to tell Maven about dependencies that are provided by the JDK; system dependencies are mostly used to resolve dependencies on artifacts that the JDK itself provides. What is the reason for using an optional dependency? Optional dependencies are used to decrease the transitive burden of some libraries. When you download an artifact, or declare a dependency, it is possible that some of its own dependencies are flagged optional: they are not always required, but when they do come in, you don't have to list each one as an entry in the dependency list of your pom.xml, which can save you time. And if you feel you don't want them, since they are optional, you can exclude them while downloading dependencies and get rid of them. So optional dependencies are there to be used or ignored depending on your requirement. Now, what is a dependency scope, and how many types of dependency scope are there? Dependency scopes define at which stage of the build a dependency is used; the different scopes are compile, provided, runtime, test, system and import. Depending on your requirement, you can explore all these scopes and get benefits out of them. What is a transitive dependency in Maven? Maven avoids the need to find out and specify the libraries that our own dependencies require, by including the transitive dependencies automatically. Transitive dependency means that if X depends on Y, and Y depends on Z, then X depends on both Y and Z. You are not dependent on just one artifact: you also need the Z artifact along with the Y artifact, and Maven brings in both, because it is normal that a dependency you declare is itself dependent on some other artifact or jar file; Maven downloads those dependent jar files too so that the build can succeed. How can a Maven build profile be activated? A build profile can be activated in different ways: explicitly on the command line, by saying which profile you want to execute; through the Maven settings; based on environment parameters; based on OS settings; and via present or missing files. Profile configurations can be saved in various files for various situations, and from there you can refer to whichever one you want. And what is meant by dependency exclusion? Exclusion is used to exclude a transitive dependency, because you never know: the dependency entry you put in the pom.xml may itself be further dependent on another artifact, and if you want to keep out that dependent artifact that would be automatically downloaded, you can do it with an exclusion. So you avoid a transitive dependency with the help of dependency exclusions.
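As a concrete example of an exclusion, the snippet below keeps a well-known transitive logging dependency from coming in; the versions shown are just an illustration.

    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.5.13</version>
      <exclusions>
        <!-- httpclient would normally pull this in transitively -->
        <exclusion>
          <groupId>commons-logging</groupId>
          <artifactId>commons-logging</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

Everything else about httpclient still resolves as usual; only the excluded artifact is skipped.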
Now, what is a mojo? A mojo is nothing but a Maven plain Old Java Object. It is an executable goal in Maven, and a plug-in refers to the distribution of such mojos. Mojos enable Maven to extend functionality that is not already found in it; it is a kind of extension through which we can get additional capabilities and executions. What is the command to create a new project based on an archetype? Again, archetype:generate is what we normally use to create new projects. You can pass the parameters in the command itself, or run it in interactive mode, where it takes the parameters from the end user and creates the project accordingly, wherever you wish to create it. Explain the Maven settings.xml file. The settings.xml file contains the elements used to define how a Maven execution should behave: the different repositories, local, remote and central, are all configured there; the configuration that drives the executions and the build process lives there; and we can also put credentials there, for example how to connect to a remote repository. All of that is what settings.xml is about. What is meant by the term super pom? The super pom refers to the default pom of Maven, which all Maven poms derive from; it is effectively a reference to a parent pom. If you define some dependencies in that parent pom, the child poms will automatically inherit all of those dependencies. We can put configurations and common setup in the super pom so that multiple projects referring to it can inherit them easily; that is the main reason we use the super pom, so that shared configuration and processes are inherited everywhere. And where exactly are dependencies stored? Dependencies are stored in different locations: you have the local repository, on the local developer's machine, and remote repositories, which are available on a server in the form of an Artifactory-style repository manager. Now let's talk about the Gradle installation, because this is a very important step: when we do the installation, we have to download the Gradle executables. There are primarily four steps involved. The very first is to check whether Java is installed; if it is not, you can go to OpenJDK or to Oracle Java and install the JDK on your system, and JDK 8 is the most commonly used these days. Once Java is downloaded and installed, you download Gradle; once the Gradle binaries are downloaded, you add the environment variables, and then you validate whether the Gradle installation is working as expected. We will do the Gradle installation on our local system, on the Windows platform, and see exactly how to install Gradle and which version we are going to install. So let's go back to the system and walk through it. This is the website for the Oracle Java JDK. Here you have the different JDKs, and you can go with whichever option you want to select; JDK 8 is the most commonly used, most compatible version available these days. In case you want to check whether the JDK is already installed on your system, all you have to do is run java -version, and the output will tell you whether Java is installed. On my system Java is installed, but if you really need to do the installation, you have to download the JDK installer from this Oracle website and then proceed further from there.
Now, once the JDK is installed, you move on to the Gradle installation, because Gradle is what will perform the build automation and all that. You download the binaries, a zip file containing the executables, and then configure a couple of environment variables so the system can find them. Right now we have the prerequisite in place, a Java version installed, so the next step is to download the executables. To download the latest Gradle distribution, you click this link; there are different options, like version 6.7, offered as binary-only or complete. We'll go for binary-only, since we don't want the sources, just the binaries and the executables. It is downloading now; the installer is close to 100 MB. We just have to extract it into a directory and then configure that same path in the environment variables, so the Gradle executables can run and give us their output. It may take some time; once the download is done, we extract it, and once the extraction is done, we can go back and set up the configuration. While these files are getting extracted, we already have the folder structure here, so let's copy this path; there are two environment variables we have to configure, GRADLE_HOME and an entry in the PATH variable. To save time while it extracts, we can go to the environment variables: right-click on This PC, Properties, then Advanced system settings, then Environment Variables. Here we add GRADLE_HOME, and for this one we do not go all the way down to the bin directory; it should point only to where Gradle is extracted. We click OK, and then we go to the PATH variable and add a new entry, and in this one we do go down to the bin directory, because the gradle executable has to be found when I run the gradle command. Those are the two variables to configure, then OK, OK and OK, and that part is done. Now you just have to open a command prompt and check whether the commands you run complete successfully.
Now, in this part we will also be working on some demos and hands-on exercises to understand how we can make use of Gradle for performing the build activity. So let's begin with the first question: what exactly is Gradle all about? Gradle is a build tool used for build automation, and it can be used with various programming languages, though primarily it is used for Java-based applications. It is the kind of build tool that helps you prepare builds automatically. Earlier we used to do the build activity from Eclipse, manually, but with the help of a build tool we do it automatically, without any manual effort. There are a lot of activities performed during the build process, primarily compilation, linking and packaging; these are the tasks we perform so that the build can be done and automated. This process also gets standardized, because if you want to automate something, a standard process is required before you go ahead, and that is exactly what a build tool gives us. Gradle can be used with a variety of programming languages: Java is the primary one, but other languages like Scala, Android, C, C++ and Groovy can use the same tool. Gradle uses a Groovy-based domain-specific language (DSL) rather than XML; Ant and Maven are XML-based build tools, but Gradle does not depend on XML. Beyond the build itself, it can also run your test case automation and then publish artifacts to an artifact repository. Primarily this tool is known for build automation on big, large projects, where the amount of source code and effort is high; in that case this tool makes the most sense. Gradle includes the pros of both Maven and Ant but removes the drawbacks, all the issues we face with those two build tools. To make the DSL point concrete, a small sketch of a Gradle build script follows below.
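Here is a minimal sketch of a build.gradle file written in the Groovy DSL; the plugin choice and the JUnit dependency are illustrative, not taken from the course:

```groovy
// build.gradle -- minimal sketch of Gradle's Groovy-based DSL
// (compare with the equivalent XML you would write in a Maven pom.xml)
plugins {
    id 'java'          // contributes compileJava, test, jar and related tasks
}

repositories {
    mavenCentral()     // where declared dependencies are downloaded from
}

dependencies {
    testImplementation 'junit:junit:4.13'   // an example test-only dependency
}
```

The same declarations in Maven would take several dozen lines of XML, which is exactly the verbosity the Groovy DSL avoids.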
Now let's see why exactly Gradle is used, because that's a very valid question. The first reason is that it resolves issues faced with other build tools; that's the primary reason. We already have tools like Maven and Ant available, but Gradle removes the issues we face when implementing those tools. The second reason is that it focuses on maintainability, performance and flexibility. It focuses on how we can manage big, large projects, and it gives us flexibility in what kind of project we want to build: today I build it one way, tomorrow the source code gets modified or added to, and I have the flexibility to change the build scripts and adapt the automation. And the last reason is that it provides a lot of features and a lot of plugins. Plenty of features is a benefit we get with Maven also, but Gradle additionally provides a lot of plugins. For example, normally in a build process we compile source code, but sometimes we want to build an Angular or Node.js application; in that case we may need to run some command-line executions as part of the build, just to make sure the commands run and we get the output. There are plugins and task types available for exactly that, and we use them to execute the build process and do the automation, as in the sketch below.
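As a concrete example of that flexibility, here is a hypothetical task that shells out to npm during the build; the task name, working directory and npm script are assumptions for illustration:

```groovy
// Hypothetical build.gradle snippet: running a Node.js build step from Gradle
task npmBuild(type: Exec) {
    workingDir 'frontend'             // assumed folder containing package.json
    commandLine 'npm', 'run', 'build' // assumed npm script defined in that project
}
```

Running `gradle npmBuild` would then execute the command as an ordinary task, which is much more awkward to express in Maven's phase model.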
Now let's talk about Gradle and Maven, because Maven was also primarily used for Java, and Gradle is likewise used primarily for Java, so what is the reason we prefer Gradle over Maven for the build automation layer? This is very important to understand. The first difference is that Gradle uses the Groovy DSL, the domain-specific language, whereas Maven is considered a project management tool that works with POMs, XML-format files. Maven is used for Java projects, but the XML format is used there: whatever dependencies and attributes you put into Maven live in the pom.xml. Gradle, on the other hand, does not use XML formats; whatever build scripts you create are written in the Groovy-based DSL. The overall goal of Gradle is to add functionality to a project, whereas the goal of Maven is to complete project phases: phases like compilation, test execution, packaging, and then deploying to the artifact repository, and those phases in Maven happen in a fixed sequence. In Gradle we instead specify the different tasks we want to manage: we can add our own custom tasks, we can override tasks, and we can even change the order and decide how the different steps execute. So Maven is a phase mechanism, while Gradle is organized around tasks and flexibility. There is also caching: Maven does not have any kind of built-in cache, so every time you run the build, the plugins and all that information get loaded again, which definitely takes a lot of time. Gradle uses its own internal cache, so it is not doing everything from scratch; whatever is already available in the cache, it just picks up and proceeds with the build. That is the reason Gradle's performance is much faster compared to Maven. A short sketch of a custom task, the unit Gradle works in, follows below.
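To show what "working on tasks" means in practice, here is a small custom task; the task name and message are made up for illustration:

```groovy
// build.gradle snippet: a custom task, the basic unit of work in Gradle
task hello {
    doLast {
        println 'Running my custom build step'
    }
}

// illustrative wiring: make the standard build run the custom task first
// (assumes the 'java' plugin is applied so that a 'build' task exists)
build.dependsOn hello
```

In Maven you would have to bind a plugin execution to one of the fixed lifecycle phases to get a similar effect; in Gradle a task is simply declared and ordered explicitly.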
Now let's talk about the Gradle installation, because this is a very important aspect: when we do the installation we have to download the Gradle executables. There are primarily four steps involved. The very first one is to check whether Java is installed; if it is not, you can go for OpenJDK or for Oracle Java and install the JDK on your system, and JDK 8 is the most commonly used nowadays. Once Java is installed, you download Gradle. Once the Gradle binaries, the zip file with the executables, are downloaded, you add the environment variables, and then you validate whether the Gradle installation works as expected. We will be doing this installation on a local Windows system, so let's go back to the system. This is the Oracle Java JDK website; from here you can select whichever JDK option you want, and JDK 8 is the most commonly used and compatible version available. In case you want to check whether the JDK is already installed on your system, you just run java -version, and the output tells you whether Java is installed or not. On my system Java is installed, but if you need to install it, you download the JDK installer from this Oracle website and proceed from there. Once the JDK is installed, you go for the Gradle installation, because Gradle is what will be performing the build automation. You download the binaries, a zip file containing the executables, and then you configure some environment variables so the system picks them up. So right now we have the prerequisite, the Java installation, in place. The next thing is to download the executables. To download the latest Gradle distribution you click here; there are different options, like version 6.7, and it comes as binary-only or complete. We will go for binary-only, because we don't want the sources, just the binaries and executables. It is getting downloaded now; it is close to 100 MB. Then we just have to extract it into a directory, and that same path needs to be configured in the environment variables so the Gradle executables can run. Once the download is done, we go to the Downloads folder and extract the archive; the extraction is required so that we can point our environment variables at this path. We already have the folder structure here, so we copy this path. There are two environment variables to configure: GRADLE_HOME and an entry in the Path variable. Right-click on This PC, choose Properties, go to Advanced system settings, then Environment Variables. Here we create GRADLE_HOME; for this one we do not go all the way down to the bin directory, it only needs to point to where Gradle is extracted. Then we go to the Path variable and add a new entry, and in this one we do include the bin directory, because the gradle executable must be found when I run the gradle command. Configure these two variables, then OK, OK and OK, and this part is done. Now you just open the command prompt and check whether the commands you run execute successfully. This will help us understand how the whole build process and the build tools integrate; a command-line sketch of the same setup follows below.
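For reference, here is roughly what the same setup looks like from the command prompt; the extraction path is an assumption, so substitute your own, and note that the GUI method shown in the video is the safer way to edit Path:

```bat
:: Assumed extraction location -- adjust to wherever you unzipped Gradle
setx GRADLE_HOME "C:\Users\you\Downloads\gradle-6.7"
setx PATH "%PATH%;%GRADLE_HOME%\bin"

:: open a NEW command prompt so the variables take effect, then verify
java -version
gradle -v
```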
Once the extraction is done, you run java -version in cmd to check the Java version, and then check the version of Gradle that is installed; you can see it shows that version 6.7 is installed here. That is how the Gradle installation is performed on the system. So let's go back to the content and talk about the Gradle core concepts. The very first one is projects. A project represents a piece of work to be performed, like deploying an application to a staging environment or performing a build. A Gradle project is made up of multiple tasks, and these tasks need to execute in a sequence; the sequence is very important, because if it is not set properly the execution will not happen in the right order. A task is an entity in which we perform a series of steps: compiling source code, preparing a JAR file, preparing a web application archive (WAR) or an EAR file; some tasks can even publish our artifacts to an artifact repository so they are stored in a shared location. Build scripts are where we store all this information: what the dependencies are and which tasks we want to run is all present in the build.gradle file, including which dependencies should be downloaded. Now let's talk about the features of Gradle, one by one. The very first one is high performance; as we already discussed, for large projects Gradle is the better approach compared to Maven because of the performance we get: it uses an internal cache which makes the builds faster. The second is support: it provides good support for how you perform builds, and being a recent tool, the support around plugins and dependency management is also quite good. The next is multi-project build support: in case your repository contains multiple projects, all of them can easily be built with the same Gradle project and Gradle scripts, as in the sketch below.
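A multi-project build is driven from a settings.gradle file; the project and subproject names here are hypothetical:

```groovy
// settings.gradle -- sketch of the multi-project support described above
rootProject.name = 'gradle-project'   // assumed root project name
include 'app', 'library'              // hypothetical subprojects, each with its own build.gradle
```

Running `gradle build` at the root then builds every included subproject.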
Incremental builds are also something you can do with Gradle: if you have made only incremental changes, you can perform only an incremental build. Then there are build scans: we can have scans performed on the build and on the source code, for example through integrations with Sonar-type tools such as SonarQube, to understand how the build happens. And finally there is familiarity with Java: for Java, Gradle is considered the default choice, and not only Java; Android, which also uses the Java programming language, uses Gradle for its builds and gains the same benefits. In all these different ways, this tool provides a lot of features and makes a reliable build tool for Java-based projects, or projects in other programming languages. Now let's see how we can create a Java project with Gradle. Gradle is already installed, so we just have to create a directory where we can write some build scripts and run a Gradle build. Let's go back to the machine and open the terminal. First I create a directory, say gradle-project, and once it is created I go inside it to create the Gradle files. First we create the build script: vi build.gradle. In this file we are going to use two plugins, so we write apply plugin: 'java' and apply plugin: 'application'. Once we save, the build.gradle file is available in the directory. If you want to see the available tasks, you run the gradle tasks command; it processes the build scripts and lists all the different tasks you can configure and work with: jar, clean, build, compile, init, assemble, javadoc, check, test and so on. If you want to run an actual build, you can run gradle clean to perform the clean activity first, or you can run gradle clean build, which performs the cleanup and the build process together; both will be executed, and the status, whether it is a success or a failure, is reported back to you. The commands from this demo are collected below.
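Putting the demo together, here is the build script and the command sequence it walks through; the mainClassName line is an assumption I've added, since the application plugin expects an entry point once you package or run the application:

```groovy
// build.gradle from the terminal demo
apply plugin: 'java'
apply plugin: 'application'

// assumed entry point -- the application plugin needs one for its run/dist tasks
mainClassName = 'com.example.App'
```

```
gradle tasks              # list every task contributed by the applied plugins
gradle clean              # remove outputs of any previous build
gradle clean build        # clean first, then compile, test and assemble
gradle clean build --info # the same, with detailed per-task logging
```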
In the previous step, when you ran gradle clean, it was only running one task, but when you run gradle clean build it gives you much more information. In fact you can get further detail by adding the --info flag; with that, all the steps are reported back to you, showing how the tasks were executed and what the responses were. So that's how you create a pretty simple, straightforward Gradle project, run a couple of Gradle commands, and understand the basic commands and how the configuration works. Let's go back to the main content and move on to the next part: preparing a Gradle build project in Eclipse. This time we are not working on the local file system directly, creating folders and files by hand; we are using Eclipse to create a new Gradle project. Now Eclipse is open, and the very first thing is to install the Gradle plug-in so that we can create Gradle projects, then configure in the preferences how the Gradle plug-in finds Gradle, and then do the build. So first we go to the Eclipse Marketplace and search for Gradle; the search shows the plugins related to Gradle, and we go for Buildship Gradle Integration. Click Install; it proceeds with the installation and downloads it. In some cases it may already be part of your Eclipse IDE, so you can check the Installed tab to see whether this plug-in is there, but in this case we are installing it. Once the installation is done we have to restart Eclipse so the changes are reflected and the plugins can be activated. It downloads the JAR files, which takes some time, and once the progress completes it asks us to restart; click Restart Now, and Eclipse restarts. Now we set up the Gradle configuration: we go to Window and then Preferences, and select the entry where the Gradle options are available. The important setting is the Gradle distribution: if you go for the Gradle wrapper, it downloads Gradle for the project and uses the gradlew or gradlew.bat file, but if you already have a local installation, you can point to that instead, as sketched below.
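The wrapper option mentioned above can also be generated from an existing installation; these commands are the standard way to do it, with the version number matching the one used in this course:

```
gradle wrapper --gradle-version 6.7   # generates gradlew, gradlew.bat and the wrapper JAR
gradlew.bat build                     # Windows; use ./gradlew build on Linux or macOS
```

The advantage of the wrapper is that anyone who checks out the project builds it with the same Gradle version, without installing Gradle themselves.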
In the previous demo we already extracted Gradle, so we go to Downloads, where Gradle is available, and select it as the local installation; this points the preference at the directory where Gradle lives. You can also select the build scan option: once it is enabled, the projects are scanned and published, which is an additional option; if you want to disable it, you can, and go with this configuration. Then we click Apply, and Apply and Close, and the configuration is done. Now we create the project. You can right-click in the explorer or go to the File menu, choose New Project, and in there choose Gradle Project; give it a name, say gradle-project, click Next and then Finish. When you create the project, a folder structure is created automatically, along with some Gradle scripts which we will modify; we'll look at what the generated Gradle build script looks like, add a couple of Selenium-related dependencies, and see what impact those dependencies have on the overall project, which is a very important aspect to consider. While the wizard runs, some plugins and binaries get downloaded; once the project is created and imported, we can expand it. You can see the Gradle Tasks view, which shows the different tasks available, for example the tasks that run inside the build group, and Gradle Executions is also shown here. Expanding the Gradle project you can see the generated library entries, the settings.gradle file, in which the gradle-project is referenced, and the folder structure: src/main/java and src/test/java.
The src/test/resources and src/main/resources folders are also created. The project and external dependencies are listed here too, so let's add a dependency to the Gradle build script and see how to do that. If we open the build.gradle file, you can see the generated dependencies, like testImplementation for JUnit, and an implementation dependency. When you add coordinates here, the JAR files automatically become part of the project's dependencies, which means you do not have to store them inside the repository. So let's open a dependency page: we go to mvnrepository.com and open the dependency link for Selenium Java; it shows the dependency snippet for the different build tools, one for Maven and one for Gradle. We just copy the Gradle one, which gives the group, the name and the version we are using, go back to Eclipse, paste that dependency into build.gradle and save it; this is what provides the Selenium dependencies. Now we just refresh the project: right-click, and under Gradle choose Refresh Gradle Project. The first time it may take a while to download all the Selenium-related dependencies, but after that you can see them simply added to the project. If for any reason you comment those lines out and synchronize again, all the dependencies added from the Selenium coordinates are gone again. So this is the way you keep adding the dependencies required for building your source code, and that's the best part about Gradle here. That's how you prepare a Gradle project within Eclipse, and from here you can keep adding source code to the project; the dependencies block ends up looking like the sketch below.
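For reference, the resulting dependencies block looks roughly like this; the Selenium version is the 3.141.59 release used later in this course, and the JUnit line stands in for whatever the project wizard generated:

```groovy
// build.gradle snippet: dependencies after pasting the mvnrepository coordinates
dependencies {
    implementation group: 'org.seleniumhq.selenium',
                   name: 'selenium-java',
                   version: '3.141.59'            // copied from mvnrepository.com
    testImplementation 'junit:junit:4.12'         // stand-in for the wizard-generated test dependency
}
```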
Selenium installation is a three-step process with certain prerequisites. The first prerequisite is that you need Java on your system, so we will install Java first; then we will work with the Eclipse IDE, so we will install Eclipse; and then we will install Selenium. For Java we will install Java 8, and for Eclipse version 4.10, which was the last stable version, released in December last year; for Selenium we will download the latest 3.14 version. So let's get started with the first step, the Java installation. To install Java, go to the browser and simply search for Java 8 download. You will see an Oracle site listed, and that is where you download the Java package. To download any JDK package from the Oracle site you need an account: if you already have one, just log in and download the JDK; if not, create a new account on Oracle, log in, and then download Java 8. I already have an account and have already downloaded the package, but I'll show you where to download it from: on this page, if you scroll down, you see the Java Development Kit 8u211, which is the version we will download. Click on Accept License Agreement, and since we are working on a Windows system, download the Windows package; it goes into your Downloads folder. As I said, I've already downloaded it: I created a directory called installers where I keep all my installables, and inside it a folder called Java installer. Double-click the installer, click Run, and the installation takes a few minutes. Click Next; for the installation directory you can change to whatever drive and folder structure you want, but I'll leave it as default and click Next. The installation proceeds; accept the license terms, click Next, leave the destination folder as it is, and JDK 8 is successfully installed. Close the installer, and let's verify the installation: go to the command prompt and type java -version. It says Java version 1.8, which tells us Java is installed successfully. After the installation there are a couple of configurations to do: set the Path variable and set a JAVA_HOME directory. First let's find where Java got installed: the directory is under Program Files\Java (I have some residuals from previous versions that were installed and uninstalled; let's not worry about those), and inside the latest JDK there is a bin folder. This bin path is what we need in the Path variable, so copy it, go to Control Panel, click on System, go to Advanced system settings, and in Environment Variables find the Path variable and click Edit. Be very careful when editing your Path variable: do not overwrite anything; always go into edit mode, go to the end, and paste the path you copied from the Explorer window. Say OK, and the Path setting is done. Next we add a new environment variable: click New, type JAVA_HOME, and for the value set the same path but without the bin directory, that is, only up to the Java directory. Copy that path, paste it, say OK, OK and OK, and we are done. Let's go to the command prompt again and type java -version; everything looks fine, so Java is successfully installed on the system. A command-line sketch of the same configuration follows below.
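If you prefer doing the same configuration from the command prompt, it looks roughly like this; the JDK path matches the 8u211 default install location but may differ on your machine, and the GUI route above remains the safer way to edit Path:

```bat
:: assumed default install location for JDK 8u211 -- match it to your own system
setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_211"
setx PATH "%PATH%;%JAVA_HOME%\bin"

:: open a fresh command prompt, then confirm both settings
java -version
echo %JAVA_HOME%
```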
So what's the next installation step? We need to install Eclipse, so let's go back to the browser. To download Eclipse we get the package from eclipse.org. When you go to eclipse.org you see the latest version available, which at the time this video was made was 2019-06. With Eclipse, since it's open source, I prefer to work with the last stable version, and so do most developers; that is why I've picked last December's release, 4.10. You can always choose the latest version, but if there are any issues and it's your first time working with Eclipse, you'll get confused about where those issues are coming from, so I'd still recommend the last stable version. To get it, click on Download Packages, and if you scroll down the page you see More Downloads, with a list of all the previous Eclipse releases; click the 4.10 version, then click the OS on which you want to install Eclipse. For us it is Windows, so I click 64-bit Windows, then Download, and you get the complete package. Let's go back to our installers directory; this is the Eclipse installer I got. The next step is to launch it: double-click, say Run, and you'll see multiple options for the Eclipse installation; depending on your requirement you can install any of these packages. For us, we just need Eclipse IDE for Java Developers, so I select that and say Install. Again you have a choice of directory; I've chosen the D drive, and the default directory name it takes is fine, so we leave it as it is. You also have the option to create a Start menu entry and a desktop shortcut; leave the default selection and click Install. This takes a while; when prompted, click Select All and accept the license, and you can close that window. The installation completes successfully, so let's click Launch and look at the first window that opens.
When you launch Eclipse, you need to specify a workspace directory. What is this workspace directory? It is a folder where all the Java files, programs and artifacts you create through Eclipse will be stored. It can be any location on your system; you can browse and change it. In our case I'll go to the D drive, select a directory I already have, and create a folder called workspace; I'll name it my workspace and then say Launch. Every time I open Eclipse, this is taken as my default workspace, and all my programs and automation scripts are stored in this location. A welcome window opens, which we can just close, and there we go: Eclipse is open with a certain perspective. There are some windows here that we do not need, so let's close them. The first thing to do after launching Eclipse is to create a new project: I'll say File, New, and since I'm going to be using Java with Selenium, I'll create a Java project. Give it a project name, say my first project. You have an option here to select the JRE you want to use; we just installed JDK 1.8, so I'll click Use Default JRE. You also have the option of a project-specific JRE: for example, I could have two projects, one working with JRE 1.8 and another with the latest Java, maybe Java 12, and I can have more than one Java installed on the machine; this option lets me select whichever Java I want to work with, and any other installed Java would show up in this list. Since we have only one Java installed, Java 1.8, I say Use Default JRE and click Finish. Now if you observe the folder structure of the project that was created, all the reference libraries for this Java have been added, and we are ready to create Java programs in this project. So we have successfully done the second step of our installation, the Eclipse installation. After this we need to install Selenium, so let's go back to the browser and see which files to download. I will go to seleniumhq.org; if you're working with Selenium, this website is going to be your bible: everything and anything related to Selenium is available here, whether you want to download files or refer to the documentation. What we want now are the installables, so go to the Download tab. For you to install Selenium and start working with it, there are three things to download. One is the standalone Selenium server: this is not required immediately when you get started, but when you start working with the remote Selenium WebDriver, or when you have a Grid setup, you will require the standalone server. For that you can just download the latest version available here.
When you click on it, the file is downloaded into your Downloads folder; that is one file you need to keep. Next are the Selenium client and WebDriver language bindings. In today's demo we will be looking at Selenium with Java, which means the Java client package is what I need to download. For every programming language Selenium supports, there is a respective downloadable: if you're working with Python you download the client library for Python, and since we are working with Java you download the Java package. Simply click the link and it downloads the Java package for you, which is basically the JAR files. So we have the client libraries, and then there is one more component we need. With Selenium you are going to automate web browser applications, and you also want your applications to run on multiple browsers; that means the automation scripts you create should be able to run on any browser. Selenium works with multiple browsers like Edge, Safari, Chrome, Firefox and others; it even has support for headless browsers. Every browser it supports comes with its own driver file. Say we want to work with the Firefox driver: for us to start working with the Firefox browser we need to download something called the geckodriver, and if you want to work with the Chrome browser you need the chromedriver. Depending on which browsers you'll be testing with, click each of those links and download the latest driver files. Since we are going to work with Firefox in this demo, I just click the latest link, which takes me to the driver files. Driver files are specific to each operating system: if you scroll down you see separate driver files for Linux, for Mac and for Windows, so download the one for the operating system where you'll be running your tests; since we are on a Windows machine, that's the one I need. So those are the three different packages we download from seleniumhq.org to install Selenium. Let me show you the folder where I have already downloaded all of this. You see selenium-java-3.141.59: this is nothing but the client library we saw on the site. It comes as a zip file, and after unzipping, this is the folder structure: there are two JAR files here, and in the libs folder there are multiple JARs, and we need all of these to work with Selenium. We also downloaded the driver files: after downloading the browser drivers I created a directory called drivers and kept all the browser drivers there, so I have a driver file for Chrome, the geckodriver for Firefox, and one for Internet Explorer. That's all we need. Once we have all this, go to Eclipse, right-click on the project you created, go to Build Path and say Configure Build Path, then go to the Libraries tab. You see the JRE libraries here, which got added first; now, similarly, we are going to add the Selenium JARs to this library.
On the right you can see Add External JARs; click it, go to the folder where you downloaded Selenium, and select all the JAR files available: I have two JARs here, so I select them and click Open. Then I click Add External JARs again, and from the libs folder I select all five JARs and click Open, so you should see all seven JAR files here. Once you have that, just say Apply and Close. Now if you look at your project directory, you'll see a folder called Referenced Libraries, and that is where all the Selenium JARs appear. This is a very simple installation in Eclipse: to install Selenium, you just need to import all the Selenium JARs into Eclipse, and your system is ready for Selenium scripts. All right, now let's test our installation by writing a small Selenium test script. I'll go to the source folder, right-click, New, Java Class; let's name it FirstSeleniumTest, select public static void main, and click Finish. Now let's create a use case: say we want to launch a Firefox browser and then open the Amazon site; just two simple things in this test script. What I usually do is create a method for any functionality I want, so I create a method here called launchBrowser. Whenever you start writing Selenium scripts, the first line you need is a declaration of an object of the WebDriver class, so I write WebDriver driver. If you hover over the error it shows, it says import WebDriver from org.openqa.selenium; remember, when we installed Selenium we imported all those JARs, so whenever we want to use WebDriver we need to import this class from those packages. Just click on the import suggestion and it's done. Next, launching a Firefox browser is a two-step process: set the system property, then launch the driver. So I write System.setProperty, a method which takes two arguments, a key and a value. The key I mention here relates to the geckodriver, since I'm working with Firefox: in double quotes I write webdriver.gecko.driver as the key, and the value is the fully qualified path to your driver file. You know where we kept our driver files: in D:, under selenium tutorial, then installers, then the drivers folder. I copy the complete path, paste it, and along with it I provide the file name of the gecko driver, geckodriver.exe, and complete the statement. Next, once the property is set, I give the command for launching the Firefox driver: I simply use the driver object I created and write driver = new FirefoxDriver(). The same way we imported the package for WebDriver, we also need to import the package for FirefoxDriver, so hover the mouse over it and select the import. With these two statements we are able to launch the Firefox browser; the full script, assembled, is sketched below.
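Assembled into one class, the script built up in this walkthrough looks like the sketch below; the driver path and target URL are the ones used in this demo, so point them at your own locations:

```java
// FirstSeleniumTest.java -- the demo script in one runnable class
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class FirstSeleniumTest {

    public void launchBrowser() {
        // tell Selenium where the Firefox (gecko) driver executable lives
        System.setProperty("webdriver.gecko.driver",
                "D:\\selenium tutorial\\installers\\drivers\\geckodriver.exe");

        WebDriver driver = new FirefoxDriver();  // opens a Firefox window
        driver.get("https://www.amazon.in");     // navigates to the site under test
    }

    public static void main(String[] args) {
        FirstSeleniumTest obj = new FirstSeleniumTest();
        obj.launchBrowser();
    }
}
```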
As I said, the next thing in our use case is to launch the amazon.in website. For that there is a command in Selenium, driver.get, and you pass the URL to it. To write the URL, what I usually do is go to my browser, open the website I want to work with, in our case amazon.in, copy the fully formed URL, go to Eclipse and paste it there; this ensures I don't make any mistakes typing out the URL. Let's complete the statement, and we are done. Now in the main function I create an object of this class and call the method: FirstSeleniumTest obj = new FirstSeleniumTest(); and then obj.launchBrowser(). Let's save this and execute it: right-click, Run As, Java Application. Mozilla Firefox is launched, and now it should open Amazon; bingo, there goes our first test script, which ran successfully. Before you start learning any automation tool, it's good to look back at what manual testing is all about, what its challenges are, and how an automation tool overcomes those challenges; challenges are always overcome by inventing something new. So let's see how Selenium came into existence and how it evolved to become one of the most popular web application automation tools; then the Selenium suite of tools, because Selenium is not a single tool, it has multiple components, and we will look into each of them; and, since every automation tool has its own advantages and limitations, we will look at what Selenium's advantages and limitations are and how we work around those limitations. All right, let's get started with manual testing. A definition: manual testing involves the physical execution of test cases against various applications to detect bugs and errors in your product. It is one of the primitive methods of testing software, and it was the only method we knew of earlier. It is the execution of test cases without using any automation tools; it does not require knowledge of a testing tool, obviously, because everything is done manually, and you can practically test any application this way. Let's take an example: say you are testing a Facebook application; open the Facebook application, say the create-an-account page, and that web page is under test. Now, as a tester, you would write multiple test cases to test each functionality on this page; you would use multiple data sets to test each of the fields, like first name, surname, mobile number or the new password; you would also test the multiple links on the page, like forgotten account or create a new page; you would look at each and every element of the page, like the radio buttons and drop-down lists; and apart from this, you would do accessibility testing and performance testing for the page, say measuring the response time after clicking the login button. Literally any type of test can be done manually. Once the test cases are ready, you start executing them one by one; you find bugs, your developers fix them, and you need to rerun all these test cases one by one again until all the bugs are fixed and your application is ready to ship.
Now, if one has to run test cases with hundreds of transactions or data sets and repeat them, can you imagine the amount of effort required? That brings us to the first demerit of manual testing: it is a very time-consuming process, and it is very boring too. It is also highly error-prone; why? Because it is done manually, and human mistakes are bound to happen. Since it is manual execution, the tester's presence is required all the time; one needs to keep doing the manual steps, step by step, every time. The tester also has to create reports manually, group them, and format them so that they look good, then send those reports manually to all stakeholders; then there is collecting logs from the various machines where the tests ran, consolidating them, creating repositories and maintaining them, and again, since it is all a manual process, there is a high chance of errors creeping in. The scope of manual testing is limited. Take regression testing, for example: ideally you would want to run all the test cases you have written, but since it's a manual process, you don't have the luxury of time to execute all of them, so you pick and choose test cases to execute, and that way you limit the scope of testing. Working manually with large amounts of data, which could be what your application needs, is impractical. What about performance testing? You want to collect metrics on various performance measures and simulate multiple loads on the application under test, and performing those kinds of tests manually is simply not feasible. To top it all, say you're working in an agile model, where code is being churned out by developers while testers build their tests and execute them as and when builds become available, and this happens iteratively; you need to run these tests many times during the development cycle, and doing that manually definitely becomes very tedious and boring. Is this an effective way of doing it? Not at all. So what do we do? We automate it. That tells us why we automate: one, for faster execution; two, to be less error-prone; and three, the main reason, to enable frequent execution of our tests. There are many automation tools available in the market today, and one such tool is Selenium. The birth of Selenium: much before Selenium there were various tools in the market, like RFT and QTP, to name a couple of popular ones. Selenium was introduced by a gentleman called Jason Huggins way back in 2004. He was an engineer at ThoughtWorks, working on a web application that needed frequent testing, and he realized the inefficiency of manually testing this web application repeatedly. So what he did was write a JavaScript program that automatically controlled the browser's actions, and he named it JavaScriptTestRunner. Later he made it open source, and it was renamed Selenium Core; that is how Selenium came into existence, and since then Selenium has become one of the most powerful tools for testing web applications. So how does Selenium help? We saw all the demerits of manual testing, and by automating test cases, Selenium helps in speedy execution; since manual execution is avoided, the results are more accurate, with no human errors; since your test cases are automated, the human resources required to execute them are far less than for manual testing, so there is a lower investment in human resources; and it saves time, and you know, time is money.
It's cost-effective: as Selenium is open source, it is available free of cost. Early time to market: since you save effort and time on manual execution, your clients will be happier, as you'll be able to ship your product faster. Lastly, since your test cases are automated, you can rerun them at any point of time, as many times as required. If this tool offers so many benefits, we definitely want to know more about what Selenium is. Selenium enables us to test web applications on all kinds of browsers, like Internet Explorer, Chrome, Firefox, Safari, Edge, Opera, and even headless browsers. Selenium is open source and platform independent; the biggest reason people prefer this tool is that it is free of cost, whereas QTP and RFT, which we talked about, are chargeable. Selenium is a set of tools and libraries to facilitate the automation of web applications; as I said, it is not a single tool, it has multiple components which we'll see in detail shortly, and all these tools together help us test web applications. You can run Selenium scripts on any platform; it is platform independent because it was primarily developed in JavaScript. It's very common for manual testers not to have in-depth programming knowledge, so Selenium has a record-and-playback tool called Selenium IDE, which can be used to capture a set of actions as a script and replay the script back; however, this is mainly used for demo purposes, because Selenium is such a powerful tool that you should take full advantage of all its features. Selenium provides support for different programming languages like Java, Python, C#, and Ruby, so you can write your test scripts in any language you like, and one need not have in-depth or advanced knowledge of these languages. Selenium also supports different operating systems: it has support for Windows, Mac, and Linux, including Ubuntu, so you can run your Selenium tests on any platform of your choice. Hence Selenium is the most popular and widely used automation tool for automating web applications. The Selenium set of tools: let's go a little deeper into Selenium. As I said, Selenium is a suite of tools, so let's look at the major components and what they have to offer. Selenium has four major components. One, Selenium IDE: the simplest tool in the Selenium suite, an integrated development environment. Earlier, Selenium IDE was available only as a Firefox plug-in, and it offered simple record-and-playback functionality; it is very simple to use, but it's mainly used for prototyping, not for building automation in real-time projects, because it has its own limitations, like any other record-and-replay tool. Two, Selenium RC, which is nothing but Selenium Remote Control: it is used to write web application tests in different programming languages, and it interacts with the browser with the help of something called the RC server, communicating through simple HTTP POST/GET requests. This was also called the Selenium 1.0 version, but it was deprecated in Selenium 2.0 and completely removed in 3.0, replaced by WebDriver, and we will see in detail why that happened. Three, Selenium WebDriver: the most important component of the Selenium suite, a programming interface to create and execute test cases. It is, obviously, the successor of Selenium RC, because of certain drawbacks RC had.
What WebDriver does is interact with the browser directly, unlike RC, which required a server to talk to the browser. And the last component is Selenium Grid. Grid is used to run multiple test scripts on multiple machines at the same time, so it helps you achieve parallel execution. With Selenium WebDriver alone you can only do sequential execution; Grid is what comes into the picture when you want parallel execution. And why is parallel execution important? Because in a real-time environment you always need to run test cases in a distributed environment, and that is what Grid helps you achieve. All of these together help us create robust web application test automation, and we will go into detail on each of the components. Before that, let's look at the history of the Selenium versions. What did Selenium version 1 comprise? It had the IDE, RC, and Grid. As I said earlier, RC had some disadvantages, so RC was on its path to deprecation while WebDriver was taking over. If you look at Selenium version 2, it had an early version of WebDriver alongside RC, so they coexisted. From version 3, RC was completely removed and WebDriver took its place. There is also a version 4 around the corner, with more features and enhancements; some of the features being talked about are W3C WebDriver standardization, an improved IDE, and an improved Grid. Now let's look at each of the components in the Selenium suite. Selenium IDE is the simplest tool in the suite; it is nothing but an integrated development environment for creating your automation scripts. It has record-and-playback functionality and is very simple and easy to use. It is available as a Firefox plug-in and a Chrome extension, so you can use either of these browsers to record your test scripts. It has a very simple user interface with which you can create scripts that interact with the browser. The commands created in the scripts are called Selenese commands, and they can be exported to a supported programming language so the code can be reused. However, the IDE is mainly used for prototyping, not for automation in real-time projects, because of the limitations that any record-and-replay tool has. A bit of Selenium IDE history: earlier, Selenium IDE was only a Firefox extension, and it had been available since Selenium version 1. Selenium IDE effectively died with Firefox version 55, which stopped supporting it, around the 2017 time frame. However, quite recently a brand new Selenium IDE was launched by Applitools, and they made it cross-browser: you can install it as an extension on Chrome as well as an add-on on Firefox. They completely revamped the IDE code and made it available on GitHub under the Apache 2.0 license, and for today's demos we will be looking at the new IDE. This new IDE also comes with a good set of features: reusability of test cases, a better debugger, and, most importantly, support for parallel test case execution. They have introduced a utility called selenium-side-runner that allows you to run your test cases on any browser: you can create your automation using the IDE on Chrome or Firefox, but from the command prompt, using selenium-side-runner, you can execute those test cases on any browser, thus achieving cross-browser testing.
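Just as a rough sketch of how that works, assuming you have Node.js available (selenium-side-runner is distributed through npm) and the matching browser driver, such as chromedriver, on your PATH: you save your IDE project as a .side file and run it from the command line like this. The project file name here is purely illustrative:

```
npm install -g selenium-side-runner
selenium-side-runner -c "browserName=chrome" MyProject.side
```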
Next, control flow statements. In previous versions of the IDE, control flow statements were available, but one had to install a plug-in to use them; now they are available out of the box. And what are these control flow statements? They are nothing but your if-else conditions, while loops, switch cases, and so on. The new IDE also has improved locator functionality, meaning it provides a failover mechanism for locating elements on your web page. So let's look at how this IDE looks, how we install it, and how we start working with it. For that, let me take you to my browser; let's go to Firefox. On this browser I already have the IDE installed, and when it is installed you will see an icon here that says Selenium IDE. How do you install it? You simply go to your Firefox add-ons, where it says find more extensions, type in Selenium IDE, and search for the extension. In the search results you will see Selenium IDE; just click on it. Since I have already installed it, it says remove here; otherwise it will give you an add button. Click the add button and it will install the extension, and once it is installed you should see the Selenium IDE icon here. Okay, now let's go ahead and launch the IDE. When I click on it, it shows me a welcome page with a few options. The first option says record a new test case in a new project: straight away, if you choose this, you can start recording a test case, in which case it creates a default project for you that you can save later. Then there is open an existing project, if you already have a saved project, followed by create a new project, and close. I already have an existing project for the demo, so I will say open existing project. I have created a simple script, and what it does is log me into Facebook using a dummy username and password; that's all, a very simple script with a few lines. So what we will do is simply run the script and see how it works. For that I am going to reduce the test execution speed so that you can see every step of the execution. All right, I will adjust this window, simply say run current test, and put the windows side by side so you can see exactly what the script is doing. Okay, now you can see both windows. Now it is typing in the user email, there you go; now it enters the password, and it has clicked on the login button. It will take a moment, and since these are dummy credentials you cannot actually log in, so you see this error window. Fine, that is the expected output here. Now, on the IDE, if you look after the test case has executed, every statement, every command I used, is color-coded in green, which means that step executed successfully. And here in the log window it gives you a complete log of the test case, right from the first step to the end, and the end result says FB login, which is my test case name, completed successfully. Let's look at a few components of this IDE. The first one is the menu bar, so let's go to our IDE. The menu bar is right here at the top. Here is your project name: you can either add a new project here or rename your project.
We already have a project, named Facebook, and then on the right you have options to create a new project, open an existing project, or save the current project. Then comes the toolbar. Using the options in this toolbar you control the execution of your test cases. The first one here is the recording button, which is what you use when you start recording your script. Then you have two options to run your test cases. The first is run all tests: in case you have multiple test cases written here, you can execute them one by one, sequentially. Otherwise, if you just want to run the current test, run current test is what you would use. The IDE also has a debugger option that you can use for step execution. For example, normally when I run the script it executes every command sequentially; instead, if I select the first command and do a step execution, then the moment it finishes the first command, which is opening Facebook, and I think that is already done here, yes, it waits immediately on the second command and says pause debugger. From here you can do whatever you like: change the command, pause your execution, resume it, completely stop the test execution, or select the option to run the rest of the test case. If we say run the test case, it simply goes ahead and completes the test case. There is another option here, the timer you see, which is the test execution speed, for executing your test cases at the speed you want. Say you are developing an automation script and want to give a demo: you sometimes need to control the speed so the viewer can see exactly which steps are being performed, and this option lets you control that. Do you see the gradations here? They run from fast to completely slow execution. In the previous demo I controlled the speed and then executed the script, so that we could watch every command as it ran. Next is the address bar: whenever you enter a URL here, that is where you want to conduct your test, and it also keeps a history of all the URLs you have used for running your tests. Then here is where your script is recorded: each instruction is displayed in the order in which you recorded it. And then you have something called log and reference. The log is the area that records each command as it gets executed; if you see here, it says open https://facebook.com and OK, meaning that command executed successfully, and after the complete test case is done it tells you whether the test case passed or failed. If there is a failure, you immediately see the test case marked failed in red. There is also the reference tab: for example, if I click on any command, the reference tab shows me the details of that command, what arguments it takes, and how you need to use it.
So now let's go ahead and write a simple script using this IDE; with this, you will get an idea of how we actually record scripts. I have a very simple use case here: we will open amazon.in, search for the product iPhone, and once we get the search results page where the iPhones are displayed, we will just assert on the title of the page. Simple, all right, let's do it. First I need the URL, so let me go to my Firefox browser and open amazon.in. Why am I doing this? Simply to get the right absolute URL, so I don't make any mistakes typing it. Okay, I've got it, so let me close all these windows; I don't need any of them. Here, in the tests tab, I will say add a new test and name it Amazon search, then click add. Now I enter the URL I just copied from my browser and say start recording. What it did was open the amazon.in URL, since I had entered it in the address box. Now let's do the test case. In my test case I said I want to search for iPhone; once I have that, I click the search button, and now it gives me a list of all iPhones. Then I said I want an assertion on the title of this page. To do that, the IDE gives me an option: I just right-click anywhere on the page and I see the Selenium IDE options, from which I select assert title. Then I close the browser, and that completes my test case. Now take a look at all the steps created for me. It says open followed by a slash, because I had already provided the base URL here; you can either replace it with the full URL or leave it as it is. Since this is going to be a proper script, and I might use it to run from my command prompt as well, I will replace this target with the actual URL. Then it sets a window size, and after that it has recorded every action I performed on the website. This is where it says type into this particular text box, which is my search box, and what did it type? iphone, the value I entered. Now, there is one more feature of this new IDE that I mentioned: the failover mechanism for locating techniques, and this is it. If you look here, this id=twotabsearchtextbox is nothing but that search box where we entered the text iPhone; the IDE identifies that web element by certain attributes, and it has captured multiple options for selecting that search box. Right now it has used the ID, but if you know the different locating techniques, you will see it has also identified others, like the name, the CSS selector, and the XPath. How does this help as a failover? Say tomorrow the amazon.in website changes the ID of this element: you are not going to come back and rewrite your scripts. Instead, with the same script, if the IDE is unable to find the element using the first locator, the ID, it simply moves to the next available one and keeps trying until one of them succeeds. That is the failover mechanism that has been added, and it is a brilliant feature, because most of our test cases break because of element locating techniques.
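So, read as a whole, the recorded test looks roughly like the sketch below. These are Selenese commands in command, target, value form; the exact locator values depend on what the IDE captured from the live page at recording time, so treat the XPath class name and the expected title text here as illustrative rather than exact:

```
open            https://www.amazon.in/
setWindowSize   1280x800
type            id=twotabsearchtextbox                      iphone
click           xpath=//span[@class='nav-search-submit']
assertTitle     Amazon.in : iphone
close
```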
Well, let's come back to our test. We added an assert title, right? What assert title does is simply capture the title of that particular page and check it. This is all a very simple test case. So now we stop the recording. I had also recorded a close browser step; for the moment I will comment that out, because if I just run the test case it will be very fast and you might not catch the exact command execution. With it disabled, the test will run all the steps and then just stay there without closing the browser. Now I'll say run current test. Amazon.in is launched, okay, it has typed in iPhone, and it has also clicked on search, so it is done. Now, since we are on the reference tab it does not show the output, so let's go to the log. It is a running log, so if you notice, the previous Facebook example we ran is in the same log; we have to read the log from running Amazon search, because that is our test case. You can see every command executed successfully, the assert title was done, and the test case passed. Now what we will do is modify that assert title: I'll just add some text, say a double s, to intentionally fail the test case, just to show you how the IDE behaves when there is a failure and how you get to know about it. So let's run the test case again; before that, let's close the previous window. Done. And now I will also uncomment the close step, because the failure is something I should be able to see in the logs anyway, so the browser can close after the test case executes. Okay, let's simply run the test case. Amazon.in is launched, it should search for iPhone now, yes, there you go. Now it should also close the browser; yes, it has closed the browser, and the test has failed. See here: this is the line where our command failed, because the expected title was not there, and if you look in the logs it says your assert title on amazon.in failed, the actual result was something different and did not match what we asked for. So that is how simple it is to use the IDE to create your automation scripts. We saw all the components of the IDE: the record button, the toolbar, the editor box, and the test execution log. Now let's come to the limitations of this IDE. With the IDE you cannot yet export your test scripts to WebDriver scripts; that support is not added yet, but it is in the works. Data-driven testing, like using Excel files or reading data from CSV files and passing it to the script, is still not available. You also cannot connect to a database to read test data or perform any kind of database testing, whereas with Selenium WebDriver you can. And unlike Selenium WebDriver, you do not have a good reporting mechanism with the IDE, such as TestNG or ReportNG. That brings us to the next component of the suite: Selenium RC, Selenium Remote Control. Selenium RC was developed by Paul Hammant; he refactored the code developed by Jason Huggins and was credited, along with Jason, as a co-creator of Selenium.
The Selenium server is written in Java. RC is used to write web application tests in different programming languages, as it supports several: Java, C#, Perl, Python, and Ruby. It interacts with the browser with the help of an RC server, and this RC server uses simple HTTP GET and POST requests for communication. As I said earlier, Selenium RC was called Selenium 1.0, but it was deprecated in Selenium 2.0, completely removed in 3.0, and replaced by WebDriver, and we will see why that happened and what the issue with the RC server was. This is the architecture of Selenium Remote Control at a very high level. When Jason Huggins introduced Selenium, the tool was a JavaScript program, which came to be called Selenium Core. An HTML page can carry JavaScript statements, which the web browser executes through its JavaScript engine, and that is what executed these commands. Now, this approach had one major issue. Say you have a test script, test.javascript here, that is trying to access elements from the google.com domain. Every element it can reach has to belong to the google.com domain, say mail, search, or drive: any element from these can be accessed by your test script. However, nothing outside the google.com domain was accessible; if your test script wanted to access something from yahoo.com, that was not possible, for security reasons, obviously. To work around that, testers had to install Selenium Core and the web server containing the web application under test on the same machine, and imagine having to do that for every machine used in testing: it is not feasible or even effective all the time. This restriction is called the same-origin policy. What the same-origin policy says is that it prohibits JavaScript from accessing elements of, or interacting with scripts from, a domain different from the one where it was launched, purely as a security measure. So if you have written scripts that can access google.com and anything related to it, those scripts cannot touch any elements outside that domain, such as yahoo.com in our example. To overcome this, the Selenium team created something called the Selenium Remote Control server, to trick the browser into believing that Selenium Core and the web application under test come from the same domain; that is what Selenium Remote Control was. Looking again at the high-level architecture of how this actually worked: first you write your test script in any of the supported languages, like PHP, Java, or Python, and before you start testing you launch the RC server, which is a separate application. The Selenium server is responsible for receiving the Selenese commands, the commands you have written in your script; it interprets them and reports the results back to your test. All of that goes through the RC server. The browser interaction from the RC server to the browser happens through simple HTTP POST and GET requests, and that is how your RC server and your browser communicate.
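Just to make that concrete, here is a rough, illustrative sketch of what that RC-era traffic looked like. RC is long obsolete, so take the exact parameter names with a pinch of salt, but the idea was that every command was a plain HTTP request to the RC server listening on port 4444:

```
# start a new browser session (illustrative RC wire format)
GET http://localhost:4444/selenium-server/driver/?cmd=getNewBrowserSession&1=*firefox&2=https://www.google.com

# each subsequent Selenese command travels the same way, e.g. opening a page
GET http://localhost:4444/selenium-server/driver/?cmd=open&1=/search&sessionId=<session-id>
```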
So how exactly does this communication happen? The RC server acts like a proxy. Say your test script asks to launch a browser: the command goes to the RC server, the RC server launches the browser and injects the JavaScript, that is, Selenium Core, into it. Once that is done, all subsequent calls from your test script to the browser go through the RC server, and upon receiving these instructions, Selenium Core executes the actual commands as JavaScript in the browser. The test results are then reported back from the browser to the RC server and on to your test script. The same cycle repeats until the complete test case execution is over: every command you write in your test script makes a full round trip through the RC server to the browser, with the result collected back through the RC server to your script. So RC definitely had a lot of shortcomings, and what are they? The RC server needs to be installed before running any test scripts, as we just saw: an additional setup, since it acts as a mediator between your Selenese commands and the browser. The architecture of RC is complicated, because of that intermediate RC server required to communicate with the browser. Execution of commands takes long; it is slower, and we know why: every command makes a full trip from the test script to the RC server to the core engine to the browser and back along the same route, which makes overall test execution very slow. Lastly, the APIs offered by RC are redundant and confusing. RC does have a good number of APIs, but it is less object-oriented, so they are redundant and confusing. For example, if you want to type into a text box, whether and when to use the typeKeys command versus the plain type command is always confusing. Another example is the mouse commands: click and mouseDown both provide almost similar functionality. That is the kind of confusion it used to create for developers. Hence Selenium RC got deprecated and is no longer available in the latest Selenium versions; it is obsolete now. To overcome these shortfalls, WebDriver was introduced. While RC was introduced in 2004, WebDriver was introduced by Simon Stewart in 2006. It is a cross-platform testing framework: WebDriver can run on any platform, say Linux, Windows, or macOS, and even on an Ubuntu machine you can run your Selenium scripts. It is a programming interface for running test cases, not an IDE. And how does it actually work? Test cases are created and executed using web elements, or objects, located with object locators and driven with the WebDriver methods; when I do a demo you will understand what these WebDriver methods are and how we locate web elements on a web page. It does not require a core engine like RC, so it is pretty fast, because WebDriver interacts directly with the browser without that intermediate server RC had. What happens is that each browser has its own driver on which the application runs, and this driver is responsible for making the browser understand the commands you pass from the script, say clicking a button or entering some text. Through your script you tell it which browser you want to work with, say Chrome.
The Chrome driver is then responsible for interpreting your instructions and executing them on the web application launched in the Chrome browser. Like RC, WebDriver supports multiple programming languages in which you can write your test scripts. Another advantage of WebDriver is that it supports various frameworks, like TestNG, JUnit, and NUnit, as well as reporting tools; when we talk about the limitations of WebDriver, you will appreciate how this support for frameworks and tools helps make Selenium a complete automation solution for web applications. Let's look at the architecture of WebDriver at a high level. What is in WebDriver? It consists of four major components. First, we have the client libraries, also called language bindings. Since Selenium supports multiple languages and you are free to use any supported language for your automation scripts, these libraries are made available on the Selenium website, from where you download them and write your scripts accordingly. Let's go and see where we download them from. If I go to my browser, to seleniumhq.org: if you are working with Selenium, this website is your bible; anything and everything you need to know about Selenium, you come here and use the tabs on this site. Right now we are looking at the language bindings, so I go to the download tab, and if you scroll down you will see a section called Selenium client and WebDriver language bindings, with a download link for each supported language. So, for example, if you are working with Java, you need to download the Java language binding. Let's go back to the presentation. That is where your language bindings are available. Next, Selenium provides lots of APIs for us to interact with the browser, and when we do the demo I will show you some of them; these are essentially REST APIs, and everything we do through the script happens through REST calls. Then we have the JSON wire protocol. What is JSON? JavaScript Object Notation, nothing but a standard for exchanging data over the web. For example, say you want to launch a web application through your script: Selenium creates a JSON payload and posts the request to the browser driver. And then we have the browser drivers themselves, and as I said, there is a specific driver for each browser.
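To give you a feel for it, here is a hedged illustration of roughly what the client library posts to the browser driver when your script looks up an element by its ID; the session handling and the exact endpoint details are managed for you by the language bindings, so you never write this by hand:

```
POST /session/<session-id>/element
{
  "using": "id",
  "value": "twotabsearchtextbox"
}
```

The driver answers with a JSON response containing a reference to the element, which the binding then wraps up as a WebElement object for you.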
As you know, every tool has its limitations, and so does Selenium, so let's look at what those limitations are and whether there are workarounds. First, Selenium cannot test mobile applications without a framework like Appium. Selenium is for automating web applications; it cannot handle mobile applications, which are a little different and need their own automation tooling. What Selenium does provide is support for integrating with Appium, which is a mobile application automation tool, and using Appium together with Selenium you can still achieve mobile application automation. When do you usually need this? When your application under test is also supported on mobile devices and you want a mechanism to run the same test cases on desktop browsers as well as mobile browsers; this is how you achieve it. The next limitation: when we talked about the components of Selenium, I said that with WebDriver alone you can achieve only sequential execution. In a real-time scenario we cannot just live with that; we need a mechanism to run our test cases in parallel, on multiple machines as well as on multiple browsers. Though this is a limitation of WebDriver itself, Selenium offers something called Grid, which helps us achieve it, and we will shortly see what Selenium Grid is all about. Also, if you want more detail on working with Grid and installing it, do check out the Simplilearn video on Selenium Grid. The third limitation is limited reporting capability. Selenium WebDriver can create only basic reports, but what we definitely need is more, so it supports integrating tools like TestNG, ReportNG, and even Extent Reports, with which you can generate beautiful reports. Powerful, isn't it? There are other challenges with Selenium too: it is not very good with image testing, since it was designed for web application automation, but there are other tools that can be used alongside it, like AutoIt and Sikuli. If you look at all of this, Selenium still provides a complete solution for your automation needs; that is the beauty of Selenium, and that is why it is the most popular automation tool today. Okay, let's do a quick comparison between Selenium RC and WebDriver. RC has a very complex architecture, and we know why: the additional RC server. Whereas, due to direct interaction with the browser, the WebDriver architecture is pretty simple. Execution speed: slower in RC, much faster in WebDriver, because in WebDriver we have eliminated the whole RC server layer and established direct communication with the browser through the browser drivers. RC requires the RC server to interact with browsers, as we just discussed, whereas WebDriver interacts with the browser directly. RC, as we noted among its limitations, has a lot of redundant APIs that kept developers guessing which API to use for which functionality; WebDriver offers pretty clean APIs to work with. And RC offered no support for headless browsers, whereas WebDriver does.
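As a small illustration of that last point, here is a minimal sketch, assuming the Selenium 3.x Java bindings and a geckodriver sitting in a drivers folder, of launching Firefox headlessly with WebDriver, something RC never offered:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;

public class HeadlessDemo {
    public static void main(String[] args) {
        // Path is illustrative; point it at your own geckodriver download
        System.setProperty("webdriver.gecko.driver", "drivers/geckodriver");
        FirefoxOptions options = new FirefoxOptions();
        options.setHeadless(true); // run without a visible browser window
        WebDriver driver = new FirefoxDriver(options);
        driver.get("https://www.simplilearn.com/");
        System.out.println(driver.getTitle());
        driver.quit();
    }
}
```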
Let's see WebDriver in action now. For the demo we will use this use case: navigate to the official Simplilearn website, type selenium in the search bar, click search, and click on the Selenium 3.0 training. Basically, we are searching for the Selenium 3.0 training on the Simplilearn website. First let's do the steps manually, and then we will write the automation script. Let me go to my browser and launch the Simplilearn website. My use case says I need to search for Selenium and click the search button; once I do that, it gives me a complete list of all the Selenium trainings available with Simplilearn, and what I am interested in is the Selenium 3.0 training here. Once I find it on the web page, I need to click on it. Those are all the steps we are going to perform in this use case. Now, for writing the test cases I will be using an IDE, Eclipse. I have already installed Eclipse, and I have also set up Selenium in this instance of Eclipse: if you look at the referenced libraries folder here, you will see all the JARs required for Selenium to work. The other prerequisite for Selenium is the driver files. Every browser you want to work with has its own driver file for executing your Selenium scripts, and since for this demo I will be working with the Firefox browser, I need the driver file for Firefox. The driver for Firefox is the gecko driver, which I have already downloaded and placed in a folder called drivers. Where did I download it from? Let's go and see. If I go back to my browser, on the seleniumhq.org website you go to the download tab, and when you scroll down you will see third-party drivers, bindings and plugins. There you see the list of all browsers supported by Selenium, and against each browser a link to its driver files. Since we are using the gecko driver, this is the link to follow, and depending on which operating system you are working on, you download that particular file. I am working on a Mac, so this is the file I am using; if you are a Windows user, you download the zip file and unzip it. Once you unzip it, you get a file called geckodriver for Firefox, or chromedriver for the Chrome browser, and then you just create a directory called drivers under your project and place the driver files there. So these are the two prerequisites for Selenium: importing the JAR files, like this, and downloading the drivers and keeping them in a folder you can reference. Okay, now we will go ahead and create a class. I already have a package created in this project, so I will use it and create a new Java class; let's call it SearchTraining. I will use a public static void main, and I click finish. Let's remove the auto-generated lines, as we do not need them. Now, the first statement you need, before you write the rest of the code, is to declare your driver variable using the WebDriver class, so I will say WebDriver driver. You will see that the IDE flags an error, which means it is asking you to import the libraries WebDriver needs: simply go ahead and import WebDriver from org.openqa.selenium, the package we need. Now we have a driver of the class WebDriver, and after this I am going to create three methods. The first method will launch the Firefox browser; then I will write a simple method for searching the Selenium training and clicking on it, which is the actual use case; and the third method will just close the browser I opened. From public static void main I will call these methods one after the other. So let's write the first method, launching the Firefox browser. I'll say public void, and since there is no return type needed, let's call it launchBrowser. For launching any browser I need two statements. The first one is a System.setProperty call; let's write that first, and then I'll explain what it does.
I'll say System.setProperty, which accepts a key and a value pair. What is my key here? It is webdriver.gecko.driver, and I need to provide a value, which is nothing but the path to the gecko driver. We know the gecko driver I am going to use is right here in the same project, under the drivers folder, and that is the path I provide: I simply say drivers/geckodriver. Let me close this statement. Now, since I am a Mac user, my gecko driver executable is just named geckodriver; if you are a Windows user running your Selenium scripts on a Windows machine, you need to provide the complete file name including .exe, because the driver executable on Windows is geckodriver.exe. Just make sure the path you mention in System.setProperty is correct. The next thing I need is: driver = new FirefoxDriver(). This command creates an instance of the Firefox browser. It is flagging an error again, because it is asking me to import the package where the FirefoxDriver class lives, so we do that. These two lines are responsible for launching the Firefox browser. So that is done; what is my next step in the use case? I need to launch the Simplilearn website. For that we have a command called driver.get: whatever URL you give it in double quotes as an argument, it launches that website, and for us that is the Simplilearn site. As a best practice, instead of typing out the URL, I go to my browser, open the URL I want to test, and simply copy it, then come back to Eclipse and paste it; this ensures I make no mistakes in the URL. Done, our first method is ready: it launches the browser, which is Firefox, and then opens the Simplilearn website. Now the next method. In it, I need to enter the search string to look for the Selenium training on this website, and for that we need to do a few things. Let's go to the website again; let me relaunch it, close these windows, and go to the homepage. This is my homepage, and as you saw when I did the manual test, I entered the text here. Since I now have to script this, I first need to identify what this element is, so I right-click on it and say inspect element. Let's see what attribute this element has that I can use to find it. I see that there is an ID present, so I am simply going to use the ID: I copy it from here and go back to Eclipse. Let's write the method first: public void, and for the method name, say searchTraining, or just search. In it I use the command driver.findElement, with By.id as my locating technique, and in double quotes I paste the ID I copied from the website. And then, what am I going to do with this element? I need to send it the text I want to search for, which is selenium.
So I'll say sendKeys, and whatever text I want to send goes in double quotes: selenium. That is done; I have entered the text. After entering the text I need to click on the search button, so first I have to find out what that button is. Let's inspect the search button. If you look at it, other than the tag, which is span, and the class name, I do not have anything to work with, so I can either use the class name or write an XPath. Since in this demo we have already used the ID locating technique, I will go ahead and use an XPath here. To construct the XPath, I copy this class first, and since I already have ChroPath installed on my Firefox, I will use ChroPath to test the XPath. I write a double slash; let's see, that element has a span tag, so I use span, then @class equals, and I paste the class name. Let's see if it can identify the element: yes, it can, so I will use this XPath in my code. Back in Eclipse I say driver.findElement(By.xpath(...)), pasting the XPath I just built in ChroPath, and the action I need here is click. Done. So I have reached the stage where I have entered selenium and clicked the search button. Once that happens, I know the expected result: I should find this particular link, Selenium 3.0 Training, and I should be able to click on it. For that I again need to inspect it, so let's inspect the Selenium 3.0 element. What attributes does it have? This element has an h2 tag, a class name, and some other attributes. I would again like to use an XPath here, but this time I will make use of the text() function so I can search for this exact text. I copy the text and go to ChroPath; the tag is h2, so I write h2, and then text() equals the text I copied. I missed the s at the end there, so let me add it. Let's first test whether it identifies the element: yes, it does; you can see the blue dotted line showing which element was identified. I copy this XPath and go back to my IDE, Eclipse. Here I again say driver.findElement(By.xpath(...)), paste the XPath we just built, and perform a click operation. Done. Technically we have now covered all the steps of the use case and written the commands for them. Now let's add one more thing: after coming to this page, after finding the training, let's print the title of the page. What is the title? If you hover your mouse over the tab, it says online and classroom training for professional certification courses, Simplilearn. So after all these operations I will print the page title to our console. For that, let's do a sysout: System.out.println, and in it I add the text "The page title is" appended with driver.getTitle(), which is the command we use to fetch the page title. Done. Now, what is the last method I need to add? Just one to close the browser.
All right, so let me add that method: public void closeBrowser, and it contains the single command I need, driver.quit(). Then I need to call all these methods from public static void main, so let me use my class name: I create an object, obj = new SearchTraining(), and using this object I first call launchBrowser, then search, then closeBrowser. Done. Technically our script is ready with all the functionality we wanted from our use case. Now there are a few other tweaks needed, and I'll tell you why. After we click on search, if you observed the website, it took a little while before it listed all the Selenium trainings, and when doing this manually you naturally wait for the Selenium 3.0 training to appear before you click on it. You need to tell your script to do the same: to wait a while until the Selenium 3.0 training appears on the web page. There are multiple ways to do that in a script; it is part of what we call synchronization, where we use implicit and explicit kinds of waits. Since this is a demo, I am going to use the Thread.sleep command and give an explicit wait of, say, 3 seconds; use this mainly for demo purposes. Thread.sleep requires us to handle an exception, so I click add throws declaration for InterruptedException, and I have to do the same in the main method, so let's do that and complete it. By doing this I am ensuring that before we click on the Selenium 3.0 training, the script waits long enough for the web page to show the link. That is one thing. Also, since you will be watching this demo as a video recording, the script will run very fast and you might miss seeing how it sends the keys and clicks the search button, so to let us watch it properly I will add another explicit wait, purely for demo purposes: after entering the keys, a simple Thread.sleep of about 3 seconds should be good enough so we can see exactly how it works in the browser. Now our complete script is ready, so I save it and we simply run it: right-click, run as Java application. It asks me to select and save; I have saved the script, so let's observe how it runs. The simplilearn.com website is launched, the selenium text has been entered in the search box, it has clicked on search, and it did everything we wanted it to do. Since we are closing the browser, you cannot see whether the Selenium 3.0 training was selected, but I have told it to fetch the title after all these operations complete, and as you can see, the whole sequence finished and the page title was printed.
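Putting it all together, here is roughly what the finished script looks like, as a sketch of what we built in this demo. Note that the element locators (the search box ID and the two XPaths) were read off the live page during the demo, so the values below are placeholders you would replace with whatever you find through inspect element:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SearchTraining {

    WebDriver driver;

    public void launchBrowser() {
        // Path to the gecko driver; on Windows this would end in .exe
        System.setProperty("webdriver.gecko.driver", "drivers/geckodriver");
        driver = new FirefoxDriver();
        driver.get("https://www.simplilearn.com/");
    }

    public void search() throws InterruptedException {
        // Locator values are illustrative; inspect the live page for the real ones
        driver.findElement(By.id("search-field")).sendKeys("selenium");
        Thread.sleep(3000); // demo-only pause so the typing is visible
        driver.findElement(By.xpath("//span[@class='search-button']")).click();
        Thread.sleep(3000); // crude wait for the search results to load
        driver.findElement(By.xpath("//h2[text()='Selenium 3.0 Training']")).click();
        System.out.println("The page title is " + driver.getTitle());
    }

    public void closeBrowser() {
        driver.quit();
    }

    public static void main(String[] args) throws InterruptedException {
        SearchTraining obj = new SearchTraining();
        obj.launchBrowser();
        obj.search();
        obj.closeBrowser();
    }
}
```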
Okay, so now, since we could not see whether it clicked on the Selenium 3.0 training or not, I will just comment out the closeBrowser call; we will not close the browser, so it remains open and we get to see whether it really found the training link. Let me close this Firefox window, close all tabs, and re-execute the script: run as Java application, save the file. Simplilearn.com is launched, the search text is entered, now it clicks the search button, yes, we've got the search results, it should click on Selenium 3.0 training, and yes, it successfully clicks on it. It is not going to close the browser now, because we commented out that line, but it did print the title for us. So this is a simple way of using Selenium scripts. Now, Selenium Grid. Grid is used to run multiple test scripts on multiple machines at the same time. With WebDriver you can only do sequential execution, but in a real-time environment you always need to run test cases in a distributed environment, and that is where Selenium Grid comes into the picture. Grid was conceptualized and developed by Patrick Lightbody, with the main objective of minimizing test execution time, and how? By running your tests in parallel. The design is such that commands are distributed across the multiple machines where you want to run tests, and they all execute simultaneously. What do you achieve with this methodology? Parallel execution, of course, across different browsers and operating systems. Grid is pretty flexible and can integrate with many tools: say you want a reporting tool that pulls the reports from all the machines running your test cases and presents them in a good-looking format, you have the option to integrate such a tool. So how does Grid work? Grid has a hub-and-node concept, which is what enables parallel execution. Let's take an example. Say your application supports all browsers and most operating systems, like in this picture, where one machine is Windows, one is a Mac, and another is, say, Linux, and your requirement is to run the tests on all supported browsers and operating systems, as depicted. First you configure a master machine, also called the hub, by running something called the Selenium standalone server, which can be downloaded from the seleniumhq.org website. Using that server you create the hub configuration, and then you create nodes specific to your machine requirements; the nodes are created using the same standalone Selenium server, with a node configuration. I'll show you where the Selenium server can be downloaded: if we go back to the seleniumhq.org website, you can see right at the top it says Selenium standalone server.
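As a quick sketch, assuming the Selenium 3.x standalone server JAR (the exact file name varies with the version you download), bringing up a grid is essentially one command on the hub machine and one on each node:

```
# on the hub machine
java -jar selenium-server-standalone-3.141.59.jar -role hub

# on each node machine, registering back to the hub
java -jar selenium-server-standalone-3.141.59.jar -role node -hub http://<hub-ip>:4444/grid/register
```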
Welcome everyone to another demo, in which we are going to see how exactly we can install Docker on the Windows platform, specifically Windows 10. Now, Docker is available for most operating systems; it supports both Unix and Windows platforms. On Linux we can do the installation through various commands, but in the case of Windows you have to download an exe file, a particular installer, from the Docker Hub website. You can simply Google it, and you will get a link from which you can download the package. So let's go to Chrome and search for the Windows installer; you will get a link from Docker Hub, where you can download the stable version or the edge version, whichever you wish. Here we have Docker Desktop for Windows: you can go for stable or edge, and there is a comparison of the differences between the two versions. The edge version gets releases every month, while the stable version gets releases every quarter, so they are not making many changes to stable compared to edge. You just have to double-click on the installer, and that takes you through the installation. So let's get started: click on Get stable version, and the installer starts downloading; it is around 300 MB, so that is the size of the installer. Once the installer is downloaded, you go ahead and double-click it, and from the GUI itself you proceed through a few steps. We will wait 10 to 20 seconds more for the download to finish, then double-click it and the installation will proceed. One more thing: there is a big difference in installer size; on Unix the installer is smaller, but on Windows a GUI is involved and a lot of binaries are included, which is why the size is so large. It is available for free, that's for sure, and it requires Windows 10 Professional or Enterprise, 64-bit. If you are on an earlier operating system, like Windows 7, there is the older product called Docker Toolbox; they used to call it Docker Toolbox, and with the new Windows 10 support they are now calling it Docker Desktop. Another couple of seconds and the download will be done, and then we can proceed with the installation. Let's check the download progress: click on downloads, and the download is still going on, so we'll wait a little. It's almost done, so I'll click on it; you can also go to the downloads directory and double-click it there, but you can start the installation by clicking here as well. It asks for approval, yes or no, which you have to provide. Once that is done, a desktop GUI component opens and the installation proceeds. It asks whether you want to add a shortcut to the desktop, so I am going to click OK.
Now it unpacks all the files Docker needs to install successfully. It takes some time, because it is doing a lot of work here, so just wait until the installer completes; once it is done, you can open your command line and start working with Docker. It is taking some time to extract the files. Now it asks us to close and restart: once that is done you can proceed, open the command line, and run any Docker command, and the response will tell you whether Docker is installed. You can see here that Docker is indeed installed: if you run docker version you get the client version; when you restart the machine, the Docker server will also start, and this particular error message will go away. Right now the Docker daemon is not up and running, because the installation requires a restart, and when you close this and go for the restart, the machine restarts. So that is how we go about a Docker installation on Windows. Now let's begin with the next demo: we will be installing Docker on an Ubuntu system. This is my system; I just open the terminal. The first thing you can do is remove any Docker installation you may already have on your system, if you want to start from scratch. The command for that is sudo apt-get remove docker docker-engine docker.io; enter your password, and Docker is removed. Now we start from scratch and install Docker once again; before that, I'll just clear my screen. Before I install Docker, let me ensure all the software on my system is in its latest state: sudo apt-get update. Great, that's done. Next we actually install Docker: type in sudo apt-get install docker. As you can see, an error has occurred; sometimes, depending on the environment of the machine you are working in, this particular command does not work, in which case there is another route you can take. Just type docker, and the shell itself will suggest the command you can use to install it. As it says here, sudo apt install docker.io is the command we need to execute to install Docker, and after that we will execute sudo snap install docker. So, sudo apt install docker.io first, and this installs Docker; since this is the installation of the entire docker.io package, it takes some time. Great, Docker is installed. The next thing, as I mentioned earlier, is to install the dependency packages: the command for that is sudo snap install docker, which installs the snap package with the other dependencies Docker needs; enter your password. With that we have completed the Docker installation, but we will go through a few more stages to test that the installation was done right. Before we move on to testing, let's check the version we installed: the command is docker version, and as you can see, Docker version 17.12.1 has been installed.
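So, collected in one place, the sequence we just ran on this Ubuntu machine was the following; package names can vary a little across Ubuntu releases, so treat this as a sketch of the path we took rather than the one canonical install method:

```
sudo apt-get remove docker docker-engine docker.io   # start from scratch
sudo apt-get update                                  # bring package lists up to date
sudo apt install docker.io                           # install Docker itself
sudo snap install docker                             # snap package with the extra dependencies
docker version                                       # confirm the installed version
```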
Next, we pull an image from Docker Hub: docker run hello-world. Hello-world is a Docker image present on Docker Hub, which is basically a repository you can find online, and with this command the hello-world image is pulled onto your system. Let's see whether it is actually present: the command to check is sudo docker images, and as you can see, the hello-world repository is present on our system, so the image has been successfully pulled, which means our Docker is working. Now we try another command: sudo docker ps -a. This displays all the containers you have created so far. As you can see, there are three hello-world containers listed, all in the exited state. I ran this demo previously too, which is why the two hello-worlds created two minutes ago are also displayed; the one created a minute ago is the one we just ran for this demo. As you have probably noticed, all these hello-world containers are in their exited state. When you give docker ps the -a option, where -a stands for all, it displays all containers, whether exited or running. If you want to see only the containers in the running state, you simply execute sudo docker ps, and as you can see, no container is visible here, because none of them are in a running state.
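If you want to actually see something under sudo docker ps, you could start a container that keeps running, say a detached nginx web server; nginx here is just an arbitrary long-running image chosen for illustration:

```
sudo docker run -d nginx   # -d runs the container detached, in the background
sudo docker ps             # now lists the running nginx container
```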
In this presentation we're going to go through a number of key things. We're going to compare Docker versus traditional virtual machines, what the differences are, and why you'd want to choose Docker over a virtual environment. We'll go through the advantages of working with Docker, and the structure and how you would build out a Docker environment, and during that we'll dig through the components and the advanced components within Docker. At the end of the presentation we'll go through some basic commands and then show you how those commands are used in a live demo. So with all that said, let's get started.

So let's first compare Docker with a traditional virtual machine. Here we have the architecture, on the left and right, of a traditional virtual machine versus a Docker environment, and there are some differences you'll probably see immediately. One is that the virtual environment has a hypervisor layer, whereas the Docker environment has a Docker engine layer; and in addition to that, there are extra layers within the virtual machine. Each of these starts compounding and creating very significant differences between a Docker environment and a virtual machine environment. With a virtual machine the memory usage is very high, whereas with a Docker environment the memory usage is very low. If we look at performance, when you start building out more than one virtual machine on a server the performance starts degrading, whereas with Docker the performance stays really good; this is largely due to the lightweight architecture used to construct the Docker containers themselves. If we look at portability, virtual machines are just terrible for portability: they're still dependent on the host operating system, and a lot of problems happen when you use virtual machines for portability. In contrast, Docker was designed for portability. You can build solutions in a Docker container environment and have the guarantee that the solution will work as you built it, no matter where it's hosted. Finally, boot-up time: the boot-up time for a virtual machine is fairly slow in comparison to the boot-up time for a Docker environment, which is almost instantaneous.

Let's look at these in a little more detail. One of the other challenges you have with a virtual machine is that if you have unused memory within the environment, you cannot reallocate it. If you set up an environment that has 9 GB of memory allocated but 6 GB of it is free, you can't do anything with that free memory, because the whole 9 GB has been allocated to that virtual machine. In contrast, with Docker, if you have 9 GB and 6 GB becomes free, that free memory can be reallocated and reused across the other containers used within that Docker environment. Another challenge is that running multiple virtual machines in a single environment can lead to instability and performance issues, whereas Docker is designed to run multiple containers in the same environment on a single hosted Docker engine. The portability issue with a virtual machine is that the software can work on one machine, but when you move that VM to another machine suddenly some of the software won't work, because some dependencies haven't been inherited correctly; Docker, on the other hand, is designed specifically to run across multiple environments and to be deployed very easily across systems. And again, the boot-up time for a VM takes minutes, in contrast to the milliseconds it takes for a Docker container to boot up.

So let's dig into what Docker actually is and what allows for these great performance improvements over a traditional VM environment. Docker is an OS-level virtualization software platform that allows IT organizations to easily create, deploy, and run applications as what are called Docker containers, with all of the dependencies inside the container. The container itself is really just a very lightweight package that has all the instructions and dependencies, such as frameworks, libraries, binaries, and so on, and that container can then be moved from environment to environment very easily. If we look at our DevOps life cycle, the place where Docker really shines is deployment, because when you're at the point of deploying your solution, you want to be able to guarantee that the code that has been tested will actually work in the production environment. In addition to that, what we often find is that when you're building the code and testing the code, having a container running the solution at those stages is also a really good plus, because the people building the code and testing the code are able to validate their work in the same environment that will be used for production. So you can use Docker in multiple stages within your DevOps cycle, but it becomes really valuable in the deployment stage.

So let's look at some of the key advantages that you have with Docker. Some of the things we've already covered: you can do rapid deployment, and you can do it really fast. The environment itself is highly portable and was designed with that in mind. The efficiencies you'll see will allow you to run multiple Docker containers in a single environment, as compared to more traditional VM environments.
The configuration itself can be scripted through a language called YAML, which allows you to write out and describe the Docker environment that you want to create; this in turn allows you to scale your environment very quickly. But with all of these advantages, probably the one that is most critical to the type of work we're doing today is security. You have to ensure that the environment you're running is highly secure as well as highly scalable, and I'm very pleased to say that Docker takes security very seriously; you'll see it as one of the key tenets of the architecture of the system you're implementing.

So let's look at how Docker actually works within your environment. There is what's called a Docker engine. The Docker engine is comprised of two key elements: you have a server and a client, and the communication between the two is via a REST API. The server, as you can imagine, has the instructions that are communicated out to the client and instructs the client on what to do. On older systems you can take advantage of the Docker Toolbox, which allows you to control the Docker Engine, Docker Machine, Docker Compose, and Kitematic.

So let's now go into what the actual root components of Docker are. There are four components that we're going to go through: we have the Docker client and server, we have Docker images, we have the Docker registry, and we have the Docker container. We're going to step through each of these one by one. So let's look at the Docker client and server first. The Docker client and server is a command-line-driven solution, where you would use Terminal on your Mac, or the command line on your PC or Linux system, to issue commands to the Docker daemon. The communication between the Docker client and the Docker host is via that REST API. So you can issue commands such as docker pull, which sends an instruction to the daemon, which then performs the work of pulling in the correct components, such as an image from the registry, for the Docker client. The Docker daemon itself is a service which performs all sorts of operational work, and as you'd imagine, the daemon is constantly listening across the REST API to see if it needs to perform any specific requests. If you want to trigger and start the whole process, you use the docker command against your Docker daemon, and that will start everything up. And then you have a Docker host, which actually runs the Docker daemon and the registry itself.

So now let's look into the actual structure of a Docker image. A Docker image is a template which contains the instructions for the Docker container, and that template is written in a file called a Dockerfile, whose instruction syntax is very easy to learn. The Docker image is built from that Dockerfile and then hosted in the Docker registry. The image is really comprised of several key layers. You start with your base layer, which will typically have your base image, in this instance your base operating system, such as Ubuntu; and then you have layers of dependencies above that. These comprise the instructions, in a read-only file, that become your Dockerfile.
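To make that concrete, here is a minimal sketch of a Dockerfile along the lines of the layers we're about to walk through; the application file app.py is a hypothetical example of ours, not something from this course:

    # Base layer: start from an Ubuntu base image
    FROM ubuntu:20.04
    # Add files from the build context into the image
    COPY . /app
    # Run commands to install dependencies inside the image
    RUN apt-get update && apt-get install -y python3
    # Command executed when a container starts from this image
    CMD ["python3", "/app/app.py"]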
So let's go through and look at what one of those sets of instructions would look like. Here we have four layers of instructions: we have FROM, an instruction that adds files, RUN, and then a command. So what does that actually look like in our layers? To break this down: the FROM creates a layer which is based on Ubuntu; then we're adding in files from the Docker repository onto that base layer; then we say, OK, what are the RUN commands, so we can build the container within the environment; and then we have a command line that executes something within that container, and in this instance the command is to run Python. One of the things we will see is that as we set up multiple containers, each new container adds a new writable layer on top of the image within the Docker environment. Each container is completely separate from the other containers within your Docker environment, so you're able to have your own separate read-write layer within each container. What's interesting is that if you delete a layer, the layers above it will also get deleted. So what happens when you pull in a layer but something has changed in the core image? What's interesting then is that the main image itself cannot be modified. Once you've copied the image you can modify your copy locally, but you can never modify the actual base image itself.

So here are some call-outs for the components within a Docker image. The base layers are in read-only format. The layers can be combined in a union file system to create a single image. The union file system saves disk space by avoiding duplication of files, and it allows a file system to appear as writable without actually modifying the underlying files; this is known as copy-on-write. Because the base layers themselves are read-only, the Docker environment uses this copy-on-write strategy within the images and the containers to work around that restriction, and what this allows you to do is share files for better efficiency across your entire container environment. The copy-on-write strategy makes Docker super efficient: you keep reducing the amount of disk space you're using and the amount of performance you're taking from the server. And that's really a key element of Docker: this constant ability to keep improving the efficiency within the system itself.
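If you want to see these read-only layers for yourself, Docker can show the layer history of any local image. A minimal sketch, assuming you have pulled the ubuntu image:

    sudo docker pull ubuntu
    sudo docker history ubuntu   # each row is one layer of the image, newest first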
All right, so let's go on to item number three, which is the Docker registry. The Docker registry is the place where you host and distribute the different types of images that you have created or want to be used within your environment. A repository itself is just a collection of Docker images, and those images, built from the instructions you've written, are very easily stored and shared. You can associate specific name tags with the Docker images themselves, so it's easy for people to find and share an image within the Docker registry. One of the things you'll see as we go through the demo is us using the tag name; you'll see how it's an alphanumeric identifier and how we use it to create the actual container. To start off with how you would manage a registry, you can use the publicly accessible Docker Hub registry, which is available to anybody, but you can also create your own registry for your own internal use. The registry you create internally can have both public and private images, depending on how you'd want to structure your environment. The commands you use to interact with the registry are push and pull. A push sends a new image that you've created from your local machine to the remote registry, and a pull retrieves an image that has been created and shared. So again: the pull command retrieves a Docker image from the Docker registry and makes it very easy for people to share images consistently across teams, and the push command allows you to take a new image that you've created and push it to the registry, whether it's Docker Hub or your own private registry, so it can be shared across your teams. One key "did you know" about the Docker registry: deleting a repository is not a reversible action. If you delete a repository, it's gone.

So let's go into the final component here, which is the Docker container itself. The Docker container is an executable package of an application and its dependencies bundled together, so it gets all the instructions for the solution that you're looking to run. It's really lightweight, because of how the container is structured, and the container itself is then inherently also extremely portable. What's really good about running a container is that it runs completely in isolation: you're able to share it very easily from group to group, and you're guaranteed that even when you run the container somewhere else, it's not going to be impacted by any host peculiarities or unique setups, as it would be in a VM or a non-containerized environment. The memory that you have in a Docker environment can be shared across multiple containers, which is really useful. Typically, when you have VMs, you have a defined amount of memory for each VM environment, and the challenge you run into is that you can't share that memory; with Docker, you can easily share the memory of a single environment across multiple containers. A container is built using Docker images, and the command to run those images is the run command. So let's go through the basic structure of how you would run a Docker image: you go into a terminal window and you write docker run redis, and it will run a container from the redis image. If you don't have the redis image locally installed, Docker will pull it from the registry; then the new redis container will be available within your environment, so you can start using it.

So let's look at why containers are so lightweight. They're so lightweight because they've been able to do away with some of the additional layers that you have in virtualization with VMs, and the biggest ones are the hypervisor and the need to run a guest operating system on top of a host operating system. Those are two big elements, and if you can get rid of those, you're doing great.
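Here's a minimal sketch of the redis example just described, with a couple of follow-up checks added; the container name my-redis is simply a name we've chosen for illustration:

    sudo docker run -d --name my-redis redis      # pulls the image if absent, starts the container detached
    sudo docker ps                                # my-redis should show as running
    sudo docker exec -it my-redis redis-cli ping  # should reply: PONG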
So let's look at some of the more advanced concepts within the Docker environment. We're going to look at two advanced components: one is Docker Compose, and the second is Docker Swarm. Let's look at Docker Compose first. Docker Compose is really designed for running multiple containers as a single service, and it does this by running each container in isolation but allowing the containers to interact with each other. As was stated earlier, you write the Compose environment using YAML as the language for the files that you create. So where would you use something like Docker Compose? An example would be if you're running an Apache server with a MySQL database and you need to create additional containers to run additional services, without the need to start each one separately. This is where you would write a Docker Compose file to help bring up and balance out that environment (a sketch of such a file follows at the end of this section).

So let's now look at Docker Swarm. Docker Swarm is a service that allows you to control multiple Docker environments from a single platform. Within your Docker Swarm, we're treating each node as a Docker daemon, and we have an API interacting with each of those nodes. There are two types of node that you're going to get comfortable working with: one is the manager node, and the second is the worker node. As you'd expect, the manager node is the one sending out the instructions to all of the worker nodes, but there is a two-way communication happening: the manager node manages the instructions, and it also listens to and receives updates from the worker nodes. So if anything happens within this environment, the manager node can react and adjust the architecture of the worker nodes so it's always in sync. Really great for large-scale environments.

So finally, let's go through some of the basic commands you would use within Docker; once we've gone through these, we'll show you a demo of how you'd actually use them as well. Probably the first command is to install Docker: if you have yum, you just do yum install docker, and it will install Docker onto your computer. To start the Docker daemon: systemctl start docker. The command to remove a Docker image is docker rmi followed by the image ID; note that's not the image name, it's the alphanumeric ID. The command to download a new image is docker pull followed by the name of the image you want; by default you'll be pulling from the default Docker registry, which your Docker daemon connects to in order to download the image. The command to run an image is docker run followed by the image ID. If we wanted to pull a specific version from Docker Hub, we would do docker pull with the image name, a colon, and its tag. To build an image from a Dockerfile, you do docker build -t followed by the image name, a colon, and a tag. To shut down a container: docker stop followed by the container ID. And to get shell access to a running container: docker exec -it followed by the container ID and bash.
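Before the demo, here's the minimal sketch of the kind of Docker Compose file mentioned above for an Apache web server alongside a MySQL database; the service names, port mapping, and password are illustrative assumptions of ours, not values from this course:

    version: "3"
    services:
      web:
        image: httpd:2.4        # Apache web server
        ports:
          - "8080:80"           # host port 8080 maps to container port 80
        depends_on:
          - db
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: example   # required by the official mysql image

Running docker-compose up (or docker compose up on newer versions) from the directory containing this file would start both containers together.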
So we've gone through all the different commands; now let's see how they look in practice, and we're going to go ahead and do a demo. Welcome to this demo, where we're going to put together all of the different commands we outlined in the presentation for Docker. First, we just list all of the Docker images that we have: we do sudo docker images, enter our password, and this lists out the images that we've created already, and we have three images there. So let's go ahead and pull a Docker image. To do that we type sudo docker, and actually we don't want images here, we want pull, and then the name of the image we want to pull, which is going to be mysql; by default this is going to use the latest mysql image. So it's now pulling this image. It's going to take a few minutes depending on your internet connection speed, since it's a fairly large file that has to be downloaded, so we'll just wait for that. We see the other layers have completed; just wait for this last file to download. Almost there. Once that's done, we'll run the image and create a new container using the image that we just downloaded, but we have to wait for the download to finish first. All right, the image has been pulled from Docker Hub, so let's go ahead and create the new Docker container. We do sudo docker run -p 0.0.0.0:80:80 mysql:latest, so we have the latest version, and we get our new token back, which shows our new Docker container has been created. Now let's see if the container is running. We do sudo docker ps to list all the running containers, and what we see is that the container is not listed there, which means it's probably not running. So let's list out all of the containers we have within Docker, so we can see whether it's listed at all: we do sudo docker ps -a, and yes, there we are. We can see that we do have our new mysql:latest container, and it was created 36 seconds ago, but it's in the exited state. So what we have to do is change that status so it's actually running. We do sudo docker run -it --name sl_sql mysql /bin/bash, which drops us into the container's root shell, and we exit out of that. And now, if we list the Docker containers, we should see an active container: sudo docker start followed by the container name, and there we are, it's now in the running state. Excellent. We can see that its status was updated 6 seconds ago. We're going to go ahead and clear the screen. OK, now what we want to do is remove the Docker container and image. So we check the list of images that we have: sudo docker images. Here are the images we have, and mysql is listed, and what we want to do is delete mysql. To do that we type sudo docker rmi -f mysql. We run that command, list the images again, and see that the image is now gone; it's been removed, which is exactly what we wanted to see. We can also delete an image by its image ID. However, if a container from that image is running and active, we have to kill that container first. So we select the image ID, copy it, and paste it in, and the delete won't run correctly because the image is in use by an active container. So what we have to do now is stop the container, and then we can remove it: we do sudo docker kill with the container name, and now we see the container has gone, and now we can delete the image by its image ID. Boom, easy peasy.
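One note on why the mysql container showed up as exited: the official mysql image needs a root password (or an equivalent startup option) supplied as an environment variable, otherwise the container stops immediately. A minimal sketch of a run that typically stays up; the container name and password are illustrative placeholders:

    sudo docker pull mysql:latest
    sudo docker run -d --name my-mysql \
        -e MYSQL_ROOT_PASSWORD=my-secret-pw \
        -p 3306:3306 mysql:latest             # 3306 is MySQL's port, rather than 80
    sudo docker ps                            # my-mysql should now be running
    sudo docker stop my-mysql && sudo docker rm my-mysql
    sudo docker rmi mysql:latest              # remove the image once no container uses it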
OK, let's go ahead on to the next exercise. Good, so here we are: we've listed all the containers and they're all gone. So let's go on to the final exercise, which is to run an Apache HTTP container. So let's go ahead and write that out: docker run --name white -p 8080:80 -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4, where white is the name we're giving this HTTP service. We run that and put in our password again, and what we see is that the port is already being used. So let's go and see which ports are running, and whether we can change the port, because it's either the port or the name that hasn't been put in correctly. So: sudo docker images, then sudo docker ps -a, and yep, there's port 80 in use there. We'll clear the screen. We're going to change the container name, because I think we actually had the wrong container name here; so let's go in and change that, paste it in, and voila, here we go, now it's working. And we'll just double-check and make sure everything's working correctly. To do that, we'll go into our web browser, and as soon as Firefox opens up we'll type in localhost:8080, which is the host port that we mapped, and there we are: a list of all the files, which shows that the server is up and running.
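Cleaned up and in one place, that exercise looks like the sketch below; the container name is arbitrary, and the -d flag (run detached) is an addition of ours rather than something from the demo:

    sudo docker run -d --name white -p 8080:80 \
        -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
    curl http://localhost:8080/   # should return Apache's listing of the mounted folder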
And today we'll be looking at the installation of the tool Chef. As you probably already know, Chef is a configuration management tool. That basically means Chef is a tool which can automate the entire process of configuring multiple systems. It also comes with a variety of other functionality, which you can check out in our videos on what Chef is and the Chef tutorial. So before we move on to the installation process, let me just briefly explain the architecture of Chef. Chef has three components. There's the workstation, which is where the system admin sits; he or she writes the configuration files here. Your second system is the server; the server is where all these configuration files are stored. And finally you have the client, or node, systems: these are the systems that require the configuration. You can have any number of clients, but for our demo, to keep it simple, we'll have just one client. Now, I'm using my Oracle VM VirtualBox Manager. As you can see here, I have two machines, the master and the node; both of these are CentOS 7 machines. As for the server, we'll be using it as a service on the cloud. So let's begin. Let's have a look at our master system first. This is my master system, with the terminal open over here; the terminal color here is a black background with green text. And this is my node system; the terminal here has a black background with white text, so you can differentiate between the two. We start at our master system. The first thing we need to do is download the Chef DK. So you can write wget, which is the command for downloading, and then go to your browser and just type Chef DK; take the first link. Here you have different versions of the Chef DK; depending on the operating system you're using, you need to select the appropriate one. I'm using the Red Hat Enterprise version, number seven, since I'm using CentOS 7, so this is my link for downloading the Chef DK. Just copy this link, go back to your terminal, and paste it here. So your Chef DK is being downloaded; this will take a while. Right after we download the Chef DK, our next step is to install it on our system. So our Chef DK is downloaded; now let's install it. And guys, this is the version of the Chef DK that you have downloaded, so make sure this is exactly what you type down here too. Great, our Chef DK is installed. So basically, our installation for the workstation is done right now, but just so you understand how the flow works, we'll also write a sample recipe on our workstation. Before we do that, let's first create a folder. My folder name is chef-repo, basically the Chef repository, and let's move into this folder. OK, so we're in. Next, as I mentioned earlier, all your recipes will live within a cookbook, so let's create a folder which will hold all our cookbooks, and let's move into this too. OK. Our next stage is to create the actual cookbook, within which we'll have our recipe. The command for creating the cookbook is chef generate cookbook sample, where sample is the name of my cookbook. So guys, please notice here: cookbooks is the directory that I created, which will hold all our cookbooks; cookbook is the keyword; and sample is the one cookbook that we are creating under our cookbooks folder. And our cookbook is being created. Great, that's done. Moving into our cookbook. OK, so when our cookbook sample was created, a hierarchical structure was automatically associated with it. Let's have a look at this hierarchical structure, to understand what our cookbook sample exactly is, before we move on. The command for looking at the hierarchical structure is tree. As you see here, within our cookbook we have a folder recipes, and under this there's the default.rb recipe. This is where we'll be creating our recipe; we'll just alter the content of default.rb. So let's move on to finally writing our recipe. We'll move into the recipes folder first, and now we'll open our recipe default.rb in gedit. The recipe for this particular demo is to install the httpd package on our client node, which is basically your Apache server, and we'll also be hosting a very simple web page. So let's begin. Recipes in Chef are written in Ruby; I'll explain the recipe in a moment. OK, so the first block is where you install httpd. The second block, for the service, is where you start and enable the httpd service on the client node; that's our first task. The second part is where we need to create our web page. This is the path where your web page will be stored; if you have written any HTML file previously, you know that this is the default path where our web pages are created. Yep, that's it. And this is the content that will be displayed on your web page if everything works right, and I'm pretty sure it will. So now we can save our recipe, and that's done; close your gedit.
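For reference, here is a minimal sketch of what a default.rb along these lines typically looks like; the exact web page content isn't spoken aloud in the video, so the string here is our own placeholder:

    # Install the Apache web server package
    package 'httpd' do
      action :install
    end

    # Enable httpd at boot and start the service
    service 'httpd' do
      action [:enable, :start]
    end

    # Create a simple web page in Apache's default document root
    file '/var/www/html/index.html' do
      content '<h1>Hello from Chef!</h1>'
    end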
So now that we have created the recipe, all our work at the workstation is completed. The next thing we do is move on to the server. As I mentioned earlier, we'll be using the server as a service on the cloud. So go to your browser, and here just type manage.chef.io. This is the homepage of your Chef server; click here to get started. We need to first create an account for using the Chef server. This is completely free; we just need to give our email ID and a few other details. It's in fact a lot like creating an account on Facebook or Instagram. Fill in all the details and check the terms-of-service box. The next thing you need to do is go back to your inbox and verify your email ID. So I have my inbox opened here on my Windows machine. You would have received a mail from Chef Software; just click on the link to verify it and create your password. And that's done. So let's continue this on our workstation machine. Type in your username and password. The first time you log in to your Chef server, you'll have a popup appear where you need to create a new organization, so create your organization. This organization is basically the name that will be associated with the collection of client machines. First thing you do: go to your Administration tab and download the starter kit. So guys, when you're doing this part, make sure that you're on your workstation, that is, you're opening your Chef server on the workstation, because you need this folder to be downloaded here. Save the file, and it gets downloaded. The Chef starter kit is the key to connecting your workstation with the server, and the server with the node. Basically, it has a tool called knife, which we'll come across later in our demo. This knife is what takes care of all the communication and the transferring of cookbooks between the three machines; in our case, the two machines, the workstation and the node, and the one server. So let's go back to our root directory. Our chef-starter zip file is within our Downloads folder. What we do first is move the zip file into our cookbooks folder, and then we'll unzip it there, because our cookbooks folder is the one that contains the recipe, and that is where we need the knife tool to be available, so we can send these recipes over to the server. We'll just check the contents of our cookbooks folder right now, to ensure that our chef-starter.zip file is within cookbooks. Yep, it's here. So the next thing we do is unzip this file. Great, that's unzipped, and this means that our workstation and our server are now linked. So we just need to use the knife command tool to upload the recipes, which we created on the workstation, onto the server. Before we execute this command, we need to move into our cookbooks directory; as you know, that is where we unzipped our Chef starter kit, so that is where our knife command is present too. And now let's execute the knife command: it's knife cookbook upload sample. As you probably recall, sample is the name of the cookbook that we created, and within sample we created our recipe, default.rb; so we're uploading the entire cookbook onto the server. Execute the command. Great, our cookbook's uploaded. Now let's check this on our server. Move to the browser where you opened your Chef server and go to Policy. So here you go: this is the cookbook we uploaded, sample, and since it's the first time we uploaded it, the version is 0.1.0, the first version. Now, what you would notice is that if you go to the Nodes tab, there are no nodes present. If you have no nodes, you basically have no machine to execute your cookbooks on, and the nodes are not seen right now because we have not configured them yet. So that's the next thing we need to do. All of this so far was done on your master machine; now we'll move on to the node machine. Before moving on, let's just check the IP of our node machine. So that's our IP; note this down somewhere. And now we move back to our workstation. As we already saw, we uploaded the sample cookbook. Next, we need to make sure that our server and node are able to communicate with each other; again, we use the knife tool for this. The command here is knife bootstrap, followed by the IP address of your node, which we just checked, since we'll be logging in there. We'll be using the node as the root user, and then we also need to specify our root password for the node, and we give a name to this node: this is the name by which we'll be identifying our node at the server. As you have probably noticed, we're using SSH here, which is a secure shell; it basically provides a channel of secure communication between two machines in an unsafe environment.
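Put together, the bootstrap invocation looks roughly like the sketch below. The IP and password placeholders stand in for values shown on screen but not spoken in the video, and the exact flag names vary by Chef version (older releases use --ssh-user and --ssh-password instead of -U and -P):

    knife bootstrap <NODE_IP> -U root -P '<ROOT_PASSWORD>' --node-name chef_node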
OK, so it's done. If your command has executed correctly, which in our case, as we can see, it has, our Chef server and our Chef node must be able to communicate with each other. If that's so, we should be able to send the cookbook, which we previously uploaded from our workstation onto the server, from the server to our node. To do that, before we move on to the node machine, we need to go back to our Chef server. Let's refresh this page, and as you see here, previously under the Nodes tab we did not have any node mentioned; now we do: chef_node, which is the name we wanted to identify our node by; it's a CentOS platform, and that's our IP. It's been active for 2 hours, that's the uptime; the last check-in on our node was a minute back; and that's pretty much it. So now we'll create a run list, and we'll add our sample cookbook to this run list. Just click on your node, and you'll see a small arrow here at the end. Click on that, edit the run list, and under available recipes we have our cookbook sample present. Drag and drop this to the current run list and accept it. OK, so now that we've updated our run list, our recipe is available to our node. What we need to do next is execute this at our node, so now we'll move on to the node machine. chef-client is the command to execute your run list on the node. While this recipe is executing, you can see what exactly is happening. Our recipe was to install the httpd package first, which is your Apache server: the first line, that's done, and it's up to date. The second line: the service is enabled. Third line: the service is started. And the fourth line is where your content is created for the web page at this very location. By the look of this, everything should work fine. So how do we check this? We can just go to our browser, and in the search bar just type localhost, and there you go: our httpd package, which is the Apache server, is installed, and our sample web page is also hosted. Congratulations on completing the Chef demo.

Today we'll dive into a tutorial on the configuration management tool Chef. If you look at the DevOps approach, or the DevOps life cycle, you will see that Chef falls under operations and deployment. So before we begin, let's have a brief look at all that you'll learn today. First we'll get to know why we should use Chef and what exactly Chef is. Two of the most common terms used with Chef are configuration management and infrastructure as code; we'll have a brief look at these. We'll also have a look at the components of Chef and the Chef architecture, quickly go through the various flavors of Chef, and finally we'll wrap it up with a demo: a demo on the installation of Apache on our nodes. So let's begin, guys. Why should we use Chef? Well, consider a large company. Now, this company caters to a large number of clients and provides any number of services or solutions. Of course, to get all of this done, they need a huge number of servers and a huge number of systems; basically, they will have a huge infrastructure. Now, this infrastructure needs to be continuously configured and maintained. In fact, when you're dealing with an infrastructure of that size, there's a good chance systems may be failing,
and in the long run, as your company expands, new systems may even get added. So what do you do? Well, you could say the company has the best system administrator out there, but all by himself, could he possibly take care of an infrastructure that size? No, he can't. And that's where Chef comes in, because Chef automates this entire process. So what does Chef provide? Chef provides continuous deployment. When you look at the market today, you see how products and their updates come out in a matter of days, so it's very important that a company is able to deploy the product the minute it's ready, so that once it's out, it's not already obsolete. Chef also provides increased system robustness. As we saw, Chef can automate the infrastructure, but in spite of this automation there's a good possibility that errors creep in. Chef can detect these bugs and remove them before they're deployed into the real environment. Not only this, Chef also adapts to the cloud. We all know how today the services, tools, and solutions all revolve around the cloud, and Chef plays along by making itself easily integrable with cloud platforms. So now that you know why to use Chef, let's look at what exactly Chef is. Chef is an open-source tool developed by Opscode. Of course, there are paid versions of Chef, such as Chef Enterprise, but other than that, most of it is freely accessible. Chef is written in Ruby and Erlang. If you have gone through any previous material on Chef, I'm sure you would have come across Ruby being related to Chef, but not Erlang. So here's why: Ruby and Erlang are both used to build Chef itself, but when it comes to actually writing code in Chef, it's just Ruby. And these are the codes that are deployed onto your multiple servers and perform the automatic configuration and maintenance, and this is why Chef is a configuration management tool. So I've used this term configuration management a couple of times; what exactly does it mean? Let's start with the definition. Configuration management is a collection of engineering practices that provides a systematic way to manage entities for efficient deployment. So let's break this down. Configuration management is basically a collection of practices. And what are these practices for? These practices are for managing your entities, the entities which are required for efficient deployment. So what are these entities? They are code, infrastructure, and people. Code is basically what the system administrators write for configuring your various systems. Infrastructure is the collection of your systems and your servers. And then, finally, you have the teams that take care of this infrastructure. So code needs to be updated whenever your infrastructure needs a new configuration, or some sort of update in the operating system or the software versions; as the requirements of the company change, the infrastructure's configuration needs to change; and finally, of course, the people need coordination. So if you have a team of system administrators, and say person A makes some change to the code, persons B, C, D, and so on need to be well aware, when the change is made, of why it was made, what the change was, and where exactly it was made. There are two types of configuration management. On our left we have push configuration: here, the server that holds the files with the instructions to configure your nodes pushes these files onto the nodes, so the complete control
lies with the server. On the right side we have pull configuration. In the case of pull configuration, the nodes poll the server to first check if any change in configuration is required; if there is, the nodes themselves pull these configuration files. Chef follows pull configuration, and how it does this we'll see further in our video. Another important term often used with Chef is infrastructure as code. So let's understand what this term means through a small story. Here's Tim. Tim's a system administrator at a large company. Now, he receives a task: he has to set up a server and install 20 software applications on it. So he begins. He sets up the server, but then it hits him: it would take him the entire night to install 20 software applications. Wouldn't things be much simpler if he just had code to do so? Well, of course, code does make things much simpler. Code has a number of advantages. It's easily modifiable: if today Tim is told we need MySQL installed on 20 systems, Tim simply writes code to do so; and if the very next day Tim is told we've changed our mind, we don't need MySQL, we'll just use Oracle, this does not bother Tim, because now he just opens the file, makes a few corrections in his code, and that should work just fine. Code is also testable: if Tim had to run 10 commands to do something, and at his 10th command he realized there was something not right with the very first command he wrote, well, that would be quite tiresome, wouldn't it? With code, however, you can test it even before running it, and all the bugs can be caught and corrected. Code is also deployable: it's easily deployable, and deployable multiple times. So now that we've seen the various advantages of having code, let's see what infrastructure as code exactly is. Here's the definition: infrastructure as code is a type of IT infrastructure where the operations team manages the code, rather than using a manual procedure. So infrastructure as code allows the operations team to maintain code which automatically performs various procedures, rather than having to do those procedures manually. With this approach, all your policies and your configurations are written as code. Let's now look at the various components of Chef. So our first component is the workstation. The workstation is the system where the system administrator sits; he or she creates the code for configuring your nodes. Now, these pieces of code, which in the case of Chef are written in Ruby, are called recipes, and you'll have multiple recipes; a collection of recipes is called a cookbook. Now, these cookbooks are only created at the workstation, but they need to be stored at the server. The knife is a command-line tool, basically a command that you will see us executing in one of our demos, that ships these cookbooks from the workstation over to the server. The second component is the server. The server is like the middleman: it lies between your workstation and your nodes, and this is where all your cookbooks are stored, because, as you saw previously, knife sends the cookbooks over from the workstation to the server. The server can be hosted locally, that's on your workstation itself, or it can be remote, so you can have your server at a different location; you can even have it on a cloud platform. And the final component: the node. Nodes are the systems that require the configuration, and in a Chef architecture you can have any number of nodes. Ohai is a service which is installed on your node, and it is
responsible for collecting all the information regarding the current state of the node. This information is then sent over to the server, to be compared against the configuration files and to check if any new configuration is required. chef-client is another such service on your node, and it is responsible for all the communication with the server; whenever the node has a demand for a recipe, the chef-client is responsible for communicating this demand to the server. Since you have a number of nodes in a Chef architecture, it's not necessary that each node is identical; of course, every node can have a different configuration. Let's now have a look at the Chef architecture. So here we have a workstation, one server machine, and two nodes; you can have any number of nodes. First things first: the system administrator must create a recipe. The recipes mentioned in our Chef architecture diagram are just dummy recipes; we'll look into actual working recipes later in our demo. So you have one recipe, two recipes, three recipes, and a collection of recipes forms a cookbook. And guys, if you look at the recipe in the source, you have simplylearn-3.rb; .rb is the extension for your Ruby files. So the cookbooks are only created at the workstation; they now need to be sent over to the server, where they are stored, and this is the task of knife. Knife is a command-line tool which is responsible for transferring all your cookbooks onto the server from the workstation. Here's the command for running your knife: knife upload simply-db, where simply-db is the name of the cookbook. We then move on to our node machines. At our nodes we run the Ohai service. The Ohai service will collect all the information regarding the current state of your nodes and hand it over to the chef-client. When you run the chef-client, this information is sent over to the server and tested against the cookbooks. So if there is any discrepancy between the current state of your nodes and the cookbook, that is, if one of the nodes does not match the configuration required, the cookbook is then fetched from the server and executed at the node. This sets the node to the right state. There are various flavors of Chef; we'll quickly go through these. First we have Chef Solo. With Chef Solo there's no separate server, so your cookbooks are located on the node itself. Now, this kind of configuration is used only when you have just a single node to take care of. The next flavor is hosted Chef. With hosted Chef you still have your workstation and your node, but your server is now used as a service on the cloud. This really makes things simple, because you don't have to set up a server yourself, and it still performs all the functions of a typical Chef setup. This is the configuration you will notice we'll be using in our demo. Then there's Chef client-server. With Chef client-server you have a workstation, you have a server, and you have a number of nodes. Now, this is the traditional Chef architecture; this is the one we have used for all the explanations previously. And finally we have private Chef. Private Chef is also known as enterprise Chef. In this case, your workstation, server, and nodes are all located within the enterprise infrastructure. This is the main difference between Chef client-server and private Chef: in the case of Chef client-server, these three machines could be dispersed. The enterprise version of Chef also provides the liberty to add extra layers of security and other features. And we reach the final part of our video, where we'll have the hands-on. So before we dive into our demo,
let me just quickly give you an introduction to it. We'll be using two virtual boxes, both CentOS 7. One will be used as a workstation, while the other will be a node; we are just using one node to keep things simple. The server will be used as a service on the cloud. Now, these are the steps we'll be performing during our demo. We'll first download and install the Chef DK on our workstation. We then make an empty cookbook file, and we'll write a recipe into it. We need to then set up the server; as I mentioned earlier, the server will be a service on the cloud, so you'll have to create a profile, but this will be completely free. We then link the workstation to the server, and we'll upload the recipe to the server. The nodes will then download the cookbooks from the server and configure themselves. So now that you have some idea about what we'll be doing, let's move on to the actual demo. We begin our demo. Here's my Oracle VM VirtualBox Manager. I have two machines here; I've already created my workstation and node, and both of these are CentOS 7 machines. Just for you to differentiate: this is my terminal, and for my workstation it's a black background with white text, while for my node it's a black background with green text. The first thing you do is go to your workstation box and open a web browser. Search for Chef DK installation and go to the first link, which is Chef's official page.

A very warm welcome to all our viewers. I'm Anjali from Simplilearn, and today I'll be showing you how you can install the configuration management tool Ansible. So let's have a brief look at why one would use Ansible and what exactly Ansible is. If you consider the case of an organization, it has a very large infrastructure, which means it has probably more than hundreds of systems, and giving one person, or even a small team of people, the responsibility of configuring all these systems makes their work really tough and repetitive, and as you know, manual work is always prone to errors. So Ansible is a tool which can automate the configuration of all these systems. With Ansible, a small team of system administrators can write simple code in YAML, and this code is deployed onto the hundreds and thousands of servers, configuring them to the desired states. So Ansible automates configuration management, that is, configuring your systems; it automates orchestration, which means it brings together a number of applications and decides the order in which they are executed; and it also automates deployment of applications. Now that we know what Ansible does, let's move on to the installation of Ansible. So here is my Oracle VM VirtualBox Manager. I'll be using two systems: there's the node system, which is basically my client system, and there's the server system, or the master system. So let's begin at our server system. This is my master system, guys. The first thing we do is download our Ansible tool. One thing we must remember with Ansible is that, unlike Chef or Puppet, Ansible is a push type of configuration management tool. What this means is that the entire control here lies with your master, or server, system: this is where you write your configuration files, and it is also responsible for pushing these configuration files onto your node, or client, system as and when required. Great, so the Ansible tool is installed. Now we need to open the Ansible hosts file, and there we'll specify the details of our node, or client, machine. So this is our Ansible hosts file. As you can see here, the entire file is commented out, but there's a certain syntax that you'd observe. For example, here we have a group name, webservers, under which we have IP addresses or certain hostnames. So this is how we'll be adding the details for our client system. First we need to give a group name; under this group, basically, we add all the clients which require a certain type of configuration. Since we are using just one node, we'll give only the details for that particular node. First we need to add the IP address of our client machine. So let's just go back to our client machine, and this here is the IP address: 192.168.2.11. Once you have typed in your IP address, give a space, and then we'll specify the user for our client machine. All communication between the server, or master, system and the client, or node, system takes place through SSH; SSH basically provides a secure channel for the transfer of information. Follow this up with your password; in my case, it's the root password. And that's it, we are done. So now we save this file and go back to our terminal.
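A minimal sketch of what that inventory entry can look like; the group name here is an assumption (it's spoken as "Ansible servers" in the video, but inventory group names can't contain spaces), the password is a placeholder, and the variable names shift slightly across Ansible versions (older releases use ansible_ssh_user and ansible_ssh_pass):

    [ansibleservers]
    192.168.2.11 ansible_user=root ansible_ssh_pass=<ROOT_PASSWORD>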
So now that our hosts file is written, the next thing we do is write our playbook. Playbook is the technical term for the configuration files that we write in Ansible. Now, playbooks are written in YAML, and YAML is extremely simple to both write and understand; it's in fact very close to English. So now we'll write our playbook. The playbook, or any code in YAML, first starts with three dashes; this indicates the beginning of your file. Next we need to give a name to our playbook: so, name, and I'm going to name my playbook "sample book". We next need to specify our host systems, which are basically the systems on which the configuration file, the playbook in our case, will be executed. We'll be executing this at the client machines mentioned under our Ansible servers group; we had just one client machine under it, but we'll still mention the group name. We next need to specify the user with which we'll be logging in to our client machine, which is root in my case, and "become: true" specifies that you need to become root to execute this playbook; becoming root is called privilege escalation. Next we need to specify our tasks: these are basically the actions that the playbook will be performing. So you would have noticed that everything so far is aligned, that is, name, hosts, remote_user, become, and tasks, because these are at one level; whatever comes under tasks will be shifted slightly towards the right. Although YAML is extremely simple to understand and read, it's a little tricky to write, because you need to be very careful about the indentation and the spacing. So my first task is to install httpd, which is basically the Apache server: the module here is yum, it will be installing the httpd package, and the latest state of it will be installed. That's our first task. Our second task is running our Apache service: so, name, run httpd, and the action, which is service, will be performed on httpd, hence the name httpd, and the state must be started. Now we come to our third task. Here we'll create a very simple web page that will be hosted. "Create content" is the name of our task, and the content that we create here will basically be copied to our node system at a particular file location that we'll provide. Our content will be "congrats", and then we'll provide the destination at which this file will be copied; this is the default location for all our HTML files. And that's it, we are done writing our playbook. Just save it and go back to your terminal.
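Here's a minimal sketch of the playbook just described; the group name and the destination path are assumptions based on standard defaults, since the video shows them on screen rather than spelling them out:

    ---
    - name: sample book
      hosts: ansibleservers
      remote_user: root
      become: true
      tasks:
        - name: install httpd
          yum:
            name: httpd
            state: latest
        - name: run httpd
          service:
            name: httpd
            state: started
        - name: create content
          copy:
            content: "congrats"
            dest: /var/www/html/index.html

Assuming the playbook is saved as sample.yml, the syntax check we run next would be ansible-playbook sample.yml --syntax-check, and the actual run would be ansible-playbook sample.yml.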
Before we execute the playbook, or push the playbook onto our node system, let's check the syntax of our playbook. The command for doing so is the --syntax-check invocation shown above, and if everything is fine with your playbook, the output will be just your playbook name. So our syntax is perfectly fine. Now we can push the playbook onto our node, or client, machine, and that's the syntax for doing so. Now, as your playbook is being sent over to the client machine, you can see that first the facts are gathered: that is, the current state of your client machine is first fetched, to check what needs to be changed and what is already present. So the first task is installing httpd. Our system already had httpd, so it says "ok", because this does not need to be changed. Our next task was running httpd; although our system had the Apache service, it was not running, so that is one thing that was changed. Next, there was no content available, so the content was also added. So two tasks were changed and four things were ok. Now, everything seems fine, and before you move any further, it is very important that you check this one summary line provided by Ansible: you have all kinds of information available here, regarding which tasks were executed, whether your client machine was reachable or unreachable, and so on. So now that everything's fine here, we can move on to our node system, and we'll just go to our browser. If our playbook has been executed here, what should happen is that the httpd service must be in the running state, and the web page that we created should be hosted. So let's just type localhost, and great, everything's working fine: our web page is displayed here. So we come to the end of our installation and configuration video for the configuration management tool Ansible. If you have any doubts, please post them in the comment section below, and we'll definitely get back to you as soon as possible.

Thanks, Anjali. Now we have Matthew and Anjali to take us through how to work with Ansible, one of the key tools that you would have within your DevOps environment. So the things that we're going to go through today: we're going to cover why you would want to use a product like Ansible; what Ansible really is and how it's of value to you and your organization; and the differences between Ansible and other similar products on the market, and what makes Ansible a compelling product. Then we're going to dig into the architecture of Ansible: we're going to look at how you would create a playbook, how you would manage the inventory of your server environments, and what the actual workings of Ansible are. As a little extra, we're also going to throw in Ansible Tower, one of the secret-sauce solutions that you can use to improve the speed and performance of how you create your Ansible environments. And finally, we're going to go through a use case by looking at Hootsuite, a social media management company, and how they use Ansible to really improve efficiency within their organization. So let's jump into this. The big question is: why Ansible? You have to think of Ansible as another tool that you have within your DevOps environment for helping manage servers, and this definitely falls on the operations side of the DevOps equation. So if we look here, we have a picture of Sam, and like yourselves, Sam is a system administrator. He is responsible for maintaining the infrastructure of all the different servers within his company. Some of the servers that he has to maintain could be web servers running Apache; they could be database servers running MySQL. And if
you only have a few servers, then that's fairly easy to maintain. I mean, if you have three web servers and two database servers, and let's face it, wouldn't we all love to have just one or two servers to manage, it would be really easy to maintain. The trick, however, is that as we start increasing the number of servers, and this is the reality of the environments we live and operate in, it becomes increasingly difficult to create a consistent setup of different infrastructure, such as web servers and databases, for the simple reason that we're all human: if we had to update and maintain all of those servers by hand, there's a good chance that we would not set up each server identically. Now, this is where Ansible really comes to the rescue and helps you become an efficient operations team. Ansible, like other system solutions such as Chef and Puppet, uses code that you write to describe the installation and setup of your servers, so you can repeat it and deploy those servers consistently into multiple areas. So now you don't have to have one person redoing and re-following setup procedures; you just write one script, and each script can be executed to produce a consistent environment. So we've gone through why you'd want to use Ansible; let's step through what Ansible really is. You know, this is all great, but how do we actually use these tools in our environment? Ansible is a tool that allows you to create and control three key areas that you would have within your operations environment. First of all, there's IT automation: you can write instructions that automate the IT setup that you would typically have done manually in the past. The second is configuration, and having consistent configuration: imagine setting up hundreds of Apache servers and being able to guarantee with precision that each of those Apache servers is set up identically. And then, finally, you want to be able to automate the deployment, so that as you scale up your server environment, you can just push out instructions that deploy the different servers automatically. The bottom line is that you want to speed up your operations team and make it more efficient. So let's talk a little bit about pull configuration and how it compares with what Ansible does. There are two different ways of setting up environments for server farms. One is to have a master server that holds all the instructions, and then, on each of the servers that connect to that master server, you have a piece of software known as a client installed, which communicates with the master server and periodically updates or changes the configuration of the slave server. This is known as a pull configuration. The alternative is a push configuration, and the push configuration is slightly different. As with a pull configuration, you have a master server where you put the instructions, but unlike the pull configuration, where you have a client installed on each of the servers, with a push configuration you have no client installed on the remote servers: you simply push out the configuration to those servers, forcing a restructure or a fresh, clean installation in that environment. So Ansible is of this second kind: it's a push configuration tool. And this contrasts with other popular products, like Chef and Puppet, which have a master-slave architecture, with a master server connecting to a client on a remote slave environment,
where you would then be pushing out the updates With Ansible you're pushing out the service and the structure of the server to remote hardware and you are just putting it onto the hardware irrespective of the structure that's out there And there are some significant advantages you get in that you're not having the extra overhead weight of a client installed on those remote servers constantly communicating back to the master environment So let's step through the architecture that you would have for an Ansible environment So when you're setting up an Ansible environment the first thing you want to do is have a local machine And the local machine is where you're going to have all of your instructions and really the power of the control that you'd be pushing out to the remote servers So the local machine is where you're going to be starting and doing all of your work Connected from the local machine are all the different nodes and you push out to them the different configurations that you would set up on the local machine The configurations are something you write in code within a module So you do this on your local machine for creating these modules and each of these modules actually consists of playbooks The local machine also has a second job and that job is to manage the inventory of the nodes that you have in your environment The local machine is able to connect to each of the different nodes that you would have in your hardware network through an SSH client so a secure shell client Let's dig into some of the different elements within that architecture And we're going to take a first look at the playbooks that you would write and create for the Ansible environments So the core of Ansible is the playbook This is where you create the instructions that you write to define the architecture of your hardware So the playbook is really just a set of instructions that configure the different nodes that you have And each of those sets of instructions is written in a language called YAML And this is a standard language used for configuring server environments Did you know that YAML actually stands for YAML Ain't Markup Language it's a recursive acronym that's just a little tidbit to hide behind your ear So let's have a look at what one of these playbooks looks like And here we have a sample YAML script that we've written So you start off your YAML script with three dashes and that indicates the start of a script And then the script itself actually consists of two distinct plays At the top we have play one and below that we have play two Within each of those plays we define which nodes we are targeting So here we have a web server in the top play and in the second play we have a database server that we're targeting And then within each of those server environments we have the specific tasks that we're looking to execute So let's step through some of these tasks We have an install Apache task we have a start Apache task and we have an install MySQL task And when we do that we actually execute a specific set of instructions And those instructions can include installing Apache and then setting the state of the Apache environment or starting the Apache environment and setting up and running the MySQL environment So this really isn't too complicated And that's the really good thing about working with YAML it's really designed to make it easy for you as an operations lead to configure the environments that you want to consistently create
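To make that concrete here is a minimal sketch of what a two-play playbook like the one just described might look like The yum and service modules are standard Ansible modules but the group names webservers and dbservers are placeholders that would have to match your own inventory which is exactly what we look at next

---
- name: Play 1 configure web servers
  hosts: webservers
  tasks:
    - name: Install Apache
      yum:
        name: httpd
        state: latest
    - name: Start Apache
      service:
        name: httpd
        state: started

- name: Play 2 configure database servers
  hosts: dbservers
  tasks:
    - name: Install MySQL
      yum:
        name: mariadb-server
        state: latest

The package name mariadb-server is an assumption since on CentOS 7 MySQL ships as MariaDB on another distribution the name would differ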
So let's take a step back though We have two hosts We have web server and database server Where do these names come from well this takes us into our next stage and the second part of working with Ansible which is the inventory management part of Ansible So the inventory part of Ansible is where we maintain the structure of our network environment So what we do here as part of the structure of creating different nodes is we've had to create two different nodes here We have a web server node and a database server node And under the web server node we actually have the names that point to specific machines within that environment So now when we actually write our scripts all we have to do is refer to either web server or database server and the different servers will have the instructions from the YAML script executed on them This makes it really easy for you to be able to just point to new servers without having to write out complex instructions So let's have a look at how Ansible actually works in the real world So the real world setup is that you'd have the Ansible software installed on a local machine and then it connects to different nodes within your network On the local machine you'll have first your playbook which is the set of instructions for how to set up the remote nodes and then to identify how you're going to connect to those nodes you'll have an inventory We use secure SSH connections to each of the servers So we are encrypting the communication to those servers We're able to grab some basic facts on each server so we understand how we can then push out the playbook to each server and configure that server remotely The end goal is to have an environment that is consistent So let's ask you a simple question What are the major advantages that Ansible has over Chef and Puppet We'd really like to hear your answers in the comments below Pop them in there and we'll get back to you and we really want to hear whether you feel that Ansible is a stronger product or maybe you think it's a weaker product as it compares to other similar products in the market Here's the bonus We're going to talk a little bit about Ansible Tower So Ansible Tower is an extra product that Red Hat created that really kind of puts the cherry on the top of the ice cream or is the icing on your cake Ansible by itself is a command line tool However Ansible Tower is a framework that was designed to access Ansible and through the Ansible Tower framework we now have an easy to use GUI This really makes it easy for non-developers to be able to create the environment that they want to manage in their DevOps plan without having to constantly work with the command prompt window So instead of opening up a terminal window or a command window and writing out complex instructions only in text you can now use drag and drop and mouse-click actions to create your appropriate playbooks inventories and pushes for your nodes All right so we've talked a lot about Ansible Let's take a look at a specific company that's using Ansible today And in this example we're going to look at Hootsuite Now Hootsuite if you've not already used their product and they have a great product Hootsuite is a social media management system They are able to help you with managing your pushes of social media content across all of the popular social media platforms They're able to provide the analytics They're able to provide the tools that marketing and sales teams can use to assess the sentiment of the messages that are being pushed out Really great tool and very popular
very popular But part of their popularity drove a specific problem straight to Hootsweet The challenge they had at Hootsweet is that they had to constantly go back and rebuild their server environment and they couldn't do this continuously and be consistent There was no standard documentation and they had to rely on your memory to be able to do this consistently Imagine how complex this could get as you're scaling up with a popular product that now has tens of thousands to hundreds of thousands of users This is where Ansible came in and really helped the folks over at Hootsweet Today the DevOps team at Hootsweet write out playbooks that have specific instructions that define the architecture and structure of their hardware nodes and environments and are able to do that as a standard product Instead of it being a problem in scaling up their environment they now are able to rebuild and create new servers in a matter of seconds The bottom line is Anible has been able to provide Hootswuite with IT automation consistent configuration and free up time from the operations team so that instead of managing servers they're able to provide additional new value to the company A very warm welcome to all our viewers I'm Anjali from SimplyLearn and today I'll be taking you through a tutorial on Anible So Ansible is currently the most trending and popular configuration management tool and it's used mostly under the DevOps approach So what will you be learning today you'll learn why you should use Ansible what exactly is Ansible the Ansible architecture how Ansible works the various benefits of Ansible and finally we'll have a demo on the installation of Apache or the HTTPD package on a client systems We'll also be hosting a very simple web page and during this demo I'll also show you how you can write a very simple playbook in YAML and your inventory file So let's begin Why should you use anible let's consider a scenario of an organization where SAM is a system administrator Sam is responsible for the company's infrastructure A company's infrastructure basically consists of all its systems This could include your web servers your database servers the various repositories and so on So as a system administrator Sam needs to ensure that all the systems are running the updated versions of the software Now when you consider a handful of systems this seems like a pretty simple task Sam can simply go from system to system and perform the configurations required But that is not the case with an organization is it an organization has a very large infrastructure It could have hundreds and thousands of systems So here is where Sam's work gets really difficult Not only does it get tougher Sam has to move from system to system performing the same task over and over again This makes Sam bored Not just that repeating the same task leaves no space for innovation And without any ideas or innovation how does the system grow and the worst of it all is manual labor is prone to errors So what does Sam do well here is where Ansible comes in use With Ansible Sam can write simple codes that are deployed onto all the systems and configure them to the correct states So now that we know why we should use Ansible let's look at what exactly is Ansible Anible is an IT engine that automates the following tasks So first we have orchestration Orchestration basically means bringing together of multiple applications and ensuring an order in which these are executed So for example if you consider a web page that you require to host this web 
page stores all the values that it takes from the user into a database So the first thing you must do is ensure that the system has a database manager and only then do you host your web page So this kind of an order is very crucial to ensure that things work right Next Ansible automates configuration management So configuration management simply means that all the systems are maintained at a consistent desired state Other tools that automate configuration management include Puppet and Chef And finally Ansible automates deployment Deployment simply means the deploying of applications onto your servers in different environments So if you have to deploy an application on 10 systems with different environments you don't have to do this manually anymore because Ansible automates it for you In fact Ansible can also ensure that these applications or the code are deployed at a certain time or after regular intervals Now that we know what exactly Ansible is let's look at Ansible's architecture Ansible has two main components You have the local machine and you have your node or the client machine So the local machine is where the system administrator sits He or she installs Ansible here And on the other end you have your node or the client systems So in the case of Ansible there's no supporting software installed here These are just the systems that need to be configured and they are completely controlled by the local machine At your local machine you also have a module A module is a collection of your configuration files And in the case of Ansible these configuration files are called playbooks Playbooks are written in YAML YAML stands for YAML Ain't Markup Language and it is honestly the easiest language to understand and learn since it's so close to English We also have the inventory The inventory is a file where all your nodes that require configuration are mentioned and based on the kind of configuration they require they're also grouped together So later in the demo we'll have a look at how the playbook and the inventory are written and that will probably make it clearer So of course the local machine needs to communicate with the client And how is this done This is done through SSH SSH is your secure shell which basically provides protected communication in an unprotected environment Okay So we saw the various components of Ansible Now how does Ansible exactly work You have your local machine on one end This is where you install Ansible If you've gone through any previous material on Ansible you would have come across the term agentless often being associated with this tool So this is what agentless means You're installing Ansible only on your local machine and there's no supporting software or plug-in being installed on your clients This means that you have no agent on the other end The local machine has complete control and hence the term agentless Another term that you would come across with Ansible is push configuration So since the local machine has complete control here it pushes the playbooks onto the nodes and thus it's called a push configuration tool Now the playbooks and the inventory are written at the local machine and the local machine connects with the nodes through the SSH client This step here is optional but it's always recommended to do so It's where the facts are collected So facts are basically the current state of the node Now all this is collected from the node and sent to the local machine So when the playbook is executed the tasks mentioned in the playbook are compared against the current state of the node and only the changes that are still required are made and once the playbooks are executed your nodes are configured to the desired states
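If you want to see that fact gathering for yourself Ansible's built-in ping and setup modules can be run ad hoc from the local machine The group name ansible_clients is the one this demo sets up in a moment

ansible ansible_clients -m ping     # confirms SSH connectivity to every node in the group
ansible ansible_clients -m setup    # dumps the gathered facts for each node as JSON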
So as I mentioned before Ansible is currently the most trending tool in the market under the configuration management umbrella So let's have a look at the various benefits of Ansible which give it this position Well Ansible is agentless It's efficient It's flexible simple idempotent and provides automated reporting How does it do all this Let's have a look at that Agentless as I already mentioned before you require no supporting software or plug-in installed on your node or the client system So the master has complete control and automatically this means that Ansible is more efficient because now we have more space on our client and node systems for other resources and we can get Ansible up and running real quick Ansible is also flexible So an infrastructure is prone to change very often and Ansible takes almost no time to adjust to these changes Ansible cannot get any simpler with your playbooks written in a language such as YAML which is as close to English as you can possibly get Idempotent basically means that if you have a playbook which needs to be run on any number of systems it would have the same effect on all of these systems without any side effects And finally we have automated reporting So in the case of Ansible your playbook has a number of tasks and all these tasks are named So whenever you run or execute your playbook it gives a report on which tasks ran successfully which failed which clients were not reachable and so on All this information is very crucial when you're dealing with a very large infrastructure And finally we reach the most exciting part of our tutorial the hands-on Before we move on to the actual hands-on let me just brief you through what exactly we'll be doing So I'll be hosting two virtual machines both running the CentOS 7 operating system One would be my local machine and the other my node or the client machine So on my local machine first I'll install Ansible We'll then write the inventory and the playbook and then simply deploy this playbook on the client machine The last thing we'll need to do is check whether the configurations that we mentioned in our playbook were made correctly So we'll now begin our demo This is my Oracle VirtualBox Here I have my master system which is the local machine and this is the client machine So let's have a look at these two machines This is my client machine the terminal is open right now So the client machine terminal has a black background with white text and the master machine terminal has a white background with black text just so you can differentiate between the two So we'll start at the master machine The first thing to do is we need to install Ansible So yum install ansible -y is the command to do so So this might take some time Yeah So Ansible's installed The next step we go to our host file So our host file here is basically the inventory It's where you'll specify all your nodes In our case we just have one node That's the path to your host file which by default is /etc/ansible/hosts As you'll see everything here is commented out So just type in the group for your client nodes So I'm going to name it ansible_clients And here we need to type the IP address of the client machine So my client machine's IP address is 192.168.2.127 So before you come to this it's advised that you check the IP address on your client machine The simple command for that is ifconfig Now once you type the IP address put a space and here we need to mention the username and the password for our client So I'll be logging in as the root user So this is the password and then the user which is root in my case That's it Now you can save this file Just clear the screen
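For reference the inventory entry just described would look something like this inside /etc/ansible/hosts The variable names ansible_user and ansible_ssh_pass are standard Ansible inventory variables while the password placeholder is just that a placeholder

[ansible_clients]
192.168.2.127 ansible_user=root ansible_ssh_pass=<your_root_password>

In a real setup you'd normally prefer SSH keys over a plain-text password sitting in the inventory file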
Next we move on to our playbook We need to write the playbook So the extension for our playbook is .yml which stands for YAML And as you can see here I have already written my playbook but I'll just explain to you how this is done So a YAML file always begins with three dashes This indicates the start of your YAML file Now the first thing is you need to give a name to the entire playbook So I have named it sample book Hosts is basically where this would be executed So as we saw earlier in our inventory I mentioned the clients group name as ansible_clients So we use the same name here The remote user is the user you'll be using at your client So in my case that's root and become true basically indicates that you need to escalate your privileges to root So that's called privilege escalation Now a playbook consists of tasks So we have here three tasks The first task I've named install HTTPD So what we are doing here is we are installing our HTTPD package which is basically the Apache server and we're installing the latest version of it Hence the state value is latest The next task is running HTTPD So for the service the name is httpd because that's the service we need to start running and the state is started Our next task is creating content So this is the part where we are creating our web page So copy because this is the file that will be created at the client The content will be Welcome and the destination of the file will be /var/www/html/index.html As you know this is like a default path that we use to store all our HTML files Now as you can see here there's quite a lot of indentation and when it comes to YAML although it's very simple to write and very easy to read the indentation is very crucial So the first dash here represents the highest level that is the name of the playbook and all the dashes under tasks are slightly shifted towards the right So if you have two dashes at the same indentation they basically mean that they are siblings So the priority would be the same So to ensure that all your tasks are coming under the tasks label make sure they are not directly under name So yeah that's pretty much it So when you write your YAML file the language is pretty simple very readable but indentation is absolutely necessary Make sure all your spaces are correctly placed We can now save this file
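Putting the narration together the playbook would look roughly like this The task names hosts group and the Welcome content come straight from the demo while the exact module arguments are my best reconstruction of what's on screen

---
- name: sample book
  hosts: ansible_clients
  remote_user: root
  become: true
  tasks:
    - name: install httpd
      yum:
        name: httpd
        state: latest
    - name: running httpd
      service:
        name: httpd
        state: started
    - name: creating content
      copy:
        content: "Welcome"
        dest: /var/www/html/index.html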
Next thing we need to check if the syntax of our YAML file is absolutely right because that's very crucial So the command to check the syntax of your YAML file is ansible-playbook followed by the name of your playbook and the syntax-check option So we have no syntax errors which is why the only output you receive is sample.yml which is the name of your playbook So our playbook is ready to be executed The command to execute the playbook is ansible-playbook and the name of your playbook So our playbook's executed As you can see here gathering facts that's where all the facts of the node that is the present state of the node are collected and sent to the local machine So it's basically to check whether the configuration changes that we are about to make are already made So they're not made We do not have the HTTPD package installed on our node So this is the first change that's made Also if it's not installed of course it's not running That's the second change that's made So it's put into the running state And a final task which is create content is in the okay state This means the content is already present on the client machine So I made it this way so that you can at least see the different states that are possible So over here we have ok equal to four So four things are all fine The facts were gathered two tasks were changed and one was already present So two changes are made zero clients are unreachable and zero tasks have failed So this is the documentation that I was referring to previously which Ansible provides automatically and it is very useful as you can see So as our next step we need to just check on our client machine if all the changes that we desired are made So let's move to our client So this is my client machine So to check this since we are installing the HTTPD package and hosting a web page the best way to do it is open your browser and type in localhost So there you go Your Apache server is installed and your web page is hosted Today I'll be showing you the installation procedure for the configuration management tool Puppet So what exactly is the use of Puppet If you consider the scenario of an organization which has a very large infrastructure it's required that all the systems and servers in this infrastructure are continuously maintained at a desired state This is where Puppet comes in Puppet automates this entire procedure thus reducing the manual work So before we move on to the demo let me tell you what the architecture of Puppet looks like So Puppet has two main components You have the puppet master and the puppet client The puppet master is where you write the configuration files and store them And the puppet clients are basically those client machines which require the configuration In the case of Puppet these configuration files that you write are called manifests So let's move on to the demo So here are my two machines The first is the server system which is basically your master where you'll write your configuration files and the other is the node or the client system So let's have a look at both of these machines This is my node system The terminal is open here and it has a black background with white text and as for my server or the master machine it has a black background with green text So we start at the server machine The first thing that we need to do is we need to remove the firewall So in a lot of cases there are chances that the firewall stops the connection between your server and your node Now since I'm doing a demo and I'm just showing you how Puppet works between two virtual boxes I can safely remove the firewall without any worries But when you're implementing Puppet in an organization or across a number of systems on a local network be careful about the consequences of doing so So our firewall is disabled Next thing that we do is we'll change the host name of our server system Now while using the Puppet tool it's always advisable that you name your server's host as puppet This is because the Puppet tool identifies the host name puppet by default as the host name for the master or the server system Let's just check if the host name is changed successfully Yep So that's done So as you see localhost is still appearing as the host name So just close your terminal and start it again And you see here the host name has been changed to puppet Okay So the next thing that we have to do is we install our Puppet Labs repository package Make sure your system is connected to the net Right So Puppet Labs is installed Next we need to install the puppet server service on our server system
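Pulling those master-side steps together they look roughly like this on CentOS 7 The repository URL and collection version are assumptions since they aren't shown in the narration

systemctl stop firewalld        # demo shortcut only think twice before doing this on a real network
systemctl disable firewalld
hostnamectl set-hostname puppet
rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm   # assumed repo package
yum install -y puppetserver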
Now that our puppet server service is installed we need to move into the system configuration for our puppet server So the path for that is /etc/sysconfig/puppetserver So this is the configuration file for the puppet server Now if you come down to this line here this is the line which allocates memory for your puppet server Now you must remember that Puppet is a very resource-intensive tool So just to ensure that we do not encounter any errors because of running out of memory we will reduce these sizes So as of now we have 2GB allocated by default and we'll change this to 512 MB Now in a lot of cases it may work without doing so but just to be on the safer side we make this change Save it and go back to your terminal We are now ready to start our puppet server service The first time you start your puppet service it may take a while Next we need to enable this And if your puppet server service is started and enabled successfully this is the output that you would get In case you're still not sure you can always check the status at any point of time And as you see here it's active So everything's fine as of now Next thing we do is we'll move on to our agent system that is our client or node system So here too we'll have to install Puppet Labs But before we do so we need to make a small change in our hosts file So let's open the hosts file Yeah So this is our hosts file We need to add a single line here which specifies our puppet master So first we put our puppet master's IP address followed by the host name and then we'll add a DNS alias for our puppet server So let's just go back to the server system and find out its IP address and that's my IP address for the server system Now the host name of our puppet server and a DNS alias for it Save this file and return to your terminal So now we can download our Puppet Labs package on the node system It is the exact same procedure that you followed for downloading Puppet Labs on your server system So in my node system Puppet Labs is already downloaded So the next thing is we need to install the puppet agent service So Puppet is a pull type of configuration tool What this means is that all the configuration files that you'll be writing on your server are pulled by the node system as and when it requires them So this is the core functionality of the agent service which is installed on your client node or agent system So my puppet agent service is installed So next I'll just check if my puppet server is reachable from this node system So 8140 is the port number that the puppet server must be listening on and it's connected to puppet So that guarantees that your server is reachable from the node system So now that everything's configured right we can start our agent service So guys you would have noticed that the command for starting the agent service is a little more complex than the command for starting your server service This is because when you start your agent service you're not just starting a service but you're also creating a certificate This is the certificate that will be sent over to your master system Now at the master system there's something called the certificate authority This gives the master the right to sign a certificate if it agrees to share information with that particular node So let's execute this command which does both the job of sending the certificate and starting your agent service So as you can see here our service started successfully It's in a running state Now we'll move to our master system or the server system
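Before we do here's a quick recap of the pieces we just touched in one place The master's IP the agent package name and the agent start command are assumptions the command that generates the certificate in particular varies between Puppet versions

# on the master in /etc/sysconfig/puppetserver shrink the JVM heap
JAVA_ARGS="-Xms512m -Xmx512m"

# on the agent point the name puppet at the master in /etc/hosts
192.168.2.X   puppet puppetmaster    # replace with your master's real IP

# then on the agent
yum install -y puppet-agent
telnet puppet 8140                                            # reachability check against the master's port
puppet resource service puppet ensure=running enable=true    # starts the agent which sends the cert request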
So first we'll have a look at the certificates that we received The certificates should be in this location So as you can see here this is the certificate that we just received from our agent service So this here within quotes is the name of our certificate So next when we are signing the certificate this is the name we'll provide to specify that this is the particular certificate that we want to sign The minute we sign a certificate the node that sent the certificate gets a notification that the master has accepted its request So after this we can begin sharing our manifest files Now here's the command for signing this certificate Okay so our certificate is signed which means the node's request is approved and the minute the certificate is signed the request is removed from this list So now if we execute the same command as we did to check the list of all the certificates we will not find the certificate anymore Let's just check that So as you see now there are no more requests pending because we have accepted all the requests If you want to have a look at all the certificates whether signed or unsigned you can use the same command with the addition of the all option and all the certificates received so far will be listed As you can see here the plus sign indicates that the certificate request has already been accepted So now that our certificate is signed the next thing we do is we'll create a sample manifest file So this is the path that you create your manifest files in Our file name is sample.pp and our file is created So right now we have no content in this file We'll just check if the agent is receiving it and once that's confirmed we'll add some content to the file So let's move to our agent system Now this is the command to execute at the agent system to pull your configuration files So our catalog is applied in 0.02 seconds So now that the communication between our agent system and our master system is working perfectly fine let's add some content to the placeholder file that we created on our master system So now we open the same file in an editor Okay So we are going to write code for installing the HTTPD package on our node system which is basically your Apache service You start with the keyword node and then within quotes insert the host name of your node system So my node system's host name is client Then comes the package you wish to install which in our case is httpd and the action to be performed and that's it a very small and simple piece of code Save this file Now let's go back to our node system and pull this second version of the same configuration file So every time you execute this command as we did previously what happens is that the agent service checks on your master system whether there's any new configuration file added or any change made to a previous configuration file If so then the catalog is applied once again So now our catalog's applied in 1.55 seconds So now to check if our catalog served its purpose let's just open our browser Just type localhost here And as you can see if your HTTPD package has been successfully installed the Apache testing page will appear here
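Tying the whole exchange together here's roughly what those commands and the manifest look like The cert subcommands are from the classic puppet CLI newer releases moved this under puppetserver ca and the manifest path is an assumption based on typical layouts

# on the master
puppet cert list            # show pending certificate requests
puppet cert sign client     # sign the agent's certificate
puppet cert list --all      # signed certificates show up with a plus sign

# the manifest for example /etc/puppetlabs/code/environments/production/manifests/sample.pp
node 'client' {
  package { 'httpd':
    ensure => installed,
  }
}

# on the agent pull and apply the catalog
puppet agent --test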
So in this session what we're going to do is we're going to cover what and why you would use Puppet What are the different elements and components of Puppet and how does it actually work and then we'll look at the companies that are adopting Puppet and what advantages they have gained by having Puppet within their organization And finally we'll wrap things up by reviewing how you can actually write a manifest in Puppet So let's get started So why Puppet So here is a scenario that as an administrator you may already be familiar with You as an administrator have multiple servers that you have to work with and manage So what happens when a server goes down It's not a problem you can jump onto that server and you can fix it But what if the scenario changes and you have multiple servers going down So here is where Puppet shows its strength With Puppet all you have to do is write a simple script which can be written with Ruby that writes out and deploys to the servers your settings for each of those servers The code gets pushed out to the servers that are having problems and then you can choose to either roll those servers back to their previous working states or set them to a new state and do all of this in a matter of seconds and it doesn't matter how large your server environment is You can reach all of these servers Your environment is secure You're able to deploy your software and you're able to do this all through infrastructure as code which is the advanced DevOps model for building out solutions So let's dig deeper into what Puppet actually is So Puppet is a configuration management tool much like similar tools such as Chef that you may already be familiar with It ensures that all your systems are configured to a desired and predictable state Puppet can also be used as a deployment tool for software automatically You can deploy your software to all of your systems or to specific systems And this is all done with code This means you can test the environment and you can have a guarantee that the environment you want is written and deployed accurately So let's go through the components of Puppet So here we have a breakdown of the Puppet environment and at the top we have the main server environment and then below that we have the client environment that would be installed on each of the servers running within your network So if we look at the top part of the screen we have here our puppet master which stores and contains our main configuration files and those are comprised of manifests which are the actual code for configuring the clients We have templates that combine our code together to render a final document And you have files that would be deployed as content that could potentially be downloaded by the clients Wrapping this all together is a module of manifests templates and files You would apply a certificate authority to sign the actual documents so that the clients actually know that they're receiving the appropriate and authorized modules Outside of the master server where you'd create your manifests templates and files you would have the Puppet client which is a piece of software that is used to configure a specific machine There are two parts to the client one is the agent that constantly interacts with the master server to ensure that the certificates are being updated appropriately and then you have the factor which collects the current state of the client and communicates it back through the agent So let's step through the workings of Puppet So the Puppet environment is a master slave architecture The clients themselves are distributed across your network and they are constantly communicating back to a master server environment where you have your Puppet modules The client agent sends a certificate with the ID of that server back to the master and the master will then sign that certificate and send it back to the client And this authentication allows for a secure
and verifiable communication between client and master The factor then collects the state of the client and sends that to the master Based on the facts sent back the master then compiles the manifests into catalogs and those catalogs are sent back to the client and the agent on the client will then run the catalog A report is generated by the client that describes any changes that have been made and is sent back to the master with the goal here of ensuring that the master has a full understanding of the hardware and software running in your network This process is repeated at regular intervals ensuring all client systems are up to date So let's have a look at companies that are using Puppet today There are a number of companies that have adopted Puppet as a way to manage their infrastructure So companies that are using Puppet today include Spotify Google and AT&T So why are these companies choosing to use Puppet as their main configuration management tool The answer can be seen if we look at a specific company Staples So Staples chose to take Puppet as their configuration management tool and use it within their own private cloud The results were dramatic The amount of time that the IT organization was able to save in deploying and managing their infrastructure through using Puppet enabled them to open up time to experiment with other new projects and assignments A real tangible benefit to the company So let's look at how you write a manifest in Puppet So manifests are designed for writing out in code how you would configure a specific node in your server environment The manifests are compiled into catalogs which are then executed on the client Each of the manifests is written in Puppet's Ruby-based language with a .pp extension Now if we step through the five key steps for writing a manifest they are one create your manifest and that is written by the system administrator Two compile your manifest and it's compiled into a catalog Three deploy the catalog is then deployed onto the clients Four execute the catalogs are run on the client by the agent And then five end clients are configured to a specific and desired state If we actually look into how a manifest is written it's written with a very common syntax If you've done any work with Ruby or really configuration of systems in the past this may look very familiar to you So if we break out the work that we have here you start off with a package file or service as your resource type and then you give it a name and then you look at the features that need to be set such as an IP address Then you actually have a command written such as present or start The manifest can contain multiple resource types If we continue to write our manifest in Puppet the default keyword applies a manifest to all clients So an example would be to create a file path that creates a folder called sample in a main folder called etc The specified content is written into a file that is then placed into that folder and then we're going to say we want to trigger an Apache service and ensure that that Apache service is installed on a node So we write the manifest and we deploy it to a client machine On that client machine a new folder will be created with a file in that folder and an Apache server will be installed You can do this to any machine and you'll have exactly the same results on those machines
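Here's a hedged sketch of the kind of manifest that example describes The default keyword resource types and names follow the narration but the file content and attribute values are assumptions

node default {
  file { '/etc/sample':
    ensure => directory,
  }
  file { '/etc/sample/sample.txt':
    ensure  => file,
    content => 'Hello from Puppet',   # assumed content
    require => File['/etc/sample'],   # create the folder before the file
  }
  package { 'httpd':
    ensure => installed,
  }
  service { 'httpd':
    ensure  => running,
    require => Package['httpd'],      # install before starting
  }
}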
We're going to decide which is better for your operations environment Is it Chef Puppet Ansible or Salt Stack All four are going to go head-to-head So let's go through the scenario of why you'd want to use these tools So let's meet Tim He's our system administrator and Tim is a happy camper working on all of the systems in his network But what happens if a system fails if there's a fire and a server goes down Well Tim knows exactly what to do He can fix that fire really easily The problems become really difficult for Tim however if multiple servers start failing particularly when you have large and expanding networks So this is why Tim really needs to have a configuration management tool and we need to now decide what would be the best tool for him because configuration management tools can help make Tim look like a superstar All he has to do is write the right code that allows him to push out the instructions on how to set up each of the servers quickly effectively and at scale All right let's go through the tools and see which ones we can use The tools that we're going to go through are Chef Puppet Ansible and Salt Stack And we have videos on most of these software and services that you can go and view to get an overview or a deep dive into how those products work So let's go and get to know our contestants So our first contestant is Chef And Chef is a tool that allows you to configure very large environments It allows you to scale very effectively across your entire ecosystem and infrastructure Chef is by default open-source And one thing that you'll find is a consistent theme for the tools that we recommend on Simplilearn is to use open-source code The code itself is actually written in Ruby and Erlang and it's really designed for heterogeneous infrastructures that are looking for a mature solution The way that Chef works is that you write recipes that are collected into cookbooks And those cookbooks are the definition of how you would set up a node And a node is a selection of servers that you have configured in a specific way So for instance you may have Apache Linux servers running or you may have a MySQL server running or you may have a Python server running And Chef is able to communicate back and forth between the nodes to understand what nodes are being impacted and need to have instructions sent out to them to correct that impact You can also send instructions from the server to the nodes to make a significant update or a minor update So there's great communication going back and forth If we look at the pros and cons the pros for Chef are that there is a significant following for Chef and that has resulted in a very large collection of recipes that allow you to quickly stand up environments There's no need for you to have to learn complex recipes The first thing you should do is go out and find the recipes that are already available It integrates with Git really well and provides really strong version control Some of the cons though are really around the learning curve it takes to go from a beginner user of Chef to being an expert There is a considerable amount of learning that has to take place and it's compounded by having to learn Ruby as the programming language and the main server itself doesn't really have a whole lot of control It's really dependent on the communication throughout the whole network All right let's look at our second contender Puppet And Puppet is actually in many ways very similar to Chef There are some differences but again Puppet is designed to be able to support very large heterogeneous organizations It is also built
with Ruby and uses a DSL for writing manifests So there are some strong similarities here to Chef As with Chef there is a master slave infrastructure with Puppet and you have a master server that has the manifests that you put together into a single catalog and those catalogs are then pushed out to the clients over an SSL connection Some of the pros with Puppet are that as with Chef there is a really strong community around Puppet and there's just a great amount of information and support that you can get right out of the gate It has a very well-developed reporting mechanism that makes it easier for you as an administrator to understand your infrastructure One of the cons is that you have to really be good at learning Ruby Again as with Chef the more advanced tasks really need those Ruby skills And as with Chef the server also doesn't have much control So let's look at our third contender here Ansible And so Ansible is slightly different The way that Ansible works is that it just pushes out the instructions to the server environment There isn't a client server or master slave environment where Ansible would be communicating backwards and forwards with its infrastructure It is merely going to push the instructions out The good news is that the instructions are written in YAML and YAML stands for YAML Ain't Markup Language YAML is actually pretty easy to learn If you know XML you're going to pick up YAML really easily Ansible works very well in environments where the focus is getting servers up and running really fast It's very very responsive and can allow you to move quickly to get your infrastructure up very fast And we're talking seconds and minutes here Really really quick So again the way that Ansible works is that you put together a playbook and an inventory The playbook then goes against the inventory of servers and the instructions from that playbook are pushed out to those servers So some of the pros that we have for Ansible we don't need to have an agent installed on the remote nodes and servers which makes the configuration easier YAML is really easy to learn so you can get up to speed and become very proficient with YAML quickly The actual performance once you have your infrastructure up and running is less than other tools that we have on our list Now I do have to add a proviso This is a relative less It's still very fast It's going to be a lot faster than individuals manually standing up servers but it's just not as fast as some of the other tools that we have on this list And YAML itself as a language while it's easy to learn is not as powerful as Ruby Ruby will allow you to do things at an advanced level that you can't do easily with YAML So let's look at our final contender here Salt Stack So Salt Stack is a CLI based tool which means that you will have to get your command line tools out or your terminal window out to manage the entire environment via Salt Stack The instructions themselves are based on Python but you can actually write them in YAML or a DSL which is really convenient And as a product it's really designed for environments that want to scale quickly and be very resilient The way that Salt Stack works is that you have a master environment that pushes out the instructions to agents it calls minions across your network as a side note Salt's grains are the per-machine facts those minions report not the machines themselves And so let's step through some of the pros and cons
that we have here with Salt Stack So Salt Stack is very easy to use once it's up and running It has a really good reporting mechanism that makes your job as an operator in your DevOps environment much much easier The actual setup though is a little bit tougher than some of the other tools and it's getting easier with the newer releases but it's just a little bit tougher And related to that is that Salt Stack is fairly late in the game when it comes to actually having a graphical user interface for creating and managing your environment Other tools such as Ansible have actually had a UI environment for quite some time All right so we've gone through all four tools Let's see how they all stack up next to each other So let the race begin Let's start with the first stage architecture So the architecture for most of our environments is a server client environment That's the case for Chef Puppet and Salt Stack So very similar architecture there The one exception is Ansible which is a client-less solution So you're pushing out the instructions from a server into your network and there isn't a client environment There isn't two-way communication back to the main server about what's actually happening in your network So let's talk about the next stage ease of setup So if we look at the four tools there is one tool that really stands out for ease of setup and that is Ansible It is going to be the easiest tool for you to set up And if you're new to having these types of tools in your environment you may want to start with Ansible just to try out and see how easy it is to create automated configuration before looking at other tools Now with that said Chef Puppet and Salt Stack aren't that hard to set up either And you'll find there are actually some great instructions on how to do that setup in the online community Let's talk about the languages that you can use in your configuration So we have two different types of language with both Chef and Ansible being procedural in that your instructions specify how the task is supposed to be done With Puppet and Salt Stack it's declarative where you specify only what to do in the instructions Let's talk about scalability Which tools scale the most effectively And as you can imagine all of these tools are designed for scalability That is the driver for these kinds of tools You want them to be able to scale to massive organizations What do the management tools look like for our four contenders So again we have a two-way split with Ansible and Salt Stack The management tools are really easy to use You're going to love using them They're just fantastic to use With Puppet and Chef the management tools are much harder to learn and they do require that you learn either the Puppet DSL or the Ruby DSL to be a true master in that environment But what does interoperability look like Again as you'd imagine similar to scalability interoperability with these products is very high in all four cases Now let's talk about cloud availability This is increasingly becoming more important for organizations as they move rapidly onto cloud services Well both Ansible and Salt Stack have a big fail here Neither of them is available in the most popular cloud environments Whereas Puppet and Chef are actually available in both Amazon and Azure We actually haven't had a chance to update our Chef link here but Chef is now available on Azure as well as Amazon
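Going back to the procedural versus declarative split for a moment here's a hedged side-by-side of the same install and start Apache intent The syntax of each snippet is genuine but treat them as illustrations rather than complete recipes or manifests

# Chef recipe procedural Ruby that runs top to bottom
package 'httpd' do
  action :install
end
service 'httpd' do
  action [:enable, :start]
end

# Puppet manifest declarative you state only the end state
package { 'httpd': ensure => installed }
service { 'httpd': ensure => running, enable => true }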
So what does communication look like with all of our four tools So the communication is slightly different between them Chef has its own knife tool whereas Puppet uses SSL the secure sockets layer and Ansible and Salt Stack use SSH the secure shell as their communication tool Bottom line all four tools are very secure in their communication So who wins Well here's the reality All four tools are very good And it's really dependent on your capabilities and the type of environment that you're looking to manage which will determine which of these four tools you should use The tools themselves are open-source so go out and experiment with them Our team has done a ton of videos on these tools so feel free to find the other videos we have covered so you can learn very quickly how to use them But consider the requirements that you have and consider the capabilities of your team If you have Ruby developers or you have someone on your team that knows Ruby your ability to choose a broader set of tools becomes much more interesting If however you're new to coding then you may want to consider YAML-based tools Again the final answer is going to be up to you and we'll be really interested in what your decision is Monitoring as the term says means you're monitoring watching and logging your production environment So of course there are a whole bunch of monitoring tools They become an important part of your production environment and a lot of these monitoring tools I've also seen being used especially in your UAT environment and you can optionally have them for some time even on your development side development servers are usually not very high-end configurations but maybe on a decent development/integration server especially if you have long running scripts and programs that use a lot of processing power So then you can have monitoring tools when you're writing such scripts and doing the unit testing for those scripts to see what kind of server utilization happens when you run them If you put this in production will it actually slow down your production server and what kind of impact will that have on the rest of your application or other applications running on that server But this particular chapter is more in the context of production environments So these tools basically monitor your servers they monitor your switches and of course they monitor your applications and any services that you have deployed on your servers and they generate alerts when something goes wrong That's the whole job of monitoring It is continuously watching continuously looking at what is running what is happening what is going up what is going down when is CPU peaking when is memory peaking and all that So you typically set limits for all these different parameters and anytime any of these parameters goes outside of its limit either more than it or less than it these monitoring tools send out an alert and these alerts could be SMS alerts or email alerts and there are usually people watching these monitoring tools to look out for any issues reported They also generate alerts when the problem has been resolved So it works both ways
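Jumping slightly ahead to Nagios which we're about to meet here's a hedged sketch of how such a limit and alert is typically expressed The define service syntax and the warning critical convention are standard Nagios but the host name and command are made up for illustration

define service {
    use                   generic-service
    host_name             prod-web-01            ; hypothetical host
    service_description   CPU Load
    check_command         check_nrpe!check_load
    notification_options  w,c,r                  ; notify on warning critical and recovery
}

The r in notification_options is what produces the problem resolved alert mentioned above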
Nagios is an open-source monitoring tool and it can even monitor your network services There's a little diagram here which is a little too small but Nagios sits in the middle On one side are the different ways it sends out status there's a browser there's SMS there's email and then there's a graph as well And on the other side are the different objects that Nagios is monitoring an SMTP server a database server an application server and a switch router So these are the different kinds of servers and devices that Nagios monitors and those are the different channels through which it can send a status So it helps monitor your CPU usage your disk usage and even your system logs and it uses plug-in scripts that can be written in any scripting language really The Nagios Remote Plugin Executor or NRPE is basically an agent that allows remote scripts to be executed as well And these scripts are usually executed to monitor again your CPU the number of users logged in who is logged in who logged in at what time who logged out at what time and all those things So all these monitoring tools work on the concept of polling So the NRPE agent is a program that will continuously keep polling a machine for certain parameters that are configured in Nagios to be monitored So this program continuously keeps pinging the server checking for whatever it has been asked to check So in the case of logged in users it keeps checking maybe every 30 seconds or every 1 minute to see how many users have logged in onto this server who the users are what time they logged in what time they logged out and things like that So Nagios polls agents on remote machines That is what it basically means Nagios has agent programs that can help you poll or ping even remote machines The Nagios Remote Data Processor or NRDP is an agent that allows flexible data transports and it uses HTTP and XML protocols to do that And here we're talking essentially about your databases and data server usage like if you have an Oracle database how many database instances are there how your load balancing is set up on that how data is moving between different database servers within Oracle and how data is moving within the load balancers and there's always a DRP there's always a backup with databases so that's why you see me mention DRP as soon as I say the word database and if there's a backup plan how is the data moving how much time did the backup take did it take too much time and why So it helps you do all those kinds of monitoring The NSClient++ agent is mainly used to monitor Windows machines And typically when we talk about servers we end up talking more about Unix or Linux servers Of course now with a lot of Microsoft technologies being more robust than they were like SharePoint and things like that there are Windows servers too but 10 years ago if you would talk about having a Windows server it was actually kind of frowned upon especially for production And again this helps you monitor your usual CPU and disk usage and it pulls the plugin and this particular agent always listens on one particular port So that's a reserved port and usually your system administrators or server administrators know all these things
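As an illustration of the NRPE polling described earlier the agent exposes named commands in its nrpe.cfg and the Nagios server polls them on a schedule The command[] syntax and the -w warning and -c critical flags are standard while the paths thresholds and IP are assumptions

# nrpe.cfg on the monitored machine
command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20

# the Nagios server polls for example every minute
/usr/local/nagios/libexec/check_nrpe -H 192.168.2.50 -c check_users    # agent IP is hypothetical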
Today let's get started with Jenkins Jenkins in my opinion is one of the most popular continuous integration servers of recent times What began as a hobby project by a developer working for Sun Microsystems way back in the early or mid 2000s has gradually and eventually evolved into a very powerful and robust automation server It has seen wide adoption since it is released under the MIT license and is free to use Jenkins has a vast developer community that supports it by writing all kinds of plugins Plugins are the heart and soul of Jenkins because using plugins one can connect Jenkins to anything and everything under the sun With that introduction let's get into what all will be covered as a part of this tutorial I will get into some of the prerequisites required for installing Jenkins post which I will go ahead and install Jenkins on a Windows box There are a few first-time configurations that need to be done and I will be covering those as well So once I have Jenkins installed and configured properly I will get into the user administration part I'll create a few users and I will use some plugins for setting up various kinds of access permissions for these users I will also put in some freestyle jobs A freestyle job is nothing but a very very simple job And I will also show you the power of Jenkins by scheduling this particular job to run based upon a time schedule I will also connect Jenkins with GitHub which is our source code repository where I've got some repositories put up So using Jenkins I will connect to GitHub pull a repository that exists on GitHub onto the Jenkins box and run a few commands to build this particular repository that is pulled from GitHub Sending out emails is a very important configuration of Jenkins or any other continuous integration server for that matter Whenever there are any notifications that have to be sent out as a part of either a build going bad or a build being good or a build being propagated to some environment in all these scenarios you would need the continuous integration server to send out notifications So I will get into a little bit of detail on how to configure Jenkins for sending out emails I will also get into a scenario where I would have a web application a Maven based Java web application which will be pulled from a GitHub repository and I will deploy it onto a Tomcat server The Tomcat server will be locally running on my system Eventually I will get into one other very important topic which is the master slave configuration It's a very important and pretty interesting topic where distributed builds are achieved using a master slave configuration So I will bring up a slave I will connect the slave with the master and I will also put in a job and kind of delegate that particular job to the slave configuration Finally I will let you know how to use some plugins to back up your Jenkins So Jenkins has got a lot of useful information set up on it in terms of the build environments and in terms of workspace All this can be very easily backed up using a plug-in So this is what I'm going to be covering as a part of this tutorial
installed on any system. The topmost one is as a Windows or Linux service. Since I'm on Windows, I'm going to use this mechanism for the demo: I download the MSI installer that is specific to Jenkins and install it as a service. It nicely installs everything that is required, and I get a service that can be started or stopped based on my need; the same idea works on any flavor of Linux. Another way of running Jenkins is to download the generic war file: as long as you have a JDK installed, you can launch it by opening a command prompt (or a shell prompt if you're on a Linux box) and specifying java -jar and the name of the war file. That brings up the web application and you can continue with your installation. The only thing is, if you want to stop Jenkins, you just do a Ctrl+C or close that prompt, and your Jenkins server is down. Older versions of Jenkins were popularly run a third way, where you already have a Java-based web server up and running: you drop the war file into the root folder, the HTTP root folder of that web server, Jenkins explodes the war and brings up the application, and all user credentials and user administration are taken care of by the Apache or Tomcat server on which Jenkins is running. This is an older way of running it, but some people still use it, because if they already have a nicely maintained and backed-up Java web server, they don't want to maintain two servers; Jenkins can run attached to it. Either way, it doesn't matter how you bring up your Jenkins instance: the way we operate Jenkins is going to be the same or very similar, with subtle changes around user administration if you launch it through another web server that takes care of that for you. Otherwise, all the commands and all the configuration in this demo are going to be the same across any of these installations.
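As a quick recap of that war-file route, the launch is a one-liner, assuming the JDK is on your path (the --httpPort flag is optional; I'm citing it from standard Jenkins usage rather than from this demo):

java -jar jenkins.war
java -jar jenkins.war --httpPort=9090

The second form is handy if port 8080 is already taken on your machine.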
All right, so the prerequisites for running Jenkins: as I mentioned earlier, Jenkins is nothing but a simple web application written in Java, so all it needs is Java, preferably JDK 1.7 or 1.8, and 2 GB is the recommended RAM for running Jenkins. Also, like with any other open-source tool set, when you install the JDK, ensure that you set the environment variable JAVA_HOME to point to the right directory. This one is specific to the JDK, but for almost any open-source tool you install there's a preferred environment variable you have to set that is specific to that tool, because the way open-source projects discover each other is through these environment variables; as a general good practice, always set them accordingly. I already have JDK 1.8 installed on my system, but in case you do not, just search for the JDK 1.8 download and navigate to the Oracle homepage; you'll have to accept the license agreement, and there are a bunch of installers you can pick from based on the operating system you're running. I have the Windows 64-bit installer already installed and running on my system, so I will not get into the details of downloading or installing it, but let me show you what I've done with regard to my path. If you get into the environment variables, I have set a JAVA_HOME variable: C:\Program Files\Java\jdk1.8.0_<update>, which is the home directory of my JDK, and that is what I've set up here as JAVA_HOME. One other thing: in case you want to run java or javac from a command prompt, ensure that you also add the JDK's bin directory to the PATH variable, and if you look, there it is: C:\Program Files\Java\jdk1.8.0_<update>\bin. With these two, my Java installation should be good. To double-check, let me open a simple command prompt: java -version works, javac -version works, so the compiler is on the path and java is on the path, and if I echo the environment variable, that is set correctly too. So I am good to go ahead with my Jenkins installation. Now that the prerequisites are all set, let me just go ahead and download Jenkins; let me open a browser and search for "download Jenkins". LTS is nothing but long-term support; these are the stable versions. The weekly builds I would not recommend unless you have a real need for them; long-term support is good enough. And as I mentioned, there are many flavors of Jenkins available for download; there's even a Docker container, where you can launch Jenkins as a container, but I'll not get into the details of that in this tutorial. What I want is this war file, the generic war file I was telling you about earlier, and this Windows MSI installer; go ahead and download the MSI installer. I already have it downloaded; it's maybe a few months old, but it's good enough for me. Before you start the Jenkins installation, just be aware of one fact: there is a variable called JENKINS_HOME. This is where Jenkins stores all its configuration data, jobs, project workspaces, everything specific to Jenkins. By default, if you don't set it to any particular directory and you run the MSI installer, everything goes into the C:\Program Files (x86)\Jenkins folder; if you run the war file, a Jenkins folder gets created inside the home directory of the user you're running the war as. So in case you have a need to back up your Jenkins, or you want the installation to go into some specific directory, go ahead and set this JENKINS_HOME variable accordingly before you even begin your installation.
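For instance, if you were going the war-file route, you could point Jenkins at an explicit home directory for that session like this (the D:\jenkins-data path here is purely hypothetical):

set JENKINS_HOME=D:\jenkins-data
java -jar jenkins.war

With the MSI service install I'm about to do, the default location is fine for me.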
For now I don't need to do any of these things; I've already downloaded the installer, so let me just go ahead with the default installation. This is my Jenkins MSI installer. I don't want to make any changes to the configuration: C:\Program Files is good for me as the destination folder, and that's where all the configuration specific to it goes. I'm happy with this, so I just go ahead and click install. What typically happens once the installation gets through is that Jenkins starts itself up, and there are some small checks that need to be done. By default Jenkins launches on port 8080, so let me open up localhost:8080. There's a small check done as part of the installation process where I need to type in a hash key. A very simple hash key gets stored in a file, and I just have to copy it from that path; if you're running the war file, you would see the same key in your logs. This key gets created fresh every time you do a Jenkins installation, and the installer just asks you to paste it in; if it's not correct it'll crib about it, but this looks good, so it's going ahead. One important part during the installation: you need to install some recommended plugins. What happens is that plugins are all related to each other, so it's like the typical RPM kind of problem, where you try to install some plugin, it has a dependency that is not installed, and you get into all those issues. To get rid of that, Jenkins recommends a bunch of plugins; just go ahead and click "install recommended plugins". You'll see a whole list of bare-essential plugins that Jenkins needs in order to run properly, and Jenkins, as part of the installation, fetches and installs all of them for you; it's a good combination to begin with. And mind you, at this moment Jenkins needs a lot of network bandwidth, so in case your network is not so good, a few of these plugins may fail; they're hosted on openly available or mirrored sites, and sometimes some of those are down. Don't worry if some plugins fail to install; you'll get an option to retry them. Just ensure that at least 90 to 95 percent of these plugins install without any problems. Let me pause the video here for a minute and get back once all the plugins are installed. My plugin installation is all good; there were no failures. After that, I get to create the first admin user. This is one important point to remember: key in any username and password you like, but ensure that you remember them, because it's very hard to get your account back in case you forget. So I'm going to create a very simple username and password, something I can remember; the email ID looks optional, but it doesn't let me go ahead if I leave it out, so I give that in too, along with my full name. I say save and finish, and that completes my Jenkins installation. It was not that tough, was it?
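One aside on that unlock step: the hash key is just sitting in a text file under the Jenkins home, so a quick way to grab it on Windows, assuming the default MSI location mentioned earlier (your path may differ), is:

type "C:\Program Files (x86)\Jenkins\secrets\initialAdminPassword"

Copy the output and paste it into the unlock screen.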
Now that I have Jenkins installed correctly, let me quickly walk you through some bare-minimum configuration that is required; these are first-time settings. Also, let me warn you: the UI is a little hard for many people to wrap their heads around, specifically the Windows folks. But if you're a Java person, you know how painful it is to write UIs in Java, so you'll appreciate all the effort that has gone into this one. Bottom line, the UI is a little hard to get used to, but once you start using it, you'll probably start liking it. All right, let me get into something called Manage Jenkins; this can be viewed as the main menu for all the Jenkins configuration, and I'll get into some of the important entries. First, Configure System: this is where you put in the configuration for your complete Jenkins instance. A few things to look out for. The home directory: this is where all the configuration, all the workspaces, anything and everything regarding Jenkins is stored. The system message: if you want some message shown on the system, just type in whatever you want, and it shows up on the main page. Number of executors: a very, very important configuration. This tells Jenkins how many jobs can run at any point in time; you can visualize an executor like a thread that runs on this particular instance. As a thumb rule, if you're on a single-core system, two executors should be good enough. If at some point more jobs get triggered at the same time than there are executors, there's no need to panic: they all get queued up, and eventually Jenkins gets to running them. Just bear in mind that whenever a new job gets triggered, the CPU usage, the memory usage and the disk writes are quite high on the Jenkins instance; that's something to keep in mind. A number of executors of two is good for my system. Labels for my Jenkins: I don't want any of those. Usage, meaning how you want to use this Jenkins: "use this node as much as possible" is good for me, because I only have the one primary server running. Quiet period and the rest: each of these options has some bare-minimum help available, and by clicking on the question marks you get to know what each configuration is about. This all looks good. There are settings here regarding Docker, timestamps, the git plugin, SVN; email notifications of that sort I don't want. What I do want is this SMTP server configuration. Remember, I mentioned earlier that I want Jenkins to send out some emails, and what I've done here is configure the SMTP details of my personal email ID. If you're in an organization, you'd have some email ID set up for the Jenkins server, and you'd specify your company's SMTP server details so that Jenkins is authorized to send out emails. But in case you want to try it out like me, configure a personal Gmail ID for sending notifications: the SMTP server would be smtp.gmail.com, I'm using SMTP authentication, I have provided my email ID and my password, I'm using SMTP port 465, and the reply-to address is the same as mine. One caveat: Gmail would not normally allow anybody to send out notifications on your behalf, so you'll have to lower the security level of your Gmail ID to allow a program to send emails for you. I've already done that, so let me just send a test email with the configuration I've set in.
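Just to recap the mailer fields as I filled them in for Gmail (the field labels below are roughly what the form shows; double-check them on your version of the plugin):

SMTP server:              smtp.gmail.com
Use SMTP authentication:  yes (Gmail address + password)
Use SSL:                  yes
SMTP port:                465
Reply-to address:         the same Gmail address

With an organizational SMTP server you'd swap in your company's host, port and credentials instead.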
Yes, the test email went through, so the email configuration looks good. That is how you configure your Gmail account in case you want to go that route; if not, put in your organization's SMTP server details with a valid username and password, and it should all be set. No other configuration that I'm going to change here; all of this looks good, so I come back to Manage Jenkins. One other thing I want to go over is the Global Tool Configuration. Look at it this way: Jenkins is a continuous integration server. It doesn't know what kind of code base it's going to pull in, what kind of tool set is required, or how that code is going to build. So you have to put in all the tools required for building whatever kind of code you're going to pull in from your source code repositories. To give an example: suppose your source code is Java code. In this demo my laptop is the Jenkins box, and since I'm a developer I have the JDK and all the configuration already on it; but a real continuous integration server would be a separate machine without anything installed on it. So in case I want Jenkins to build Java code, I need to install a JDK on it, and I need to specify the JDK location out here. Since I already have the JDK installed and the JAVA_HOME environment variable set correctly, I don't need to do it. Git: if you want the Jenkins server to use git from the command prompt and connect to some git server, you need git installed on that particular system, with the path set accordingly. Gradle and Maven: if you build with Maven as well, you do the same for it. Any other tool you install on your continuous integration server, you have to come in here and configure it; in case you don't, then when Jenkins runs, it will not be able to find those tools for building your task, and it'll crib about it. That's good; I don't want to save anything here. Back to Manage Jenkins; let me see what else is required. Yes: Configure Global Security. The security is enabled, and you'll see that by default the access control is set to Jenkins' own user database. What does this mean? By default, Jenkins uses the file system, where it stores all the usernames, hashed; that is what it's configured to use as of now. Assuming you're in an organization, you would probably want some sort of AD or LDAP server to control access to your Jenkins instance: you would specify your LDAP server details, the root DN, the manager DN and the manager password, and all those details, in case you want to connect your Jenkins instance with your LDAP, AD, or whatever authentication servers you have in your organization. But for now, since I don't have any of those, I'm going to use Jenkins' own database; that's good enough. I will set up some authorization methods and so on once I've put in a few jobs, so for now let me not get into those details.
Just be aware that Jenkins can be connected to an LDAP server for authorization, or Jenkins can manage its own user database, which is what is happening as of now. I'm going to save all of this. So, enough of the configuration; let me put in a very simple job. New Item is the entry for that; a little difficult to figure out at first, but that's the one. I'll just give a name for my job, "first job", and I'd say it's a freestyle project; that's good enough for me. Note that unless and until you choose one of these project types, the OK button does not become active, so choose the freestyle project and say OK. At a very high level you see General, Source Code Management, Build Triggers, Build Environment, Build and Post-build sections; as you install more and more plugins you'll see a lot more options, but for now this is what you see. So what am I doing at the moment? Just putting up a very simple job; a job could be anything and everything, but I don't want anything complicated for this demo. I'll give a description, which is optional: "This is my first Jenkins job." I don't want to choose any of the other options (again, there's help available for each of them); I don't want to connect it to any source code for now; I don't want any triggers for now, I'll come back to those in a while; and no build environment settings either. As the build step, I just want to run a couple of small things so the job actually does something: since I'm on a Windows box, I choose "Execute Windows batch command" and echo a hello message, plus the date and the timestamp at which this job was run. A very, very simple command that prints a line along with the date and the time. I don't want to do anything else; let me keep the job as simple as this and save it. Once I save the job, the job name comes up here, and there's a build history section, which shows nothing as of now, because I've just put in the job and have not run it yet. So let me try to build it now: you see a build number, a date and a timestamp, and if I click on it, I see the console output. As simple as that. And where do all the job details go? If I navigate to the JENKINS_HOME directory I was mentioning earlier, all the job-related data specific to this Jenkins installation is there: all the plugins that are installed, with the details of each of them, and the workspace folder, which holds an individual folder for each of the jobs that has been put up and run here. So: one job, one quick run; that's what it looks like. Pretty simple.
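For the record, my build step is just two batch lines; %date% and %time% are standard Windows shell variables (this is a sketch of what I typed, not a copy-paste):

echo Hello, this is my first Jenkins job
echo %date% %time%

Whatever these lines print is exactly what you see in the job's console output.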
Okay, let me do one thing and put up a second job; I'll call it "second job", again a freestyle project. With this one I just want to demonstrate the power of the automation server, and how simple it is to automate a job put up on Jenkins so that it gets triggered automatically; remember what I said earlier, because at the core of Jenkins is a very, very powerful automation server. I'll keep everything else the same and put in a build script pretty much like the first one: a second job that prints the date and the time, but this time it gets triggered automatically every minute. If you see here, there's a Build Triggers section, and a build can be triggered in various ways; we'll get into the GitHub web-hook style of triggering later on. For now, what I want is to ensure that this job gets triggered on its own, say every minute, so "Build periodically" is my setting. There's a bunch of help available here: for those of you who have written cron jobs on Linux boxes, you'll find this very simple, but for the others, don't panic. Let me just put in a very simple schedule expression for running this job every minute: that's five fields, 1 2 3 4 5, so five stars is all that I'm going to put in. And Jenkins gets a little worried and asks me, "Do you really mean every minute?" Oh yes, I want to do this every minute. Let me save this. And how do I check whether it gets triggered every minute or not? I just don't do anything; I wait for a minute, and if everything goes well, Jenkins will trigger my second job on its own a minute from now. This time around, I'm not going to trigger anything. Look there: it got triggered automatically. If I go in, yes, the second job got triggered automatically at 1642, which is 4:42 my time. That looks good, and if everything goes well, from here on this job will be triggered automatically every minute.
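For reference, the schedule field takes cron-style syntax: five fields for minute, hour, day of month, month, and day of week. Jenkins also understands an H token that hash-spreads the load across jobs; the second line below is just an illustration of that, not what I used:

* * * * *
H/15 * * * *

The first line is the demo's every-minute schedule; the second would run the job every fifteen minutes at a Jenkins-chosen offset.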
Now that I have Jenkins up and running, with a few jobs put up on my Jenkins instance, I need a way of controlling access to my Jenkins server. This is where I use a plugin called the role-based access plugin and create a few roles. The roles are things like a global role and a project-specific role; I can have different roles, and I can assign the users who have signed up, or the users whom I create, to these roles, so that each user falls into some category. This is my way of controlling access to my Jenkins instance and ensuring that people don't do something unwarranted. First things first, let me go ahead and install the plugin for this: I get into Manage Jenkins and then Manage Plugins. It's a little bit of a confusing screen, in my opinion: there are Updates, Available, Installed and Advanced tabs. As of now we don't have the role-based plugin, so let me go to Available; it takes some time to refresh. These are the available plugins, and those are the installed ones. Back on Available, I search for "role" and hit enter. There it is: "Role-based Authorization Strategy" enables user authorization using a role-based strategy, and roles can be defined globally or for particular jobs or nodes, and so on. Exactly the plugin I want, and I'll install it without a restart. Looks good so far; go back to the top of the page. Remember, Jenkins runs on a Java instance, so typically many things keep working the same way without a restart. But as a good practice, whenever you do a big installation or apply big patches on your Jenkins instance, make sure you restart it; otherwise there can be a difference between what's loaded in memory and what is on the file system, and you'd need to flush out a few of those settings later on. For now, these are all very small plugins, so they'll run without any problems; but if there are plugins that need a restart, kindly go ahead and restart your Jenkins instance. It looks good; I've installed the plugin. So where do I see the plugin I installed for user and access control? Let me go into Configure Global Security: I now see this "Role-Based Strategy" option showing up, and it comes in because of the installation of my role-based plugin. This is what I want to enable: I already have Jenkins' own user database set up for authentication, and for the authorization part, the question of who can do what, I enable the role-based strategy that I've just installed, and I say save. Now that the role-based access plugin is installed and enabled, I need to set it up: go ahead and create some roles and assign users as per those roles. Let me find where that is: Manage Jenkins, Configure Global Security? No, not there. Yes: "Manage and Assign Roles"; again, you see these options only after you install the plugin. I'll go ahead and create some roles for this particular Jenkins instance, starting with Manage Roles. At a very high level there are global roles, project roles, and slave roles; I'll not get into the details of all of these. Under global roles, let me just create a role; a role can be visualized like a group. I'll create a role called "developer". Typically the Jenkins instance, the CI instance, is owned and controlled by the QA folks, and the QA folks need to provide some sort of limited access to developers; that's why I'm creating a role called developer, and I'm adding it at the global role level. I say add, the developer role shows up, and for each of the permission options, if you hover over it, you see some help on what that permission is specific to. It may sound a little odd, but I want to give the developer very, very few permissions: from the administration perspective just Read, for credentials just View, and I don't want him to create any agents or anything like that.
For a job, I just want him to be able to read: I don't want him to build, I don't want him to cancel any jobs, I don't want him to configure any job, I don't even want him to create any job; I would not give him access to the workspace either, just read-only access to jobs. Run permissions: no, I don't want to give him anything that would allow him to run any jobs. For views: configure, create, delete, read, yes, those are fine. So what am I doing? I'm creating a global role called developer and giving it very limited permissions, in the sense that this developer cannot run agents, create jobs, build jobs, cancel jobs, or configure jobs; at the most, he can read a job that is already put up there. So I save that. Now I've created a role, but I still don't have any users on the system, so let me go ahead and create one: Manage Jenkins, Manage Users, and then Create User. I'll call this user "developer1", that sounds good; some password that I can remember; his full name; and an email ID, developer1 at something dot com or the like. So now there's the admin, the account with which I configured and brought up the system, and developer1, the user I've just created. I still have not set any roles for this user, so I go to Manage and Assign Roles, and then Assign Roles. What I'm going to do now is find that particular user and assign him the developer role that I have already configured. The role shows up here; the user I created was developer1, so I add that particular user, and since what I created was a global role, I assign developer1 to the developer global role and go ahead and save my changes. Now let me check the permissions of this particular user by logging out of my admin account and logging back in as developer1. If you remember, this role was created with very few privileges, and there you go: I have Jenkins, but I don't see New Item; I can't trigger a new job; I can't do anything. I see the jobs, but I don't think I'll be able to start them; I don't have the permission set for that. The maximum I can do is look at a job, see what was there in the console output, and things like that. So this is the limited role that was created, and I added this developer to that particular role so that developers don't get to configure any of the jobs, because the Jenkins instance is owned by a QA person who doesn't want to give developers any administrative rights. Those rights were set out by creating a developer role, and any user who is tagged as part of that developer role gets the same kind of permissions. These permissions can be fine-grained, even project-specific, but for now I just demonstrated the high-level permissions that I had set. Let me quickly log out of this user and get back in as the admin user, because I need to
continue with my demo; with the developer role that was created, I'd have very, very few privileges. One of the reasons Jenkins is so popular, as I mentioned earlier, is the bunch of plugins provided by community users, who don't charge any money for them; there are plugins for connecting to anything and everything. If you search for Jenkins plugins, you'll find an index of a huge number of them, and all of them are wonderful: whatever connector you need, whether you want to connect Jenkins to an AWS instance, to a Docker instance or any of those containers, there's a plugin you can go and search up; if I want to connect Jenkins to Bitbucket, which is one of the git servers, there are plugins available for that too. So, bottom line: Jenkins without plugins is nothing. Plugins are the heart of Jenkins; in order to connect Jenkins with any of the containers or any of the other tool sets, you need the plugins. If you want to build a repository that has Java and Maven, you need Maven and a JDK installed on your Jenkins instance. If you're looking at a .NET or Microsoft build, you need MSBuild installed on your Jenkins instance, plus the plugins that will trigger MSBuild. If you want to listen to server-side web hooks from GitHub, you need the GitHub-specific plugins. If you want to connect Jenkins to AWS, you need those plugins. If you want to connect to a Docker instance running anywhere in the world, as long as you have a URL that is publicly reachable, you just install the Docker plugin on your Jenkins instance. SonarQube is one of the popular static code analyzers, and you can have a Jenkins job push a build to SonarQube, get SonarQube to run its analysis, and get the results back in Jenkins. All of this works very well because of the plugins. With that, let me connect our Jenkins instance to GitHub. I already have a very simple Java repository up on my GitHub account, so let me connect Jenkins to that particular GitHub repository and pull out what is put up there. This is the repository, called hello-java, and all that's in it is a simple class file, a hello-java application with just one line of System.out. It is present on github.com, and this https URL is the URL for the repository. So what I'm going to do is have my Jenkins instance go to GitHub, provide my credentials, pull this repository from the cloud-hosted github.com down to my Jenkins box, and then build this particular Java file. I'm keeping the source code very, very simple: it's just one Java file. How do I compile my Java file? I just say javac and the name of the file, Hello.java. And how do I run it? I say java Hello. And remember, I don't need to install any plugins for this part, because what it needs is the git plugin, and if you remember, git was among the bunch of recommended plugins we installed at setup time, so it's already on my system; I don't need to install it again.
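So the build steps for this job really are just these two commands, run from the job's workspace where the repository gets checked out (assuming the file is Hello.java and the class inside it is named Hello, as in my repo):

javac Hello.java
java Hello

The first compiles Hello.java into Hello.class; the second runs it and prints that one System.out line into the console output.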
Let me put up a new job here; I'll call it "git job", and a freestyle project is good for me; I say OK. Now, Source Code Management: remember, in the earlier examples we did not use any source code, because we were just putting up echo-style jobs that needed no integration with any source code system. This time, let me connect it: I pick a source code system, and Git shows up because the plugin is already there. SVN, Perforce, any of those additional source code management tools: if you need them, just install those plugins, and Jenkins connects wonderfully well to all of those source control tools. So I copy the HTTPS URL from the repository and say this is the URL I'm supposed to go and grab my source code from. That sounds good, but what about the username and password? I have to specify credentials: I enter my GitHub username and my HTTPS password for this job, save it with Add, and then tell Jenkins to use these credentials to go to GitHub on my behalf and pull out the repository. If at this stage there's any error, whether Jenkins is not able to find git or git.exe, or my credentials are wrong, you would see a red message somewhere down here saying that something is not right, and you could just go ahead and fix it. For now, this looks good for me; I'm going to grab this URL. So what does this step do? It pulls the source code from GitHub. And then what goes into my build step? This repository just has the one Java file, Hello.java, so in order to build it I just say "Execute Windows batch command" and enter javac Hello.java; that is the way I build my Java code. And if I have to run it, I just say java Hello. Pretty simple, two steps, and they run after the repository contents are fetched from GitHub. javac, then java; that sounds good. I save this, and let me try to run it. If you look at the console, there's a lot going on: it executes git on my behalf, goes out to GitHub, provides my credentials, pulls my repository (by default it pulls the master branch), and then builds the whole thing, javac Hello.java, and runs the project, java Hello, and there you see the output. And if you want to look at the contents of the repository, you can go to the workspace on my system: hang on, this is not the right one; okay, "git job": here is Hello.java, the same program that was on the GitHub repository. So Jenkins, on our behalf, went all the way over to GitHub, pulled this repository from there, brought it down to my local system, my Jenkins instance, compiled it, and ran this particular application. Okay, now that I've integrated Jenkins successfully with GitHub for a simple Java application, let me build a little bit on top of it. What I have is a Maven-based web application that is up as a repository in my GitHub; this is the repository I'm talking about, called mvn-web-app. It's a Maven-based repository.
As you would know, Maven is a very simple Java-based build tool that lets you run various targets: based upon the goals you specify, it can compile, it can run some tests, it can build a war file, and it can even deploy it to some other server. For now, we're going to use Maven just for building and creating a package out of this particular web application. The repo contains a bunch of things, but what matters is the index.jsp: it just contains an HTML page that is part of this web application. From a requirements perspective, since I'm going to connect Jenkins with this particular git repository, git is already set; we only need two other things. One is Maven, because Jenkins will use Maven, and in order to use Maven, Jenkins has to have a Maven installation on the Jenkins box, and in this case the Jenkins box is this laptop. And after I have Maven installed, I also need a Tomcat server: Tomcat is a very simple web server that you can download for free, and I'll show you how to quickly download and install it. All right, so download Maven first. There are various ways to get it: there are binary zip files and archive files. What I've done is download it already, and if you see, I've unzipped it here; this is the folder into which I have unzipped my Maven. As you know, Maven again is an open-source build tool, so you have to set in a few configurations and set up the path: after that, mvn -version should work, and if I echo M2_HOME, which is nothing but the environment variable for the Maven home, it is already set here. So once you unzip Maven, just set this M2_HOME variable to the directory where you unzipped it, and also add that particular directory's \bin to the path, because that is where the Maven executables are all found. So that's it for Maven; since I've set the path and the environment variable, Maven runs perfectly fine on my system, and I've just verified it. The next one is the Tomcat server; Apache Tomcat 8.5 is what I have on my system, and I'm just going to show you where to download it from. I already have the server downloaded, and it doesn't need any installation: I just unzip it, and it has a bin folder and a conf folder. I have made some subtle changes in the configuration. First and foremost, by default the Tomcat server also runs on port 8080; since we already have our Jenkins server running on port 8080, we cannot let Tomcat run on the same port, there would be a port clash. So what I've done is configure Tomcat to use a different port: if I go to the conf folder, there is a server.xml, and the port, which is 8080 by default, I have modified to 8081. So I've changed the port on which my Tomcat server runs; that's one change. The second change: when Jenkins tries to get into my Tomcat and deploy something, it needs some authentication so that Tomcat will allow the deployment. For that, I need to create a user on Tomcat and provide those user credentials to my Jenkins instance: in the tomcat-users.xml file I've already created a username called "deployer", the password is "deployer", and I've added a role called "manager-script". The manager-script role allows programmatic access to the Tomcat server. So that is the role that's there.
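For the record, the two edits in Tomcat's conf folder look roughly like this; the port, username and password are the demo's values, and the remaining connector attributes are the stock ones shipped with Tomcat:

<!-- conf/server.xml: move the HTTP connector off 8080 so it doesn't clash with Jenkins -->
<Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443"/>

<!-- conf/tomcat-users.xml: a user Jenkins can deploy as; manager-script permits programmatic access -->
<role rolename="manager-script"/>
<user username="deployer" password="deployer" roles="manager-script"/>

As a design note, the Tomcat documentation keeps the script role separate from the GUI manager role on purpose, so a CI user like this one only needs manager-script.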
Using these credentials, I will empower Jenkins to get into my Tomcat server and deploy my application. Those are the only two things required. Let me just start my Tomcat server first: I get into the bin folder, open a command prompt here, and there's a startup.bat. It's pretty fast, it just takes a few seconds; yes, there you go, the Tomcat server is up and running, on port 8081 now. So let me just check that: localhost:8081, and my Tomcat server is up and running. That sounds good, and the user is already configured on it; that's also fine. Maven is installed on my system as well, so I'm good to use Maven as part of my Jenkins job. So let me put up a job: I'll call it "mvn web app", a freestyle job, that's good, and I say OK. This will again be a git repository, and the URL of my git repository is this https URL. For credentials, the old ones I set up will work well, because it's the same git user I'm connecting as. Now the change happens here: since this is a Maven repository, after the checkout I will have some Maven targets to run. The simple first target is mvn package, which creates a war file: package is the goal, and whenever I run it, it builds, it tests, and then it creates a package. That's all that's required. Let me save this, run it first, and see whether it connects well and whether there's any problem with my war file, or whether the war file gets created properly. Okay, wonderful: it built a war file, and if you see, the console shows you the location where the war file was generated, inside the job's workspace; the war file was successfully built. Now I need to grab this particular war file and deploy it into the Tomcat server, and I need a small plugin to do this, because I need to connect Tomcat with my Jenkins server. Let me go ahead and install the plugin for container deployment: Manage Plugins, Available, type in "container": "Deploy to container", okay, that's the plugin I need, and I'll install it without a restart. Seems to be very fast; nope, sorry, it's still installing; okay, it installed the plugin. And if you go to my workspace, in the target folder you see the web application war file that has already been built. So I need to configure this plugin to pick up that war file and deploy it onto the Tomcat server, and for deploying onto the Tomcat server I will use the credentials of the user I've created. So let me go configure this particular project again: all the existing bits are good, the package step stays as it is. Now I add a post-build step, so that after the war file is built as part of the package goal, it is deployed to a container; this option shows up only after you install the plugin. So, "Deploy war/ear to a container": the first thing you're supposed to specify is the location of the war. This is a glob-style setting that picks up the war file from the workspace root, so **/*.war is good for me.
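In other words, the entire build for this job is a single Maven goal, and the post-build step just has to find whatever it produced (the exact war file name depends on the project's pom; the one below is an assumption):

mvn package
REM on success, the artifact lands under the job workspace, e.g. target\mvn-web-app.war

That's why the **/*.war pattern is enough: it matches the war wherever Maven drops it under the workspace.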
The context path is nothing but the name of the application under which it will get deployed into the Tomcat server; I'll just say "mvn-web-app" as the name of mine. Next I need to specify what kind of container I'm talking about: the deployment is to Tomcat 8.x, because the server we have is a Tomcat 8.5 server. Then the credentials: yes, I need to add a credential for this particular server, and if you remember, I had created a credential for my web application; the username is deployer and the password is deployer. So I add a new Jenkins credential with that username and password, and I use this deployer credential for the deployment. And what is the URL of my Tomcat instance? localhost on port 8081, the Tomcat server that is running on my system. So, all together: take the war file found in this particular folder pattern, use the context path mvn-web-app, use the deployer credentials, get into this localhost:8081, and go ahead and deploy. That is all that is required, so I just save this and let me run it now. Okay: it built the war file successfully, it is deploying it, and it looks like the deployment went through perfectly well. The context path was mvn-web-app, so if I type that in after the server address, there's my application. And if I go into my Tomcat server's webapps folder, you can see from the date-time stamp that this is the file that just got copied, along with the exploded version of our application. So: the source code of this application was pulled from the GitHub server, it was built locally on the Jenkins instance, and then it was pushed into a Tomcat server running on a different port, 8081. For this demo I'm running everything locally on my system, but assuming this particular Tomcat instance were running on some other server with some other IP address, all that you have to go and change is the URL of the server in the post-build step. If you have a Tomcat server running on some other machine with a different IP, that's all good enough: the whole bundle, the war file that was built as part of this Jenkins job, gets transferred to the other server and gets deployed. That's the beauty of automated deployments using Jenkins and Maven.
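Putting that post-build section together, here is a recap of the fields as I set them for this demo:

WAR/EAR files:  **/*.war
Context path:   mvn-web-app
Container:      Tomcat 8.x
Credentials:    deployer / deployer
Tomcat URL:     http://localhost:8081

Swap the URL for the remote machine's address and the same job deploys across the network.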
Next, distributed builds, or the master-slave configuration in Jenkins. As you've seen, we just have one instance of the Jenkins server up and running all the time, and I also told you that whenever any job starts on the Jenkins server, it is a little heavy in terms of disk space and CPU utilization. If you're in an organization that is heavily reliant on the Jenkins server, you don't want your Jenkins server to go down, and that's where you start distributing the load on it. You primarily have one server that is just a placeholder, a master, who takes in all the jobs, and based on the trigger that fired, or whichever job needs to be built, he delegates those jobs onto some other machines, some slaves, and that's a wonderful thing to have; that's use case one. Use case two: assuming you have a Jenkins server running on a Windows box or a Linux one, you may need to build based upon operating systems, with multiple build configurations to support. Maybe you need to build Windows-based .NET projects, for which you need a Windows machine; you also have a requirement to build Linux-based systems; and you support some apps built on macOS, so you need to build on a Mac-based system as well. How are you going to support all these needs? That's where the beautiful concept of master and slave, or primary and delegation, or agent and master, comes into play. Typically you have one Jenkins server configured with all the proper authorizations, users and configurations; his job is just delegation. He listens for triggers, and based upon the job that is coming in, he has a nice way of delegating these jobs to somebody else and taking back the results; he can control a lot of other systems, and those systems don't need a complete Jenkins installation. All you have to do is run a very simple runner, or slave, which is a simple jar file that runs as a low-priority process within those systems. With that, you can have a wonderful distributed build setup, and in case one of the servers goes down, the master knows what went down and can delegate the task to somebody else. So that is the distributed build, or master-slave, configuration. What I'll do in this demo is set up a simple slave; but since I don't have too many machines to play around with, I'll set up the slave in another folder on my own disk. My Jenkins master is on my C drive, so I'll just use my E drive and set up a very simple slave out there; I'll show you how to provision a slave, how to connect to it, and how to delegate a job to it. Let me go back to my Jenkins master and configure it to talk to an agent. There are various ways in which this client and server can talk to each other; what I'm going to choose is something called JNLP, the Java Network Launch Protocol, and using this I'll ensure the client and server talk to each other. For that, I need to enable the JNLP port on the master. Let me try to find it: yes, Agents, and by default this JNLP agents port is disabled; there's a small help note on it here. So I enable it, and instead of the disabled default I set it to "random", and I save this configuration. Now I've made the setting on the master so that the JNLP port is opened up, so let me go ahead and create an agent. I go to Manage Nodes, and if you see here, there's only the one master.
So let me provision a new node here. This is the way you bring up a new node: you configure it on the server, Jenkins puts some security around this particular agent, and then it tells you how to launch the agent so that it can connect to our Jenkins master. I say New Node and give a name for my node; I'll call it "windows node", because both of these machines are Windows anyway, so that identifier is fine. I say this is a permanent agent, and I say OK. Let me just copy the name into the description as well. Number of executors: since it's a slave node, and both master and slave are running on my one system, I'll keep the number of executors at one; that's fine. Remote root directory: now this is where, let me just clarify this. My master is running on my C drive, under C:\Program Files (x86)\Jenkins; so I don't want the C drive for the slave. What I'll do is use my E drive instead; please visualize this as if you're running it on a separate system altogether. I create a folder there called "jenkins node", and this is where I'm going to provision my slave and run it from; so I copy that path in, and that is the remote root directory of this particular agent, or slave. The label: the default is fine for me. Usage, meaning how you want to use this node: I don't want it running all kinds of jobs, so I pick "only build jobs with label expressions matching this node". This is the label of this node, and for somebody to delegate any task to it, they'll have to specify that particular label. Imagine it this way: if I have a bunch of Windows systems, I name them all windows-something, and I can give a label expression saying anything that matches windows runs this kind of task there; if I have some Mac machines, I name all those agents mac-something, and I can delegate all the Mac jobs to whatever starts with mac. So you identify a node using the label and then delegate the task there. Launch method: we'll use Java Web Start, because we're going to use the JNLP protocol; that sounds good. Directories: I think nothing else is required. Availability: yes, keep this agent online as much as possible; that sounds good too. All right, let me save this; I'm provisioning this particular node now. If I click on the node, I get a bunch of commands along with an agent.jar. This agent.jar is what has to be taken over to the other machine, the slave node, and run from there along with a small security credential. So let me copy this whole command text into my notepad, Notepad++ is good for me, and also download the agent.jar; this jar is the one generated by our server, and all the details required for launching this agent are found inside it. Typically I'd take this jar file onto the other system and then run it from there; I have the agent.jar downloaded, so I copy it, or rather I cut it, from my downloads folder.
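The launch command that Jenkins shows on the node's page looks like this; the node name and work directory are the demo's values, and the secret is a placeholder for the token your own page displays:

java -jar agent.jar -jnlpUrl http://localhost:8080/computer/windows%20node/slave-agent.jnlp -secret <secret-from-node-page> -workDir "E:\jenkins node"

If the master and the agent were on different machines, the URL would carry the master's real IP or hostname instead of localhost.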
I come back to my folder, the jenkins node folder, and paste it there. So now the agent.jar is provisioned, and I need to use that whole copied command to launch this particular agent. Let me bring up a command prompt right here, in the same folder where the agent.jar sits, and launch it: java -jar agent.jar, the JNLP URL of my server (in case the server and client are on different machines or different IPs, you'd specify the server's IP address here; all of this shows up in the command anyway), then the secret, and the root folder of your slave node. Okay, something ran, and it says it's connected; it seems to have connected very well. So let me come back to my Jenkins instance: if you see, earlier this node was not connected, and when I refresh, now both of them are connected. So: I provisioned a Jenkins node, copied the credentials of the agent.jar along with the launch command, took it over to the other system, and ran it from there; since I don't have another system, I just used a separate directory on another drive and launched the agent from there. As long as this particular agent, this command prompt, is up and running, the agent stays connected; once I close it, the connection goes down. So we've successfully launched this agent, and this folder is now the home directory of this Jenkins node, the slave: any task that I delegate to this particular slave runs here, and it creates a workspace right here. All right, good. So let me come back and put up a new task: I'll call it "delegated job", a freestyle project, and I'm going to create a very, very simple job. I don't want it to connect to git or anything like that; let me just put in a very simple echo, "delegated to the slave". Actually, I don't like the word slave; "delegated to agent", let's put it that way. All right, "delegated to agent" sounds good. Now, how am I going to ensure this particular job runs on the agent, on the slave, that I have configured? You see this option, "Restrict where this project can be run": remember how we provisioned our slave, we gave it a label. So now I'm going to put in a job that only matches that particular label, saying whatever matches the windows label, run this job on that node. We have only the one node matching, so this job will be delegated out there. I save it, and let me build it. Again, a very simple job, there's nothing in it; I just want to demonstrate how to delegate it to an agent. And if you see, it ran successfully, and where is the workspace? Right inside our jenkins node folder: it created a new "delegated job" workspace and put it here. My primary master's jobs are under C:\Program Files (x86)\Jenkins, and this slave job ran successfully from the other drive. A very simple but very powerful concept: master-slave configuration, or distributed builds, in Jenkins. Okay, we're approaching the final section. We've done all this hard work in bringing up our Jenkins server, configuring it, putting up some jobs on it, creating users and all this stuff; now we don't want this configuration to go away. We want a very nice way of ensuring that we back up all this configuration, so that in case there is any failure, a hardware crash or
Okay, we're approaching the final section We've done all this hard work bringing up our Jenkins server, configuring it, putting up some jobs on it, creating users and all this stuff Now we don't want this configuration to go away We want a nice way of ensuring that we back up all this configuration so that in case there is any failure, a hardware crash or a machine crash, we can restore from the configuration we backed up So one quick way to do that, a slightly dirty way, would be to just take a complete backup of our C:\Program Files\Jenkins directory, because that's where our whole Jenkins configuration is present But we don't want to do that, let's use a plugin for taking the backup So let me go to Manage Jenkins, click on Available, and search for backup There are a bunch of backup plugins, and I'd recommend this one that I use myself: the Backup plugin So let me go ahead and install this plugin All right, we went ahead and installed it When I come back to Manage Jenkins, there it is: Backup Manager You will see this option once you install the plugin The first time, I do a setup I specify a folder, the folder where I want Jenkins to write the backup data, and I say the format should be zip, zip is good enough Let me give a file name template for my backup This is good I want verbose mode Do I want to shut down Jenkins during the backup One thing you've got to remember is that if a backup runs while there are too many jobs running on the server, it can slow down your Jenkins instance, because it's in the middle of copying files, and if those files are being changed at that moment it's a little bit problematic for Jenkins So typically you back up your server only when there is very little load, or you bring it down to a shutdown kind of state and then take the backup All right, I'm going to back up all these things I don't want to exclude anything I want the history, I want the Maven artifacts, and this last one I possibly don't need I just say save and then kick off the backup This runs through a bunch of steps and copies all the required files pretty fast We didn't have too many things on our server, but in case you have a lot to back up this may take a while, so let me just pause this recording and get back to you once the backup is complete There you go, the backup was successful It created a backup of all the workspaces, the configurations, the users, and so on All of that is packed down into this particular zip file So at any instance, if my system crashes, say a hard disk failure, and I bring up a new instance of Jenkins, I can use the Backup plugin to restore this configuration How do I do that I just come back to Manage Jenkins, go to Backup Manager, and say restore the Hudson, that is Jenkins, configuration
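If you'd rather script the quick-and-dirty approach mentioned above instead of using the plugin, a minimal sketch looks like this; it assumes a Linux install with the default home at /var/lib/jenkins (on the Windows demo machine the directory would be C:\Program Files\Jenkins instead):

    # stop Jenkins first so no files change mid-copy (optional but safer)
    sudo systemctl stop jenkins
    # archive the entire Jenkins home, which holds jobs, users and configuration
    tar -czf jenkins-backup-$(date +%F).tar.gz -C /var/lib/jenkins .
    sudo systemctl start jenkins

Restoring is the reverse: stop Jenkins, unpack the archive back into the home directory, and start it again.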
So, DevOps today is being implemented by most of the major organizations Whether it's a financial organization or a services organization, every organization is looking forward to the implementation and adoption of DevOps, because it totally redefines and automates the whole development process, and whatever manual effort you were putting in earlier simply gets automated with the help of these tools And it gets implemented mainly because of some important features like the CI/CD pipeline, because the CI/CD pipeline is responsible for delivering your source code into the production environment in a shorter duration of time So the CI/CD pipeline is ultimately what really helps us deliver more into the production environment Now let's talk about what exactly a CI/CD pipeline is A CI/CD pipeline is basically the continuous integration and continuous delivery concept, which is considered the backbone of the overall DevOps approach It's one of the prime things we implement when we go for a DevOps implementation on a project If I have to do a DevOps implementation, the very first and minimum automation I look for is the CI/CD pipeline So what exactly is this pipeline term all about A pipeline is a series of events or steps connected with each other, a sequence of various steps Typically, for any kind of deployment, we have a build process: we compile the source code, we generate the artifacts, we do the testing, and then we deploy to a specific environment All these steps, which we used to do manually, are what we can put into a pipeline So a pipeline is nothing but a sequence of all these steps, interconnected with each other and executed one by one in a particular order The pipeline is responsible for performing a variety of tasks, like building the source code and running the test cases, and the deployment can also be added when we go for continuous integration and continuous delivery All these steps are done in a sequence, and sequence is very important when we talk about pipelines: the same order in which you work during development is the order you put into the pipeline That's a very important aspect to be considered Now let's talk about what continuous integration is Continuous integration is also known as CI You'll see that a lot of tools are described as CI tools, and they're referring to continuous integration Continuous integration is a practice that integrates source code changes into a shared repository and automates the verification of that source code It involves build automation and test case automation, so it helps us detect issues and bugs easily and early Mind you, continuous integration does not eliminate the bugs, but it definitely helps us find them easily, because we're talking about an automated process with automated test cases The developers can then pick up those bugs and resolve them one by one It's not an automated process that removes bugs by itself; a bug is something you have to recode and fix by following the development practice
But yes, CI really helps us find those bugs quickly and get them removed Now what is continuous delivery Continuous delivery, also known as CD, is the phase in which the changes made to the code are prepared and validated before deployment It's where we confirm what exactly we want to deliver to the customer, what exactly we are moving out to the customers That's what we typically do in continuous delivery, and the ultimate goal of the pipeline is to make the deployment That's the end result, because coding is not the only thing You code the programs, you do the development, and after that it's all about the deployment, how you're going to perform it That's a very important aspect, and that's the real beauty of this: the pipeline defines how the deployments are done and executed So the ultimate goal of the pipeline is nothing but to do the deployments and proceed further Now when both these practices are placed together in order, all the steps can be referred to as one complete automated process, and this process is known as CI/CD When we work on this automation, the end result is build and deployment automation: you take care of the build, the test case execution, and the deployment Implementing CI/CD also enables the team to build and deploy quickly and efficiently, because these things happen automatically There is no manual effort involved, and so there is no scope for human error either We have frequently seen that while doing deployments manually we may miss a step or make a mistake, and that is something which gets completely removed here The process makes the teams more agile, productive, and confident, because the automation definitely gives a boost to the confidence that things are going to work perfectly fine and there are no issues present Now why exactly Jenkins Jenkins is what we typically hear about here and there as a CI tool, a CD tool So what exactly is Jenkins all about Jenkins is also known as an orchestration tool It's an automation tool, and the best part is that it's completely open source Yes, there are paid enterprise offerings like CloudBees, but there is no major difference in what's offered between CloudBees and Jenkins Jenkins is an open-source tool that a lot of organizations implement as it is We have seen plenty of big organizations that don't go for an enterprise tool like CloudBees and instead run the core Jenkins software itself This tool makes it easy for the developers to integrate their changes into the project, and that is very important
That ease of integration is the biggest benefit we get from this tool, and Jenkins is an important tool to be considered when we talk about all this automation Now Jenkins achieves continuous integration with the help of plugins That's another feature, another benefit we get, because there are so many plugins available For example, if you want an integration for Kubernetes or Docker, maybe by default those plugins are not installed, but you have the provision to install them, and those features get embedded and integrated within your Jenkins This is the main benefit of a Jenkins implementation So Jenkins is one of the best fits for building a CI/CD pipeline because of its flexibility, its open-source nature, its plugin capabilities and support, and because it's quite easy to use, with a very simple, straightforward GUI You can easily understand and work through Jenkins, and as an end result you have a very robust tool with which you can implement CI/CD for pretty much any source code or programming language Whether it's Android, .NET, Java, or Node.js, all these languages have support in Jenkins So let's talk about the CI/CD pipeline with Jenkins To automate the entire development process, a CI/CD pipeline is the ultimate solution we're looking for, and to build such a pipeline Jenkins is our best fit There are roughly six steps involved in any generic pipeline It may have other steps, maybe some additional plugins you install, but these are the basic steps for a minimum pipeline The first one: we require a Java JDK to be available on the system Most operating systems already come with a JRE, but the problem with a JRE is that it can only run things; it will not do the compilation You can run the artifacts, run the jar files, run the application and the codebase, but compilation requires javac, which comes with the JDK installed on the system That's the reason we also require the JDK And we need some understanding of executing Linux commands, because we are going to run some installation steps and processes, so that's pretty much required
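One quick way to check which of the two you actually have on the box: a JRE alone gives you the java launcher but not the compiler, so the second command below only succeeds when a JDK is installed:

    # present with either a JRE or a JDK: the runtime
    java -version
    # present only with a JDK: the compiler the build step needs
    javac -version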
Now let's talk about how to build the CI/CD pipeline with Jenkins First of all you download and install the JDK After that you can go for the Jenkins download jenkins.io/download is the official website of Jenkins The best part is that it has support for different operating systems and platforms: the generic Java package, the WAR file, plus Docker, Ubuntu, Debian, CentOS, Fedora, Red Hat, Windows, openSUSE, FreeBSD, Gentoo, and macOS Whatever kind of artifact or environment you want, you'll be able to download it there So that's the very first thing to start with: you download the generic Java package, the WAR file Then you have to execute it Download it into a specific folder Let's say you have created a folder called jenkins You go into that jenkins folder with the cd command, and there you run the command java -jar jenkins.war These are executable artifacts: jar files and WAR files can be run directly with the java command, so you don't require any separate web container or application container Here too, we run the java command and it runs the application Once that is done, you open the web browser and go to localhost with the port Jenkins uses port 8080 by default, the same default port as Apache Tomcat So once the deployment, the installation, is done, you just open localhost:8080 And if you want to reach Jenkins from elsewhere, you can use the machine's public IP address as well: the public IP with the same port will also let you start accessing the Jenkins application
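Put together as commands, that sequence looks roughly like this; the URL is the generic latest-stable WAR link from the Jenkins download site, but do confirm it against jenkins.io/download:

    # create a folder, fetch the WAR, and run it (Linux/macOS shell)
    mkdir jenkins && cd jenkins
    wget https://get.jenkins.io/war-stable/latest/jenkins.war
    java -jar jenkins.war
    # Jenkins now listens on http://localhost:8080

The same java -jar command works on Windows once the JDK is on the PATH.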
Now in the Jenkins UI you will have an option called create new jobs You click on that New item and new job are just different naming conventions for the same thing What you're going to do is create a pipeline job, so you'll have a Pipeline job type Just select that and provide your custom name, whatever pipeline name or job name you want Once that's in place, it's easy to go ahead and work on the pipeline job itself So with the Pipeline type selected and the name given, we say OK Then you scroll down and find the Pipeline section and go to the pipeline script When you select that, there are different options for how you want to manage the pipeline You have direct access, so if you want to type the pipeline script directly, you can do that Or, if you want to retrieve a Jenkinsfile, a source code management tool can be used as well So there is a variety of ways to create the pipeline job: either fetch it from a source code management tool, like a Git repository, or put the pipeline code directly in there The next thing is that we can configure and execute a pipeline job with a direct script, or, once the pipeline is selected, you can put the pipeline script, the Jenkinsfile, into your GitHub repository You may already have a GitHub link where the Jenkinsfile lives, so you can make use of that Once you point the job at the GitHub link, you save the configuration and keep the changes, and the job will pick up the pipeline script from GitHub, since you have specified that it should use the Jenkinsfile from that repository Once that is done, the next thing you do is click Build Now, and you will be able to see how the build process is performed You can click on the console output to get all the logs of whatever pipeline steps are getting executed inside Those are the different steps involved And the sixth step: when you run Build Now, the source code will be checked out and downloaded before the build, and the pipeline proceeds from there Later on, if you want to change the URL of the GitHub repository, you can configure the existing job again and change that link whenever you require You can also clone the job whenever you need, which is another nice part of this Then you have the advanced settings, where you put in your GitHub repository URL With that, the settings are in place and the Jenkinsfile is fetched, and when you run Build Now you'll see a lot of steps and configuration going on Then there is checkout scm It's a declaration, and when checkout scm is there it checks out the specified source code After that you go to the log, and you will be able to see each and every stage being built and executed Okay, so now let's walk through a demo of the pipeline This is the Jenkins portal You can see there is an option called create a job You can either click on New Item or click on create a job I'm going to name it pipeline and select the Pipeline job type You also have Freestyle, GitHub Organization, and Multibranch Pipeline; these are the different options available, but I'm going to continue with Pipeline here When I select Pipeline and say OK, I see a configuration page related to the pipeline Now the important part here is that you have all the general and build-trigger
options, which are similar to a freestyle job, but the build step and post-build step sections are completely removed because of the pipeline introduction Here you have the option to put in the pipeline script directly There are even built-in examples; for instance, a GitHub plus Maven sample, where you get some predefined steps If you run it, it will work smoothly and check out some source code But how are we going to integrate the Jenkinsfile into the version control system Because that's the ideal approach we should be following when we create a CI/CD pipeline So I'm going to select Pipeline script from SCM, then go with Git In there, Jenkinsfile is the name of the pipeline script file, and I'm going to put my repository in here This repository of mine has a Maven build pipeline, with steps for CI, for the build and deployments, and that's what we'll follow here Now if it is a private repository, you can definitely add your credentials This is a public, personal repository, so I don't have to put in any credentials, but you can always add credentials with the Add button and configure whatever private-repository credentials you want Once you save the configuration, it gives you a page with Build Now, delete pipeline, reconfigure, and all these different options So we're going to click Build Now, and when I do that the pipeline is fetched and processed You may not get the complete stage view right away because it's still running, but you can see the checkout stage is done and it's going on to the build That's one of the steps Once the build is done, it continues with the next, further steps You can also go to the console output log, either by clicking this or by clicking Console Output, to check the complete log of what's happening Or you can see the stage-wise logs, which is also very important, because the complete log can involve a lot of steps and a lot of output, and if you want to see a specific log of a specific stage, that's where this comes into the picture As you can see, all the different steps, the test case execution, the SonarQube analysis, the archiving of artifacts, the deployment, and even the notification, are all part of this one complete pipeline The whole pipeline is done, you get a stage view showing success, and the artifact is also available to download, so you can download this WAR file, the web application, from here So this is what a typical pipeline looks like, and this is how the complete automation really looks This is a very important aspect, because it really helps us understand how pipelines can be configured and run, and with pretty much the same steps you will be able to automate any kind of pipeline
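The transcript doesn't show the demo repository's actual Jenkinsfile, but a minimal declarative sketch with stages along the same lines might look like this; the Maven tool name and the archive path below are assumptions for illustration, not the demo's real configuration:

    // a hedged Jenkinsfile sketch; 'maven-3' must match a Maven installation
    // configured under Manage Jenkins > Tools, and target/*.war assumes a WAR build
    pipeline {
        agent any
        tools { maven 'maven-3' }
        stages {
            stage('Checkout') {
                steps { checkout scm }              // pull the repo the Jenkinsfile lives in
            }
            stage('Build') {
                steps { sh 'mvn -B clean package' } // compile and package
            }
            stage('Test') {
                steps { sh 'mvn test' }             // run the test cases
            }
            stage('Archive') {
                steps { archiveArtifacts artifacts: 'target/*.war' }
            }
        }
    }

Stages like the SonarQube analysis or a deployment step would slot in the same way, each as another stage block.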
So that was the demo of building a simple pipeline with Jenkins, and through it we understood how exactly CI/CD pipelines can be configured, used, and mastered DevOps has emerged as a transformative approach, fusing development and operations to streamline workflows, enhance collaboration, and boost efficiency This dynamic fusion has given rise to a multitude of groundbreaking projects that are reshaping the industry So in this rundown of the top 10 DevOps projects, we'll delve into the innovative solutions and tools that are catalyzing progress, from automation and containerization to continuous integration and deployment These projects not only facilitate agility but also drive excellence in software delivery, ensuring that DevOps remains at the forefront of modern technology So join us as we embark on a journey through the most influential DevOps initiatives of our time With that said, if these are the types of videos you would like to watch, then hit that subscribe button and the bell icon to get notified And if you're a professional with a minimum of one year of experience and an aspiring DevOps engineer looking for online training and certification from prestigious universities, in collaboration with leading experts, then search no more: SimplyLearn's Postgraduate Program in DevOps from Caltech, in collaboration with IBM, should be your right choice For more details, use the link mentioned in the description box below So let's start with why DevOps skills are crucial Understanding DevOps is vital for optimizing the software development life cycle, and DevOps engineers need to master several key skills One, Linux proficiency: many firms prefer Linux for hosting apps and managing configuration systems, and it's essential for DevOps engineers to be well-versed in Linux, as it's the foundation of tools like Chef, Ansible, and Puppet Two, continuous integration and continuous delivery: CI ensures teams collaborate using a single version control system, while CD automates design, testing, and release, improving efficiency and reducing errors Three, infrastructure as code: automation scripts provide swift access to the necessary infrastructure, a critical aspect with containerization and cloud technologies; IaC manages configuration, executes commands, and swiftly deploys application infrastructure Four, configuration management: tracking software and operating system configurations ensures consistency across servers, and tools like Ansible, Chef, and Puppet simplify this process, making it efficient And five, automation: DevOps aims for minimal human intervention, maximizing efficiency, so familiarity with automation tools like Git, Jenkins, and Docker is essential for DevOps engineers; these tools streamline development processes and enhance productivity Moving on to the first project of the day, we have unlocking efficiency of Java applications with Gradle Meet Gradle, the versatile build automation tool transcending platforms and languages This project helps you start on a journey of Java application creation, breaking it into modular subprojects and more The main aim of this project is to help you master project initiation of a Java application, adeptly build it, and generate meticulous test reports You will be well-versed in running Java applications, crafting archives, and elevating your Java development prowess So dive in to transform your coding skills with Gradle The source code for this project is linked in the description box below
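To give a feel for that project, the usual Gradle flow looks something like the sketch below; note that gradle init may ask a few interactive questions (DSL, project name) in recent versions, so treat this as a rough outline:

    # scaffold a runnable Java application project
    gradle init --type java-application
    # compile, run the tests, and produce the archives under build/
    gradle build
    # run the tests alone; the HTML report lands in build/reports/tests
    gradle test
    # run the application itself
    gradle run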
Moving on to project number two: unlock robust applications with Docker for web servers Docker, the go-to container technology, revolutionizes service and app hosting by virtualizing operating systems and crafting nimble containers This project focuses on creating a universal base image and helps you collaborate with fellow developers in diverse production landscapes You will be dealing with web app foundations in Python, Ruby, and Meteor So master this project and you will wield Dockerfile efficiency like a pro, slashing build times and simplifying setups Say goodbye to lengthy Dockerfile creation and resource-heavy downloads The source code for this project is also mentioned in the description box below, so don't forget to check it out Moving on to project number three, we have master CI/CD pipelines using Azure In this Azure project we harness Azure DevOps to create efficient CI/CD pipelines The project mainly focuses on leveraging Azure DevOps Projects: we deploy applications seamlessly across Azure services like App Service, virtual machines, and Azure Kubernetes Service, or AKS Utilizing Azure's DevOps Starter, we set up ASP.NET sample code, explore preconfigured CI/CD pipelines, commit code changes, and initiate CI/CD workflows Additionally, we fine-tune monitoring with Azure Application Insights for enhanced performance insights The source code for this project is also mentioned in the description box below Moving on to the next project: elevating Jenkins communication, the Remoting project The Jenkins Remoting project is all about enhancing Jenkins' communication capabilities It's an endeavor to bolster the Jenkins remoting library, creating a robust communication layer This project incorporates a spectrum of features, from TCP protocols to efficient data streaming and procedure calls As part of this project, you will start on the exciting journey of making Jenkins remoting compatible with bus technologies like ActiveMQ and RabbitMQ To succeed in this project, a strong grasp of networking fundamentals, Java, and message queues is your arsenal Dive in and join us in elevating the way Jenkins communicates with the world Check out the link mentioned in the description box below for the source code Moving on to project number five: automated web application deployment with AWS, your CD pipeline project In this project you will create a seamless continuous delivery pipeline for a compact web application Your journey begins with source code management through a version control system Next, discover the art of configuring a CD pipeline, enabling automatic web application deployment whenever your source code undergoes changes, embracing the power of GitHub, AWS Elastic Beanstalk, AWS CodeBuild, and AWS CodePipeline This project is your gateway to streamlined, efficient software delivery The source code for this project is linked in the description box below Moving on to the next project: containerized web app deployment on GKE, scaling with Docker This project will help you discover the power of containerization You will learn how to package a web application as a Docker container image and deploy it on a Google Kubernetes Engine, or GKE, cluster, and you can watch your app scale effortlessly to meet user demand This hands-on project covers packaging your web app into a Docker image, uploading it to Artifact Registry, creating a GKE cluster, managing autoscaling, exposing your app to the world, and seamlessly deploying newer versions You get to unlock the world of efficient, scalable web app deployment on GKE The source code for this project is linked in the description box below
Moving on to project number seven: mastering version control with Git In the world of software development, mastering a version control system is paramount Version control enables code tracking, version comparison, seamless switching between versions, and collaboration among developers Your journey in this project begins with the fundamental art of saving code in a VCS, taking the scenic route to set up a repository You can then start on a quest through code history, unraveling the mysteries of version navigation Navigating branching, a deceptively intricate task, is next on your path By the end of this project you will be fully equipped to conquer Git, one of the most powerful version control tools in a developer's arsenal The source code for this project is mentioned in the description box below Moving on to the next project: effortless deployment, running applications with Kubernetes The major focus of this project is a straightforward web service that handles user messages, akin to a voicemail system for leaving messages Your mission, you ask You get to dockerize this application and then deploy it seamlessly with Kubernetes By mastering this fundamental step, you will unlock the power to run your application in Docker containers, simplifying the deployment process The source code for this project is mentioned in the description box below, so don't forget to check it out Moving on to project number nine: mastering Terraform project structure This project will help you maintain and extend the efficiency of Terraform projects in everyday operations, where a well-structured approach is essential The project unveils the art of organizing Terraform projects based on their purpose and complexity You'll harness the power of key Terraform features, including variables, data sources, provisioners, and locals, to craft a streamlined project structure By the end, your project will effortlessly deploy an Ubuntu 20.04 server on DigitalOcean, configure an Apache web server, and seamlessly point your domain to it Level up your Terraform game with proper project structuring and practical application Check out the link mentioned in the description box below for the source code Moving on to the last project of the day, we have efficient Selenium project development and execution In the world of test automation, Selenium projects play a pivotal role They enable seamless test execution, report analysis, and bug reporting This proficiency not only accelerates product delivery but also elevates client satisfaction By the end of this project you will master the art of building Selenium projects, whether through a plain Java project or a Maven project, showcasing your ability to deliver high-quality results efficiently DevOps has become an essential skill set for today's technology professionals, with many organizations seeking out talented individuals who can help them build and maintain their infrastructure If you are looking to become a DevOps engineer, this video is for you In this video we'll be covering some of the most common interview questions for a DevOps engineer, as well as some tips on how to answer them successfully We will cover infrastructure as code and CI/CD pipelines, along with many other important topics You'll often be asked about your experience with IaC tools like Terraform and Ansible, as well as your knowledge of cloud providers like AWS, Google Cloud, or Microsoft Azure We will also discuss tools like Jenkins, Travis CI,
or CircleCI, as well as concepts of containerization and Kubernetes There's a lot to learn and a lot to discuss in our DevOps engineer interview questions video, so without further ado, let's get started Also, if you are keen to learn more about DevOps and its concepts and want to attain a job in a renowned company, then SimplyLearn's Caltech Post Graduate Program in DevOps could be the right choice for you This comprehensive program will equip you with the knowledge and skills needed to master DevOps principles, tools, and practices So dive deep into containerization, orchestration, and automation frameworks like Docker, Kubernetes, and Jenkins Get hands-on experience with cloud platforms and learn how to leverage the power of infrastructure as code So don't miss out on this chance to transform your career and become a valuable asset to any organization Click the link in the description to discover more about this DevOps course Now let's jump into the video But before moving ahead, let's first understand what DevOps is DevOps is a set of activities and approaches aimed at enhancing the effectiveness and excellence of software development, delivery, and deployment It brings together the realms of software development (Dev) and information technology operations (Ops) The main goal of DevOps is to encourage seamless collaboration between the development and operations teams through the entire software development life cycle It achieves this through the utilization of automation, continuous integration, and continuous delivery and deployment, thereby accelerating the process and minimizing errors in software development Now let's explore who a DevOps engineer is A DevOps engineer is an expert in developing, deploying, and maintaining software systems using DevOps practices They work closely with IT operations, developers, and stakeholders to ensure efficient software delivery Their responsibilities include implementing automation, continuous integration, and continuous delivery or deployment practices, as well as resolving issues throughout the development process DevOps engineers are proficient in various tools and technologies, such as source code management systems, build and deployment tools, and virtualization and container technologies But how exactly do you become a DevOps engineer Depending on the business and the individual role, different criteria for becoming a DevOps engineer may exist However, some specific fundamental skills and certifications are frequently needed or recommended First is an excellent technical background: DevOps engineers should be well-versed in IT operations, system administration, and software development Second is experience with DevOps tools and methodologies: DevOps engineers should have experience with various DevOps technologies and processes, including version control systems, build and deployment automation, containerization, cloud computing, and monitoring and logging tools Third is scripting and automation skills: DevOps engineers should have strong scripting skills and be proficient in using tools such as Bash, Python, or PowerShell to automate tasks and processes Fourth is cloud computing experience: DevOps engineers should have experience working with cloud platforms such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform And in the end, certification: some organizations may require DevOps engineers to hold relevant certifications, such as Certified DevOps Engineer (CDE), Certified Kubernetes Administrator (CKA), or AWS Certified DevOps Engineer Professional Well, now let us begin
with some really important DevOps interview questions and answers, as we have already covered the road map of how to become a DevOps engineer So the first question that we are coming up with is: how is DevOps different from the agile methodology Well, DevOps is a culture that allows the development and operations teams to work together This results in continuous development, testing, integration, deployment, and monitoring of software throughout the life cycle Agile, on the other hand, is a software development methodology that focuses on iterative, incremental, small, and rapid releases of software, along with customer feedback Basically, agile addresses gaps and conflicts between the customer and the developers, while DevOps addresses gaps and conflicts between the developers and IT operations Now the second question: which are some of the most popular DevOps tools Well, some of the most popular DevOps tools include Selenium, Puppet, Chef, Git, Jenkins, Ansible, and Docker, which are considered really important in today's world if you want to become a successful DevOps engineer The third question: what is the difference between continuous delivery and continuous deployment We will address this one by one Continuous delivery ensures that you can safely deploy to production, whereas continuous deployment ensures that every change that passes automated testing is deployed to production automatically, instead of manually Continuous delivery ensures business applications are delivered as they were expected, while continuous deployment makes sure that software development and other processes, like releases, run smoothly and faster on a continuous basis In continuous delivery we also make changes in a production-like environment through rigorous automated testing, but in continuous deployment there is no explicit approval from a developer, so it requires a well-developed culture of monitoring Question four: what is the role of configuration management in DevOps Configuration management enables the management of, and changes to, multiple systems It standardizes resource configuration, which in turn manages the infrastructure It also helps with the administration and management of multiple servers and maintains the integrity of the entire infrastructure Next: what is the role of AWS in DevOps Well, AWS plays the following roles in DevOps First, flexible services: it provides ready-to-use, flexible services without the need to install or set up the software Second, built for scale: you can manage a single instance or scale to thousands using AWS services Third, automation: AWS lets you automate tasks and processes, giving you more time to innovate Then comes security: using AWS Identity and Access Management, you can set user permissions and policies in your organization And then comes the large partner ecosystem: AWS supports a large ecosystem of partners that integrate with and extend AWS services Now the sixth question: name three important DevOps KPIs The three very important KPIs are as follows Mean time to failure recovery: the average time taken to recover from a failure Deployment frequency: the frequency at which deployments occur Percentage of failed deployments: the number of times deployments fail Now the seventh question: what are the benefits of using version control Here are some of the benefits All team members are free to work on any file at any time, and the version control system will later allow the team to integrate all of the modifications into a single version The VCS asks us to provide a
summary of what was changed every time we save a new version of the project, and we also get to examine exactly what was modified in the content of the files As a result, we will be able to see who made what changes to the project Inside the VCS, all the previous variants and versions are properly stored We will be able to request any version at any moment, and we can retrieve a snapshot of the entire project at our fingertips A VCS that is distributed, such as Git, lets all the team members retrieve a complete history of the project This allows developers or other stakeholders to use the local Git repositories of any of their teammates, even if the main server goes down at any point in time So the next question: what is the blue-green deployment pattern This is a method of continuous deployment that is commonly used to reduce downtime, where traffic is transferred from one instance to another In order to release a fresh version of the code, we replace the old code with the new code version: the new version lives in a green environment, and the old version lives in a blue environment After verifying the changes in the new version, we switch traffic from the old instance to the new one, so the newer version of the instance is the one being executed Next: what is continuous testing Continuous testing constitutes the running of automated tests as part of the software delivery pipeline, to provide instant feedback on the business risk present in the most recent release, to prevent problems as changes move through the software delivery life cycle, and to allow development teams to receive immediate feedback Every build is continually tested in this manner This results in a significant increase in speed and developer productivity, as it eliminates the requirement of rerunning all the tests after each update and rebuilding the project Now let's move to the next question: what is automation testing Test automation is the process of automating a manual procedure in order to test an application or system Automation testing entails the use of independent testing tools that allow you to develop test scripts that can be run repeatedly without the need for human interaction The next question: how do you automate testing in the DevOps life cycle Developers are obliged to commit all source code changes to a shared repository Every time a change is made in the code, a continuous integration tool like Jenkins will grab it from this common repository and deploy it for continuous testing, which is done by tools like Selenium So why is continuous testing important for DevOps Any modification to the code can be tested immediately with continuous testing This prevents concerns like quality issues and release delays that might occur when big-bang testing is postponed until the end of the cycle In this way, continuous testing allows for high-quality and more frequent releases So the next question: how do you push a file from your local system to a GitHub repository using Git First, connect the local repository to your remote repository using git remote add origin, and second, push your file to the remote repository using git push
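Spelled out as commands, those two steps look like this; the repository URL and branch name are example placeholders:

    # link the local repository to the remote one on GitHub
    git remote add origin https://github.com/your-user/your-repo.git
    # push the committed file(s) up to that remote
    git push -u origin master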
The next question: what is the process for reverting a commit that has already been pushed and made public There are two ways you can revert a commit One: remove or fix the bad file in a new commit and push it to the remote repository And two: create a new commit that undoes all the changes that were made in the bad commit; the git revert command can be used for this Next: explain the difference between git fetch and git pull Git fetch only downloads new data from a remote repository, whereas git pull updates the current HEAD branch with the latest changes from the remote server The second difference: git fetch does not integrate any new data into your working files, whereas git pull downloads the new data and integrates it with the current working files With git fetch, users can run a fetch at any time to update the remote-tracking branches, whereas git pull tries to merge the remote changes with your local ones Now coming to the next question: explain the concept of branching in Git Suppose you are working on an application and you want to add a new feature to the app You can create a new branch and build the new feature on that branch By default you always work on the master branch, and the circles on the branch diagram represent the various commits made on that branch After you're done with all the changes, you can merge the branch back into the master branch The next question: explain the master-slave architecture of Jenkins The Jenkins master pulls the code from the remote GitHub repository every time there is a code commit It distributes the workload to all the Jenkins slaves, and, when requested by the Jenkins master, the slaves carry out the builds and tests and produce test reports The next question: which file is used to define a dependency in Maven: build.xml, pom.xml, dependency.xml, or version.xml The correct answer is pom.xml The next question that we are going to cover: explain the two types of pipelines in Jenkins, along with their syntax Jenkins provides two ways of developing pipeline code: scripted and declarative A scripted pipeline is based on Groovy script as its domain-specific language, and one or more node blocks do the core work throughout the entire pipeline The syntax is: execute the pipeline, or any of its stages, on any available agent; define the build stage and perform its steps; define the test stage and perform its steps; define the deploy stage and perform its steps A declarative pipeline provides a simple and friendly syntax to define a pipeline Here the pipeline block defines the work done throughout the pipeline The syntax follows the same shape: execute the pipeline, or any of its stages, on any available agent; define the build stage and its steps; then the test stage and its steps; then the deploy stage and its steps
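To make the two syntaxes concrete, here's a minimal side-by-side sketch; the stage bodies are just illustrative echo steps, not a real build:

    // scripted pipeline: plain Groovy inside node blocks
    node {
        stage('Build')  { echo 'build steps here' }
        stage('Test')   { echo 'test steps here' }
        stage('Deploy') { echo 'deploy steps here' }
    }

    // declarative pipeline: the same three stages in the structured block syntax
    pipeline {
        agent any    // run on any available agent
        stages {
            stage('Build')  { steps { echo 'build steps here' } }
            stage('Test')   { steps { echo 'test steps here' } }
            stage('Deploy') { steps { echo 'deploy steps here' } }
        }
    }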
And the last question for this video: explain how you can set up a Jenkins job To create a Jenkins job, we go to the Jenkins top page, choose the New Item option, and then select build a freestyle software project The elements of this freestyle job are: optional triggers for controlling when Jenkins builds; optional steps for gathering data from the build, like collecting Javadoc and testing results and/or archiving artifacts; a build script that actually does the work; and an optional source code management system, like Subversion or CVS Well, there you go These are some of the most common DevOps interview questions that you might come across while attending an interview As a DevOps engineer, in-depth knowledge of processes, tools, and relevant technologies is essential, and these DevOps interview questions and answers will help you gain some knowledge of these aspects In addition, you must also have a holistic understanding of the products, services, and systems in place Here's an inspiring success story from one of our satisfied learners who has propelled their career with DevOps This can help you boost your confidence and make a firm decision about this field, so do watch the video That's a wrap-up on this full course, guys If you have any doubts or questions, ask them in the comments section below Our team of experts will reply to you as soon as possible Thank you, and keep learning with SimplyLearn Staying ahead in your career requires continuous learning and upskilling Whether you're a student aiming to learn today's top skills or a working professional looking to advance your career, we've got you covered Explore our impressive catalog of certification programs in cutting-edge domains, including data science, cloud computing, cyber security, AI, machine learning, and digital marketing, designed in collaboration with leading universities and top corporations and delivered by industry experts Choose any of our programs and set yourself on the path to career success Click the link in the description to know more Hi there If you liked this video, subscribe to the SimplyLearn YouTube channel and click here to watch similar videos To nerd up and get certified, click here