Transcript for:
Introduction to DevOps and Its Practices

If you know my videos, you know that I cover various DevOps tools and concepts on this channel. But the main question remains: what exactly is DevOps? And that's what we're going to focus on in this video.

First, we'll see why DevOps is even needed in the application release process, and what challenges in this process DevOps solves. Of course, we will also talk about what the DevOps concept actually is.

We will also look at DevOps as a separate role and how it evolved, as well as the tasks and responsibilities of a DevOps engineer. And finally, we will briefly talk about SRE and how SRE fits into the whole DevOps process.

DevOps is a relatively new concept which has been gaining a lot of popularity and taking over the traditional way of software development. The DevOps term itself is so broad and includes so many things that it has become difficult to define it exactly and to clearly set the boundaries of DevOps compared to other IT fields. So it encompasses a lot of things.

The simplest definition is that DevOps is an intersection of development and operations. But where do the boundaries of DevOps start and end? Which part of development is not DevOps, and which part of operations is not DevOps? And why was there even a need for something between development and operations?

Development and operations are the two main components in the whole application release process. So let's look in detail at this release process, starting from the very beginning. Whenever we develop an application, we always have the same process of delivering that application to the end users.

So this is the main goal, no matter whether you use waterfall, agile, or whatever approach: at its core, you create an application and you want to deliver it to your end users so that they can use it. So let's say you have a great idea for a cool application. You define its functionality, or in other words what features it will have, you code it, you test it, and now that you have a tested application, you want to actually deploy it on a public server and let users access it.

For that, you build and package your application in some kind of executable form so that it can run. You configure the public server with everything it needs, like installing any tools the application depends on, and deploy your application there.

You configure firewall rules to allow access to the application on the server, and the application is launched. Users can start using it. So that's the simplified basis of any application release. But that's not the end of the journey. While it's in use, you of course have to check in on your application.

Is everything running fine? Are users experiencing any issues? Maybe there are bugs in the application that you didn't catch when testing. Can the application handle high user loads? And so on. So after launching it, you have to actually make sure that your application is accessible and usable by end users. And if there are any issues for users, of course, you should fix them. Now, that was the initial launch of your application, but the application development is not done yet.

If you see that users like your application, you will want to make it even cooler: add new features, maybe optimize the performance by getting better servers or making your application faster, and so on. So you still have a lot of things to do. And every time you improve your application, either the code itself or the server configuration, you want to make this improvement accessible to the end users immediately.

So after the initial launch, you do multiple updates to your application. And to keep track of these updates, you version those changes. There are many ways to version changes to the application. One common way of versioning is with three numbers: one for major changes, like replacing the framework you use for coding, another one for minor changes, like adding one small feature, and one for quick, small changes or small bug fixes.
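To make that concrete, here's a minimal sketch in Python of that three-number (semantic versioning) scheme. The function name and the example versions are just illustrations, not part of any specific tool:

```python
# Minimal sketch of semantic versioning: MAJOR.MINOR.PATCH
def bump(version: str, part: str) -> str:
    major, minor, patch = (int(n) for n in version.split("."))
    if part == "major":   # breaking change, e.g. you replace the framework
        return f"{major + 1}.0.0"
    if part == "minor":   # a new, backwards-compatible feature
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # quick small change or bug fix

print(bump("1.4.2", "minor"))  # -> "1.5.0"
print(bump("1.5.0", "patch"))  # -> "1.5.1"
```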

And you do that over and over again: you have an idea for an improvement, you implement it in code, you test it, build and package it, you deploy it, and once it's released, you observe it in production to see whether there are any new improvement possibilities or any issues that need to be fixed right away. So this gives you a process of continuous delivery of changes and an endless cycle of improvements to your application.

And DevOps is about making this process of continuous delivery fast and with minimal errors and bugs. So with DevOps, improvements get created and delivered to users fast, but those improvements are also of high quality and well tested. And that is a big challenge: quickly delivering high-quality code. Now let's see exactly what challenges teams may face during this process and which ones DevOps tries to solve.

During this whole release process, we have roadblocks and friction that slow down the process, make it take too much effort, and allow errors to slip through all the way to production. Now, what are the friction points and roadblocks in the release process? The first and most important challenge is miscommunication and lack of collaboration between developers and operations. So releasing an application has two main parts.

You code the application, and you deploy and run the application. Developers are responsible for coding; operations are responsible for running the application. And between these two, there might be a gap of "I wrote an application, but I can't run it" or "I'm running the application, but I don't know how it works."

So developers would code without considering where or how the code will be deployed, while operations would try to deploy without really understanding what they're deploying and why, or how the application even works. And this would result in miscommunication between the two. Developers finish coding, but the deployment guide for the operations team is not good enough or not well documented enough.

So the operations team struggles to deploy it, and the release takes longer. Or developers finish coding, but the feature cannot be deployed because it has a lot of issues, so operations throws it back with improvement suggestions.

This kind of miscommunication could stretch release periods to days and weeks, and in complex, badly maintained projects, maybe even months. So between the point when the developer is done with the feature and when operations starts deploying it, there is no clearly defined, automated handover process. It's based on a complex bureaucratic process of which checklists need to be completed, what needs to be documented, who needs to manually approve what for the release, and so on.

So there are no streamlined or automated processes here. Apart from miscommunication between development and operations, in a traditional setup where one team is only responsible for development and another team only for operations, these two have seemingly different incentives that make it hard for them to work together.

Developers want to push out new features fast; that's their incentive. Operations, on the other hand, want to make sure those changes don't break anything, because operations are incentivized to maintain stability in production. Their main focus is to make sure the application is available, doesn't crash, and doesn't show 500 errors to the users, and so on.

This means that operations need to resist the speed of releases and check all aspects of a new release to make sure it's 100% safe, which again slows down the process, especially considering that operations don't really understand the code or the application. So it's even more effort for them to evaluate a new release.

So for example, let's say developers developed a new feature which was released, but this feature consumes so many resources in the production environment that the servers get overloaded and the application crashes. Now the operations team needs to fix that. And because it's operations who need to put out the fires when something like this goes wrong, developers may not be as careful as operations about the changes they release.

And they again focus on releasing new features as fast as possible without really thinking much about stability. So even though the main common goal of everyone in a company should be to deliver high-quality applications to the end users fast, in practice the more immediate goal for each role is to do their own job.

And the developers' job is to quickly create new features and push them out, while the operations job is to maintain system stability and resist new changes being pushed out. And this gives us a conflict of interest.

So this kind of setup naturally makes it difficult for those two to collaborate. Another showstopper when releasing a feature is security. Just like the operations team carefully evaluates any changes to make sure they won't affect system stability, the security team will evaluate any changes to make sure they don't affect system security. And in a traditional setup, this is the same manual, bureaucratic process as with operations, which takes days or weeks and slows down the release process. As I mentioned, DevOps is about removing any roadblocks that slow down the process, so it includes this one as well. However, even though this is part of the DevOps solution, a separate term was created for it, called DevSecOps, in order to highlight and remind teams of the importance of security, because it somehow got left out. I actually have a separate dedicated video on DevSecOps, which you can also check out if you're interested.

Now, adding to the list of showstoppers is application testing. In many projects, there are separate teams or roles for testers who test the application changes on different levels, like testing just the feature, testing the whole application, testing on multiple environments, etc.

And often these tests are done manually when teams cannot totally rely on their automated tests alone. And only after manual testing is done can the change be released. And even though this may not be done by a development or operations role, but rather by a separate tester role, it is an important part of the release process and may also slow it down considerably.

As I mentioned, many of the tasks during the release process, like testing, security checks, deployment, et cetera, used to be done manually. For example, operations would do most of the operations tasks manually, either by directly executing commands on the servers to install tools, configure stuff, and do patches, or by having scripts or small programs that they execute. But in both cases, this is manual work. So the application release tasks are not automated.

So we're manually deploying the application, manually preparing the deployment environment (creating infrastructure, configuring servers, etc.), manually configuring Jenkins jobs, manually configuring access to the servers, to the Jenkins builds, to all the tools, and so on. And this manual work is slow and more error-prone because of human error. Plus, with manual work you have the disadvantage that knowledge sharing is very difficult.

Because the people who do the tasks would have to document them, and others would have to read that documentation. It's also not very transparent, because it's hard to trace who executed what and when. And finally, when infrastructure configuration and so on is done manually, if something happens to the infrastructure, it may be really hard to recover and replicate the exact state fast.

You would have to remember exactly what was done on the servers, and in which order, to get back to that previous infrastructure state. So you see, the main characteristic of all these issues is that they all slow down the release cycle and create roadblocks along the way.

And you also see, in the case of security and testing, that DevOps may even go beyond just development or just operations responsibilities and tasks. And that's why, to understand DevOps, instead of focusing on the name and what it means, we're focusing on what it tries to achieve. DevOps tries to remove all these roadblocks and anything that slows down the release process, whatever that may be.

And instead of manual, inefficient processes, it helps create fully automated, streamlined processes for release cycles. And this can be done step by step, removing one roadblock at a time until you have a fully optimized and automated DevOps process that makes your application releases super easy. There are many companies that have optimized the process to the level that they can release multiple times a day.

Of course, not every project needs multiple releases a day, but having this kind of streamlined release process is obviously beneficial for everyone. So how does DevOps help achieve this and solve all these challenges? Well, by the official definition, and this was the original idea of DevOps, DevOps is a combination of cultural philosophies, practices, and tools for doing exactly that.

So DevOps is not just one set of tools or one specific concept; it's a combination of anything that creates a process of releasing software fast and with high quality. And the main part of the concept was that developers and operations people should work together more often, talk to each other more often, and collaborate better to achieve that.

But actually this definition is too broad and too high level and makes it hard to imagine how it works in practice. So it's just not specific enough. So naturally different companies implemented DevOps in different ways.

So the actual implementation of DevOps looked pretty different from company to company. But as companies started adopting it, it gradually got a more concrete form, with some common patterns across many companies. And one of these patterns was that DevOps evolved into an actual role called a DevOps engineer, where either developers are doing DevOps as a job next to development, or operations are doing it, or someone is doing DevOps exclusively as their only job. And the set of technologies that were used to implement the DevOps principles became DevOps technologies, which DevOps engineers now need to learn.

And I understand that many people resist the idea of a DevOps engineer, and the creators of the DevOps concept didn't intend for it to be used this way. But the reality is often different from the theory. We see that the concept was adjusted and bent to meet the needs of the end goal, and the DevOps engineer role is what came out of it.

And that DevOps role is responsible for creating a streamlined release process without any roadblocks slowing down the release. That's why at the center of DevOps is the well-known continuous integration / continuous delivery (CI/CD) process. So let's see exactly what makes up a fully streamlined CI/CD pipeline, what tools and concepts you need to learn as a DevOps engineer, what tasks and responsibilities the role has, as well as where the line and boundaries of DevOps are compared to development and to operations.

It all starts with the application. The development team will program an application with some technology stack: different programming languages, build tools, etc. And they will of course have a code repository to work on the code as a team. One of the most popular ones today is Git. Now, you as a DevOps engineer will not be programming the application.

But you need to understand the concepts of how developers work, which Git workflow they are using, how the application is configured to talk to other services or databases, as well as the concepts of automated testing, and so on. Now, that application needs to be deployed on a server so that eventually users can access it, right?

That's why we're developing it. So we need some kind of infrastructure: on-premise servers or cloud servers. And these servers need to be created and configured to run our application.

Again, you as a DevOps engineer may be responsible for preparing the infrastructure to run the application. And since most of the servers where applications run are Linux servers, you need knowledge of Linux, and you need to be comfortable using the command line interface, because you will be doing most of the work on the server through it. So that means knowing basic Linux commands, installing different tools and software on servers, understanding the Linux file system and the basics of how to administer a server, how to SSH into the server, and so on.
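Just to give you an idea of what that can look like when scripted, here's a minimal sketch that SSHes into a server and runs a few basic commands, using the Python paramiko library. The hostname, user, and key path are just placeholders:

```python
# Sketch: SSH into a server and run a few basic Linux commands.
# Host, user and key path are placeholders for illustration only.
import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("app-server.example.com", username="deploy",
               key_filename=os.path.expanduser("~/.ssh/id_rsa"))

for cmd in ["uname -a", "df -h /", "systemctl status nginx --no-pager"]:
    _, stdout, _ = client.exec_command(cmd)
    print(f"$ {cmd}\n{stdout.read().decode()}")

client.close()
```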

You also need to know the basics of networking and security, for example to configure firewalls to secure the application, but also to open some ports to make the application accessible from outside, as well as to understand how IP addresses, ports, and DNS work. However, to draw a line here between IT operations and DevOps: you don't have to have advanced operating system, networking, or security skills and be able to administer the servers from start to finish. There are dedicated professions like network and system administrators, security engineers, and so on that really specialize in one of these areas. So your job is to understand the concepts and know all of this to the extent that you're able to prepare the server to run your application, but not to completely take over managing the servers and the whole infrastructure.
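And to make those networking basics a bit more tangible, here's a tiny sketch that checks whether a hostname resolves via DNS and whether a given port accepts connections. The host and port are, again, just example values:

```python
# Sketch: DNS lookup plus a simple TCP port check.
import socket

host, port = "myapp.example.com", 443      # example values

ip = socket.gethostbyname(host)            # DNS: name -> IP address
print(f"{host} resolves to {ip}")

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(3)
    result = sock.connect_ex((ip, port))   # 0 means the port accepts connections
    print(f"port {port} is", "open" if result == 0 else "closed or filtered")
```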

Nowadays, as containers have become the new standard, you will probably be running your application as containers on a server. This means you need to generally understand concepts of virtualization and containers and also be able to manage containerized applications on a server. One of the most popular container technologies today is Docker, so you definitely need to learn it.
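As a small taste of what managing containers programmatically can look like, here's a minimal sketch using the Docker SDK for Python (the docker package). The image name, container name, and port mapping are just examples:

```python
# Sketch: start, list and stop a containerized app with the Docker SDK for Python.
import docker

client = docker.from_env()

container = client.containers.run(
    "nginx:1.25",            # example image
    detach=True,
    name="demo-web",
    ports={"80/tcp": 8080},  # map container port 80 to host port 8080
)

for c in client.containers.list():
    print(c.name, c.image.tags, c.status)

container.stop()
container.remove()
```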

Great. So now we have developers who are creating new features and bug fixes on one side, and we have infrastructure, or servers, which are managed and configured to run this application on the other. The question now is how to get these features and bug fixes from the development team to the servers to make them available to the end users. So basically, how do we release the new application versions? And that's where the main tasks and responsibilities of DevOps come in.

With DevOps, the question is not just how we do this in any possible way, but how we do this continuously and in an efficient, fast, and automated way. So first of all, when the feature or bug fix is done, we need to run the tests and package the application as an artifact, a jar file or a zip, etc., so that we can deploy it.

That's where build tools and package manager tools come in. Some examples are Maven and Gradle for Java applications, npm for JavaScript applications, and so on. So you need to understand how this process of testing and packaging applications works.
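As a rough illustration, a pipeline step that runs the tests and packages the application can be as simple as wrapping the build tool in a small script. This sketch assumes a hypothetical Java project built with Maven; you would swap in Gradle, npm, or whatever your stack uses:

```python
# Sketch: run tests and package the application as one automated step.
# Assumes a hypothetical Java project built with Maven.
import subprocess

def test_and_package() -> None:
    # "mvn -B clean package" runs the unit tests and builds the .jar/.war artifact
    # in batch (non-interactive) mode, suitable for automation.
    subprocess.run(["mvn", "-B", "clean", "package"], check=True)

if __name__ == "__main__":
    test_and_package()
```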

As I mentioned, containers are being adopted by more and more companies as a new standard. So you will probably be building Docker images from your application. As a next step, this image must be saved somewhere, right? In an image repository.
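For example, a minimal sketch of building an image and pushing it to an image repository with the Docker SDK for Python could look like this; the registry URL and tag are purely illustrative:

```python
# Sketch: build an application image and push it to an (example) registry.
import docker

client = docker.from_env()

image, build_logs = client.images.build(
    path=".",                                  # directory containing the Dockerfile
    tag="registry.example.com/myapp:1.5.0",    # example registry and version tag
)

for line in client.images.push("registry.example.com/myapp",
                               tag="1.5.0", stream=True, decode=True):
    print(line)
```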

So a Docker artifact repository on Nexus, or Docker Hub, etc. will be used here. So you need to understand how to create and manage artifact repositories as well. And of course, you don't want to do any of this manually. Instead, you want one pipeline that does all of this in sequential steps.

So you need build automation, and one of the most popular build automation tools is Jenkins. Of course, you need to connect this pipeline with the Git repository to get the code. So this is part of the continuous integration process, where code changes from the code repository get continuously tested.
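Just as a flavor of how such a pipeline can be wired up, here's a minimal sketch that triggers a Jenkins pipeline job over its REST API using the requests library. The Jenkins URL, job name, credentials, and parameter are all placeholders:

```python
# Sketch: trigger a Jenkins pipeline job over its REST API.
# URL, job name, credentials and parameters are placeholders.
import requests

JENKINS_URL = "https://jenkins.example.com"
JOB_NAME = "myapp-ci-pipeline"
AUTH = ("devops-bot", "jenkins-api-token")   # user + API token

resp = requests.post(
    f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
    params={"GIT_BRANCH": "main"},
    auth=AUTH,
    timeout=10,
)
resp.raise_for_status()
print("Build queued at:", resp.headers.get("Location"))
```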

And you want to deploy that new feature or bug fix to the server after it's tested, built, and packaged, which is part of the continuous deployment process, where code changes get deployed continuously to a deployment server. And there could be some additional steps in this pipeline, like sending a notification to the team about the pipeline state, or handling failed deployments, etc.

But this flow represents the core of the CI/CD pipeline, and the CI/CD pipeline happens to be at the heart of the DevOps tasks and responsibilities. So as a DevOps engineer, you should be able to configure the complete CI/CD pipeline for your application. And that pipeline should be continuous.

That's why the unofficial logo of DevOps is an infinite cycle, because the application improvement is infinite. New features and bug fixes get added all the time that need to be deployed. Now let's go back to the infrastructure where our application is running.

Nowadays, many companies are using virtual infrastructure on the cloud instead of creating and managing their own physical infrastructure. These are infrastructure as a service platforms like AWS, Google Cloud, Azure, Linode, etc. One obvious reason for that is to save costs of setting up your own infrastructure. But these platforms also manage a lot of stuff for you, making it much easier to manage your infrastructure there.

So, for example, using a UI, you can create your network, configure firewalls, route tables, and all parts of your infrastructure through the services and features that these platforms provide. However, many of these features and services are platform specific, so you need to learn them to manage infrastructure there. So if your applications will run on AWS, you need to learn AWS and its services.
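To give you a feel for what "platform specific" means in practice, here's a small sketch using boto3, the AWS SDK for Python, to list running EC2 instances. It assumes AWS credentials are already configured locally, and the region is just an example:

```python
# Sketch: list running EC2 instances with boto3 (assumes configured AWS credentials).
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")   # example region

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"],
              instance["InstanceType"],
              instance.get("PublicIpAddress"))
```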

Now, AWS is pretty complex, but again, you don't have to learn all the services that it offers. You just need to know those concepts and services that you need to deploy and run your specific application on the AWS infrastructure. Now, our application will run as a container, right?

Because we're building Docker images, and containers need to be managed. For smaller applications, Docker Compose is enough to manage them. But if you have a lot more containers, like in the case of a big microservices application, you need a more powerful container orchestration tool to do the job.

The most popular of these is Kubernetes. So you need to understand how Kubernetes works, and be able to administer and manage the cluster, as well as deploy applications in it.
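To give a rough idea of what working with a cluster programmatically can look like, here's a minimal sketch using the official Kubernetes Python client. It assumes a working kubeconfig, and the deployment name, namespace, and replica count are just example values:

```python
# Sketch: inspect pods and scale an example deployment with the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()                 # use the local kubeconfig credentials

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)

apps = client.AppsV1Api()
apps.patch_namespaced_deployment_scale(   # scale the example "myapp" deployment
    name="myapp",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```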

Now, Kubernetes is a powerful but also a very complex tool, so it's usually a lot of effort to set up and manage multiple Kubernetes clusters for different teams in a company. So before moving on, I want to give a shout-out to our sponsor Loft, which is a platform that helps you build self-service Kubernetes clusters easily. Platform teams can deploy Loft, connect clusters, and then let engineers create isolated development and CI/CD environments on demand whenever they need them. So it puts developers in charge and gives them direct self-service access to Kubernetes. One of the great features and benefits of Loft is that it can save you more than 70 percent of your cloud costs by automatically putting virtual clusters to sleep when nobody is using them and automatically waking them up again once engineers interact with them.

If you want to learn more about how Loft works, I actually did a separate video on it for the DevOps tool of the month series. Now, Loft has a lot of other great use cases for working with Kubernetes. So if you want to try it out yourself, for my followers, Loft actually provides six months free of their paid subscription for the first 500 people. So check out my special link and use my promo code for that. Now back to our DevOps roadmap.

Now, when you have all these containers, maybe thousands of them, running in Kubernetes on hundreds of servers, how do you track the performance of your individual applications, whether everything runs successfully, and whether your infrastructure has any problems? And, more importantly, how do you know in real time if your users are experiencing any problems? One of your responsibilities as a DevOps engineer may be to set up monitoring for your running application, the underlying Kubernetes cluster, and the servers on which the cluster is running.
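For instance, on the application side, exposing metrics for a monitoring tool to scrape can be as simple as this minimal sketch, which assumes the prometheus_client Python library; the metric names and port are just examples:

```python
# Sketch: expose application metrics on /metrics for Prometheus to scrape.
import time
from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("myapp_requests_total", "Total requests handled")
IN_PROGRESS = Gauge("myapp_requests_in_progress", "Requests currently in flight")

if __name__ == "__main__":
    start_http_server(8000)   # metrics available at http://localhost:8000/metrics
    while True:
        with IN_PROGRESS.track_inprogress():
            REQUESTS.inc()    # pretend we just handled a request
            time.sleep(1)
```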

So you need to know a monitoring tool like Prometheus or Nagios, etc. Now, let's say this is our production environment. Well, in your project, you will, of course, need development and testing or staging environments as well to properly test your application before deploying it to the production.

So you need that same deployment environment multiple times. Creating and maintaining that infrastructure for one environment already takes a lot of time and is very error prone. So we don't want to do it manually three times. As I said before, we want to automate as much as possible. So how do we automate this process?

Creating the infrastructure, configuring it to run your application, and then deploying your application on that configured infrastructure can be done using a combination of two types of infrastructure-as-code tools: an infrastructure provisioning tool like Terraform, and a configuration management tool like Ansible, Chef, etc. So you as a DevOps engineer should know one of these tools to make your own work more efficient, as well as to make your environments more transparent, so that you know exactly in which state they are, and easy to replicate and recover. In addition, since you are working closely with developers and system administrators to also automate some of the tasks for them, you will most probably need to write scripts, maybe small applications, to automate tasks like doing backups, system monitoring tasks, cron jobs, network management, and so on. To be able to do that, you need to know a scripting language.

This could be an operating-system-specific scripting language like Bash or PowerShell, or, what's even more in demand, a more powerful and flexible language like Python, Ruby, or Golang, which are also operating system independent. Again, here you just need to learn one of these languages, and Python is without a doubt the most popular and in-demand one in today's DevOps space: easy to learn, easy to read, and very flexible.

Python has libraries for most databases and operating system tasks, as well as for different cloud platforms. Now, with these automation tools and languages, you write all of this automation logic as code: creating, managing, and configuring infrastructure. That's where the name infrastructure as code comes from.
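To show what such a small automation script might look like, here's a minimal sketch that archives a directory as a dated backup. The paths are placeholders, and in practice you might run something like this from a cron job:

```python
# Sketch: archive a data directory as a dated backup (paths are placeholders).
import datetime
import pathlib
import tarfile

SOURCE = pathlib.Path("/var/lib/myapp/data")   # example data directory
DEST = pathlib.Path("/backups")                # example backup location

def backup() -> pathlib.Path:
    DEST.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    archive = DEST / f"myapp-data-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(str(SOURCE), arcname=SOURCE.name)   # compress the whole directory
    return archive

if __name__ == "__main__":
    print(f"Backup written to {backup()}")
```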

Now, how do you manage this code? Just like the application code, you manage it using version control like Git. So as a DevOps engineer, you also need to learn Git. At this point, you may be thinking:

How many of these tools do I need to learn? Do I need to learn multiple tools in each category? Also, which ones should I learn? Because there are so many of them.

Well, you should learn one tool in each category, the one that's the most popular and most widely used, because once you understand the concepts well, building on that knowledge and using an alternative tool will be much easier if, for example, you need to use another tool in your company or project. Now you may be thinking: these are a lot of things to learn, and it may be hard to know where to start, what to learn first, what resources to use, and so on. Well, there are many resources out there to learn individual DevOps technologies, but ideally you want to follow a well-structured, step-by-step roadmap and, more importantly, learn how to use these technologies together in combination. Because that's what DevOps engineers do.

They use and integrate multiple technologies together to create DevOps processes. And of course, you want to learn all of this with actual real-life project examples to know what it will look like in a real job. And very few courses and learning resources offer this. And that's exactly why we created a complete DevOps bootcamp with a clear structure and lots of hands-on projects. So if you're thinking about becoming a DevOps engineer, or slowly transitioning into DevOps, you should definitely check out our DevOps Bootcamp in the video description.

To get a full picture of DevOps, I want to mention one more concept, which is SRE, or Site Reliability Engineering, and how it fits into DevOps. In this video, we learned that there are two definitions of DevOps: the original definition, which is more high level and more broad and doesn't specify how exactly DevOps should be implemented, and a more practical one, which evolved over time with its own DevOps engineer role and which is what you learn in our DevOps Bootcamp. So when we compare DevOps with SRE, it's important to know which definition of DevOps we're using for the comparison. With the first, broader definition, DevOps is a more high-level concept that defines what needs to be done to achieve the automated, streamlined release process, while SRE is more specific about how exactly to implement this process and the DevOps principles.

So many people would say that SRE is a specific implementation of the DevOps concepts. But as we saw, DevOps itself also became more practical, with its own role, specific technologies, and ways to implement it. So what's the comparison here?

Well, in many companies, the DevOps implementation, this practical DevOps implementation, became more focused and concentrated on the speed of delivery of the application changes. And even though it's part of the DevOps principles to not only release fast but also release quality code, many DevOps teams in practice seemed to optimize more for speed than for reliability.

So as a great complementary part of DevOps, SRE emerged with the same principles and goals in mind, which is to release quality code fast, but, as the name suggests, more focused on reliability and keeping systems stable while allowing fast changes. So SRE is its own role with its own set of tools for making systems reliable. So these two were kind of parallel developments and are now often seen as two sides of the same coin.

And it's not uncommon for teams to have both a DevOps engineer and an SRE helping implement the DevOps principles. So this was just a short look at SRE to understand it in comparison to DevOps. But since I have received many questions about what SRE is, I will release a follow-up video on SRE in the coming weeks to explain it in more detail: how SRE works in practice, what the tasks and responsibilities of a site reliability engineer are, and so on. So be sure to subscribe to my channel and activate the notification bell to be notified when I release the video.

And for DevOps, I hope I could clarify all your questions about it. If not, leave a comment with your question and I will try to answer it. With that, thank you for watching and see you in the next video.