Transcript for:
Azure: Day 1 - Basics of Cloud Concepts

Hello everyone, my name is Abhishek and welcome back to my channel. Today is day one of the Azure Zero to Hero series, and in this video we will understand the basics of cloud concepts and some of the most commonly used vocabulary and terminology in cloud computing. This video serves as a prerequisite before you start with any cloud platform, whether it is AWS, Azure or GCP. I could have easily started with day two and explained how to create an account with Azure, but if you don't understand these fundamentals, such as what the cloud is, what the difference between public, private and hybrid cloud is, or keywords like API and virtual machine, then it becomes difficult for me to explain the Azure platform and difficult for beginners to understand it. The same thing happens if you try to learn a cloud platform through documentation or other videos. That's why we will first get familiar with the concepts and keywords mentioned here, so that we are ready to start learning the cloud platforms. Please try to watch this video till the end.

Before I explain these concepts, one thing I have to mention is that the notes will be uploaded to these folders. Right now you only see a folder called day one because I have just uploaded the notes for day one. After watching the video you can use these MD files as a revision of what you have learned; they do not cover everything I say in the video, but they definitely serve as reference and revision material. So please star, fork and watch this repository so that you get updates whenever I make changes to the documentation.

Now let's start with understanding the cloud. I'll keep it very simple; even if you are a beginner, you will easily understand. In today's world we all use applications like Instagram and Twitter. These applications are developed on a developer's laptop, but for them to be accessible to everyone they have to be deployed on servers. When we use Instagram or Twitter, we don't use them from the developer's laptop; we use them from the servers where the application is deployed. So what is a server? A server is also a computer: it has local storage, CPU and RAM, and its purpose is to run applications and processes. Your laptop is also a kind of server, but a sophisticated one with a good display and maybe a graphics card. Servers don't have these rich features, but they have the important ones required to run applications and processes: local storage, CPU and RAM.

Why am I talking about servers? Because to understand the cloud you should definitely understand this. Go back 10 or 15 years, and instead of Instagram let's talk about Google or Yahoo. There used to be a dedicated system administrator, or a team of system administrators, whose responsibility was to procure servers from vendors like IBM or HP, who sold (and still sell) these machines. These system administrators used to procure multiple servers. Why multiple servers?
Let's take the system administrators team at Google, 10 or 15 years back, before the microservices concept even existed. Google has multiple applications: Gmail, the complete G Suite and many more, and there are thousands of developers, QA engineers and automation testers. Deploying all of these applications on one server does not work, so the system administrators team at Google procured multiple servers: server one, server two, server three, up to a hundred. Even if you accessed www.google.com 10 or 15 years back, you did not wait 15 minutes for a response; you got it back in seconds, and today you get it back in a fraction of a second. That is because these servers are connected: the system administrators connected them with high data transfer cables and switches, and from there the servers were connected to routers. The servers themselves were placed in racks, like the racks in your wardrobe. Anyone who has been working in the software industry for the last 10 or 15 years is acquainted with this, because they must have physically seen a server room. This room, where all the servers are placed in racks, applications are deployed on those servers, and everything is connected with cables, switches and routers, is called a data center, and typically, leaving aside the exceptional cases, most companies used to have their own data centers.

So let's say this is the Google data center. How was it used? The system administrators team has built out these servers, and now a developer, let's call him X, needs a server. When this developer requests a server, the system administrators team looks across all of the servers, finds one that is free and says, okay, you can use this server, or part of this server. This setup is what we today call a private cloud; at that point of time the word cloud was probably not that popular, but that is exactly what it is.

Everything was going on this way, and then Amazon came into the picture and said, we are starting something called AWS. Why did they start it, and what did they do? For a company like Google, the example we took, it is very easy to buy a lot of servers, set up a system administrators team, buy racks and connect everything to switches. But the job does not end there; even after helping application developers deploy their applications on these servers, there are multiple challenges. One, the server room should
have a 24x7 electricity supply, because if one of these servers loses power, the application deployed on it is no longer accessible. Two, overhead: we took google.com as the example, but for a mid-scale organization, keeping 10 system administrators on staff becomes very difficult. Three, continuous maintenance: the system administrators have to continuously patch these servers, because there can be vulnerabilities; they have to keep upgrading the operating systems running on them; and if some cables are damaged, or a more advanced cable comes out tomorrow, they have to replace them. And beyond all that, cost is another challenge, because maintaining all of this is a very expensive process.

So Amazon said: we know this is becoming difficult for some organizations, so let's take advantage of it. They said, we are going to start AWS, and we will set up our own data centers; note that I am not talking about individual servers here, a data center is a collection of servers. Amazon said, we will set up hundreds of data centers across the world: maybe five data centers in the US, two in India, ten in the Europe region, two in Australia. Then a company, let's say example.com, can come to Amazon and request 10 servers; in Amazon's world they call them virtual machines, and I'll explain that in a while, don't worry. Amazon says: request 10 servers and we will give you 10 servers, and you don't have to worry about whether the servers are secure, whether they have a continuous electricity supply, or whether they are continuously upgraded; none of the challenges I mentioned. Just tell us in which region you want them, because we have data centers across the world: do you want your 10 servers in the US, in India or in Europe? This is what is called the public cloud.

Now where did the name cloud come from? If you look carefully, this setup is a private cloud, as I told you, and this one is a public cloud, and in both cases what you have is a group of computers connected to each other. If someone requests a server, they get one from this cloud; in Amazon's world these are data centers, and in each data center there are servers. Anyone who makes a request gets a server from this cloud, this collection of connected servers, without knowing exactly which machine it came from. That's why it is called a cloud: a public cloud when the provider owns it, a private cloud when you own it, and even in today's world this is the difference between private and public cloud.

Now you might ask me, Abhishek, in today's world are there still companies that maintain their own private cloud? 100 percent yes. There are some banking
companies, financial companies and so on who need their own setup, either for security reasons, or because they are running legacy systems, the traditional old systems, or because they have data requirements that mean they cannot move to the cloud with the kind of applications they run. Even today there are a lot of companies in the banking, financial and insurance domains that still run their applications on a private cloud. Some companies simply do not rely on the public cloud for security reasons, or because the public cloud does not meet their requirements. Of course there are many public clouds, AWS, Azure, GCP, and hundreds of other providers that fall into the public cloud category, but there are still certain requirements these platforms cannot satisfy even today, so for those reasons, or for security, or because of legacy applications, some companies continue to use a private cloud.

Finally there is a concept called hybrid cloud. What exactly is hybrid cloud? Let's take a company called xyz.com. This company said, we absolutely love Azure; Microsoft has done a similar job to AWS and set up its data centers across the world. xyz.com is located in Europe, and instead of setting up an entire data center and private cloud by themselves, they simply went to Microsoft and said, in the Europe region I want 100 servers. Their DevOps team ran some scripts, got the 100 servers, and xyz.com deployed their application on them. After a while, because of government restrictions or their own policies, they said, we have some very sensitive data that we want to store in databases, and those databases have to be on our private cloud only; we cannot hand this sensitive information to Microsoft, we want to keep it with us. So they set up a very small data center of their own. That part is a private cloud, not a public cloud, because it sits on their premises; it is also called on-premises. They keep this small on-premises setup for the critical data, and for everything else, where the critical data is not involved, they use servers from Azure. It is a mix of private cloud and public cloud, and that is why it is called hybrid cloud.

Don't confuse hybrid cloud with multi-cloud. These days you might also hear the term multi-cloud, which is when companies use some services from AWS and some services from Azure, a mix of cloud providers. That is different from hybrid cloud. You don't have to worry about this at this point; the goal so far is to understand what private cloud, public cloud and hybrid cloud are, and I hope that concept is clear now.

The next thing is to understand what exactly cloud computing is. Cloud computing is very simple: whatever we discussed until now is basically cloud computing. If you are computing on the cloud, that is, running your applications on the cloud and those applications are being accessed by users, that is cloud computing. The system
administrators of earlier days are now DevOps engineers and cloud engineers, and whatever tasks these engineers perform, together with whatever tasks AWS, Azure or whichever cloud platform you are using performs, all of that together is cloud computing. Practically every company today is doing cloud computing, because most companies are on a public cloud platform or on a hybrid cloud.

Now let's move on to the keywords. Don't worry; the goal is to get familiar with these keywords, not to become an expert in them, because in one video it would be very difficult to become an expert in load balancing or high availability. For now just get familiar with them, and as we go through day two, day three, day four of this series, by around day 25 you will understand all of these concepts at an expert level. In today's video we will just cover the basics and get familiar with them.

Let's start with virtualization. Virtualization is a very simple concept. Going back to the previous examples, whether it is a private cloud or a public cloud, let's say the DevOps engineers or system administrators bought a server from IBM. When IBM sells this server, it comes with a huge configuration, let's say 512 GB of RAM; it is not like your laptop that comes with 4 GB. Servers usually come with an extensive configuration, so let's say 512 GB of RAM and 1,000 CPUs; I'm just making these numbers up, don't assume servers actually ship with exactly these figures. Now, there is a developer, let's call him Dev 1, and this developer requests a server. One developer cannot use the complete resources of this server; it would be a foolish decision to allocate the entire server to one developer, and a system administrator or DevOps engineer does not dedicate an entire server to one developer, even though previously this is what used to happen. Then came the concept called virtualization. The reason I keep saying system administrators or DevOps engineers is that back when these concepts first came up there was nothing called DevOps, so typically it was system administrators doing this. On top of these servers they install a particular piece of software called a hypervisor. A hypervisor is software that implements virtualization, and in layman's terms, what virtualization does is take this one server and break it up, not physically but logically. A server is logically isolated into multiple servers, so if it has, say, 1,000 GB of RAM, using a hypervisor you can break it into a thousand logical servers, and these logical servers are called virtual machines. One developer might just need one or two CPUs to run their application, or maybe four or six, so instead of giving them a thousand CPUs, using virtualization and a hypervisor installed on top of the server, you can break it into a hundred, a thousand, ten thousand, any number of logical servers that you
want, and these small, logically separated servers are called virtual machines. In today's world you don't install the hypervisor yourself, unless you are maintaining private cloud infrastructure in your organization, because this is exactly what Amazon and the other providers do, but it is important to understand the idea. As I told you, Amazon has its data centers; take one data center in the US East region, where there might be several. In this data center there are multiple servers connected through racks, switches and routers, and each of these servers has a huge configuration and is already set up with a hypervisor, so each one is logically split into multiple servers. When a software engineer from company XYZ or ABC, or even a student who just has an account with AWS or Azure, requests a virtual machine, say with one CPU and 2 GB of RAM, Azure goes to one of these servers and simply grants a virtual machine from it. Whether it is Azure, Amazon or GCP, they all use virtualization, because even though their data centers have thousands of servers, those servers are huge, and they cannot hand an entire server to someone who is only asking for one CPU and 2 GB of RAM. That is why they use virtualization and, depending on the request, give you a right-sized server. Virtualization is a very, very important concept: Azure and AWS use it to serve the millions of virtual machine requests they receive every single day, and if your organization maintains its own data centers you have to do the same thing, because there will be thousands of developers requesting servers while your organization only has some tens of physical machines.
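If it helps to see the idea in code, here is a toy sketch, purely for illustration and nothing like a real hypervisor, of one big physical server being logically carved into many small virtual machines. The class names and capacity numbers are made up for this example.

```python
# Toy illustration of virtualization: a big physical server is logically
# carved into many small virtual machines. This is NOT a real hypervisor;
# the numbers are made up, just like in the explanation above.
from dataclasses import dataclass


@dataclass
class VirtualMachine:
    name: str
    cpus: int
    ram_gb: int


class PhysicalServer:
    def __init__(self, total_cpus: int, total_ram_gb: int):
        self.free_cpus = total_cpus
        self.free_ram_gb = total_ram_gb
        self.vms = []

    def allocate_vm(self, name: str, cpus: int, ram_gb: int) -> VirtualMachine:
        # The "hypervisor" only hands out capacity that is still free.
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError("not enough capacity left on this server")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        vm = VirtualMachine(name, cpus, ram_gb)
        self.vms.append(vm)
        return vm


# One physical server with 1,000 CPUs and 512 GB RAM (made-up numbers).
server = PhysicalServer(total_cpus=1000, total_ram_gb=512)

# A developer only needs a small slice of it, not the whole machine.
dev_vm = server.allocate_vm("dev1-vm", cpus=2, ram_gb=4)
print(dev_vm, "| free CPUs left:", server.free_cpus)
```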
The second thing you have to understand is what an API is, now that I have explained what a virtual machine is. A lot of people ask me, Abhishek, what is an API? I'll explain it in a very simple way. Take any application, for example Facebook, or an application deployed in your own organization. There are multiple ways to access these applications. With Facebook, you can go to your browser, type www.facebook.com, create an account, talk to your friends, and whatever you do in this way goes through the user interface; you are accessing the Facebook application through the UI. Now take DevOps applications, for example Kubernetes or Jenkins, if you have heard of them. These can also be accessed through a user interface, just like facebook.com, but in addition they can be accessed through an API or a CLI. So what are those? You already understand the user interface, because every day you use Instagram or Facebook through it, but the developers who write an application like Jenkins also allow you to access it programmatically. As a user of facebook.com you just want to look at your feed, so you don't need to access it through scripting or programming, but the application developers, DevOps engineers and QA engineers who want to test facebook.com may want to run scripts against it, not just exercise it through the user interface. For that, the application developers provide two other ways in: the API and the CLI. API stands for application programming interface; developers expose the application through an API so that someone can access it programmatically.

Now take Microsoft Azure itself. In Azure there are different resources; for example you can request a virtual machine, and in tomorrow's class, when you create an account, you can go to a browser like Chrome or Firefox, click a few buttons and request one. But what if someone in your organization wants to request a thousand virtual machines? Clicking those four or five buttons a thousand times becomes painful. So Azure also exposes an API, and developers, DevOps engineers or QA engineers can write a shell script or a Python script that hits that URL with some parameters saying I want one, two, ten, a hundred virtual machines, and they get however many they asked for. What you are doing there is programmatically accessing Microsoft Azure, and this programmatic way of accessing Azure's resources is what we mean by using the APIs. The simple thing to understand at this point: if you go to your browser and click a few buttons to get a resource from Azure, you are using the user interface; if you do the same thing with scripting, shell or Python, you are talking to the API written by Azure's developers and getting the resources programmatically. This will become much clearer once we actually use resources on Azure, but for now just get familiar with it. So that is the API.
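To make the difference between clicking in the portal and talking to the API concrete, here is a minimal sketch of programmatic access, assuming Python with the requests and azure-identity packages installed and an already authenticated session (for example via az login). The subscription ID is a placeholder and the api-version string is only an example value; the script simply lists the virtual machines in a subscription, and the same Resource Manager API also has endpoints for creating resources, which is what a script that needs a thousand VMs would call in a loop.

```python
# Minimal sketch: talking to the Azure Resource Manager REST API instead of
# clicking buttons in the portal. Assumes `pip install requests azure-identity`
# and that you are already authenticated (e.g. via `az login`).
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

# Get a bearer token for the Azure Resource Manager endpoint.
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

# List every virtual machine in the subscription, programmatically.
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.Compute/virtualMachines"
)
response = requests.get(
    url,
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2023-07-01"},  # example api-version value
)
response.raise_for_status()

for vm in response.json().get("value", []):
    print(vm["name"], vm["location"])
```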
The next concept is regions and availability zones, often shortened to AZs. An availability zone is simple to understand: you can think of an availability zone as a data center. But what is a region? We just discussed how Azure has servers spread across the world, so this will be easy. Picture the world map: Azure has data centers in the US, in India, in Africa, and multiple data centers in the Europe region. For Azure, and for the people requesting resources from Azure, there has to be a geographical identification, and there are other significant reasons too. Take the US as an example. If Microsoft Azure set up all of its US data centers in one particular place, say ten data centers all in Alaska, the problem is: what if there is an electricity shortage or some natural disaster in Alaska? Azure's servers in the US would stop working, and the result would be that the hundreds of thousands of companies who trusted Azure to keep their data safe and run their applications 24x7 would all face downtime; downtime in that one area would mean downtime for the whole of Microsoft Azure in the US. To avoid that, Azure, or any cloud provider, does not put all its data centers in one place; instead it puts, say, five data centers on one side of the country and five on the other, and calls them the US East region and the US West region. Even if some natural calamity or disaster takes down the whole of US West, which is quite rare, and I'll tell you why in a moment, US East is still working. A company that deploys one part of its application in one region and another part in the other will find that even if one region goes down, the other is up and running, so the customer requests that would have gone to the failed region are sent to the working one. How does the request get there? There is a load balancer in front, and we will get to that shortly. For now, understand that regions exist so that if a disaster, natural calamity or supply shortage hits one region, another region keeps working.

I said it is very rare for a complete region like US West to go down, and here is why: even within a single region, the data centers are built in locations far apart from each other. If there are three data centers in the US West region, that is, three availability zones, they are deliberately set up far apart, for the same reason: even if one data center goes down, the other two keep working and remain fully operational.
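If you want to see what these regions look like from a script, here is a small hedged sketch using the Azure SDK for Python; it assumes the azure-identity and azure-mgmt-resource packages are installed, that you are logged in, and the subscription ID is again a placeholder.

```python
# Sketch: list the Azure regions (locations) visible to a subscription.
# Assumes `pip install azure-identity azure-mgmt-resource` and `az login`.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import SubscriptionClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

client = SubscriptionClient(DefaultAzureCredential())
for location in client.subscriptions.list_locations(SUBSCRIPTION_ID):
    print(location.name, "-", location.display_name)  # e.g. eastus - East US
```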
So if company XYZ has an application, what they can simply do is deploy one copy of the application in one availability zone and the same replica in another, and place a load balancer in front, so that even if one availability zone goes down, the load balancer sends the requests to the other. We were going to cover load balancers later, but since we are already talking about them, let's do it now. What is a load balancer? Go back to the same example: this is the US West region, and it has three data centers. What application developers and DevOps engineers do is deploy one replica of their application in one data center and another replica in another; a replica is nothing but an identical clone of the application. Then they set up a load balancer in front. If ten customers send requests, the job of the load balancer is to distribute those ten requests, and you can configure it with different algorithms: you can tell the load balancer to send five requests to the virtual machine in this data center and five to the virtual machine in the other, or you can say send 20 percent of the requests here and 80 percent there; load balancers support many such algorithms. By default, load balancers also have a health check mechanism, so if one virtual machine goes down, for example because its availability zone went down completely, the load balancer automatically detects it and starts sending all ten requests to the healthy one. That is how powerful load balancers are, and they have many more capabilities that we will cover in a dedicated video. For now, understand that the purpose of a load balancer is to split the load. Millions of users use Instagram today, and if all of those millions of requests went to one single copy of the application, that application would die under the load. So load balancers are placed in front of two, five, ten replicas of your application, and if a load balancer gets a million requests it can send 10,000 here, 10,000 there, and so on; and if one of the servers unfortunately goes down, it immediately notices, stops sending requests to that server, and redistributes the traffic, maybe 20,000 here, 15,000 here and 15,000 there. That is how powerful load balancers are, and in the world of public cloud, where applications are deployed across multiple availability zones and data centers, load balancers play a critical role.
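Here is a toy sketch, nothing like a production load balancer such as Azure Load Balancer or Application Gateway, of the two ideas just described: spreading requests across replicas in round-robin fashion and skipping any replica whose health check has failed.

```python
# Toy round-robin load balancer with a health check, for illustration only.
import itertools


class Replica:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True
        self.requests_served = 0

    def handle(self):
        self.requests_served += 1


class RoundRobinLoadBalancer:
    def __init__(self, replicas):
        self.replicas = replicas
        self._cycle = itertools.cycle(replicas)

    def route(self) -> str:
        # Try each replica at most once per request; skip unhealthy ones.
        for _ in range(len(self.replicas)):
            replica = next(self._cycle)
            if replica.healthy:
                replica.handle()
                return replica.name
        raise RuntimeError("no healthy replicas left")


replicas = [Replica("vm-1"), Replica("vm-2"), Replica("vm-3")]
lb = RoundRobinLoadBalancer(replicas)

# The availability zone hosting vm-2 "goes down"; the health check notices.
replicas[1].healthy = False

for _ in range(10):
    lb.route()

for r in replicas:
    print(r.name, "served", r.requests_served, "requests")
```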
The next topic is the features of cloud computing: scalability, elasticity, agility, high availability and disaster recovery. I'll go through these quickly; I just need about 15 more minutes. First, scalability. What is scalability in the world of cloud? We are only getting familiar with these topics, so we don't need to go very deep, but think about watching a cricket match or a baseball match on Hotstar or any similar application. Sometimes there might be 10 million users watching at one point, and then the number drastically jumps to 25 million, or 40 million. This increase in users and requests can be something you expected, or something you did not. Let's say you initially created three copies of your application, deployed them on three virtual machines, VM1, VM2 and VM3, placed a load balancer in front, and configured it so that if there are 10 million requests it sends one third to each. I'm making the numbers up; in reality there would be many more copies, but just understand that this is how it is configured. If the load suddenly jumps to 25 million and you push 15 or 20 million requests onto one copy, the application cannot serve them, and the result is slowness; and if you are watching a match you will not tolerate slowness. For that reason, DevOps engineers use a concept called autoscaling. You tell the Azure platform, or AWS, whichever cloud you are using, through its autoscaling service, that if these virtual machines are fully occupied it should automatically create one more virtual machine and deploy the application on it, and another one if more capacity is needed; the load balancer automatically detects and adds these new servers, so the load is managed automatically. That mechanism is autoscaling. If in some cases you don't want to use that feature of the cloud provider and you do it yourself, it is called manual scaling. This overall capability of cloud computing is called scalability. When you hear people say make your application scalable, or make your infrastructure scalable, this is what they are talking about, whether it is autoscaling or manual scaling. Within scalability there are also the concepts of horizontal scaling and vertical scaling; we will come to those later, so you don't have to learn them at this point.
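As a rough illustration of the decision an autoscaler makes, here is a toy sketch; real autoscaling services work on metrics like CPU percentage and have cooldown periods, but the core idea of computing a desired number of virtual machines from the current load is the same. The threshold of 1,000 requests per VM and the min/max limits are made-up numbers.

```python
# Toy autoscaling decision: how many VMs do we need for this much load?
def desired_vm_count(total_requests: int, requests_per_vm: int = 1000,
                     min_vms: int = 2, max_vms: int = 20) -> int:
    needed = -(-total_requests // requests_per_vm)  # ceiling division
    return max(min_vms, min(max_vms, needed))       # clamp to allowed range


# Simulated load arriving over time: the fleet grows and shrinks with it.
for load in [2_500, 10_000, 40_000, 1_200]:
    print(f"{load:>6} requests -> run {desired_vm_count(load)} VMs")
```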
Then comes elasticity. Elasticity is nothing but another name for dynamic scaling, or autoscaling. As I told you, there are two kinds of scaling: you scale the infrastructure manually, or you scale it dynamically, automatically. That automatic, dynamic scaling is also called elasticity, so whenever you hear the term elasticity, don't get scared; they are simply talking about automatically scaling your infrastructure.

Then comes high availability. At this point you just need to understand it at a broad level: high availability means keeping your application available as close to all of the time as possible. The best examples are Instagram and Facebook; you rarely see them go down. You might expect a bank application to have scheduled maintenance, or a ticket booking application to go down occasionally, but Instagram and Facebook rarely go down, because they run on a highly available infrastructure setup. So when someone talks about high availability, understand that they are talking about infrastructure that keeps the applications available most of the time. Ignore fault tolerance for now.

Let's talk about disaster recovery, also called DR. Disaster recovery is a technique, a mechanism, where you have a plan of action for when something goes wrong: a disaster has taken place, so how do you recover from it? You might do a lot of things to keep your application safe and prevent it from going down, but what if it goes down anyway? You need a disaster recovery plan, which is essentially like having a backup. When we deal with cloud platforms like Azure or AWS, they give you very good SLAs, that is, a lot of promises, and they are mostly stable, but suppose you have your users' data in a database on Azure; just like you can request a virtual machine, you can also request a database, and you started storing all of your users' information in it. If for some reason that database goes down, perhaps because its availability zone went down, you have to make sure you have a backup. What is a backup? You might keep a copy in another availability zone, or even better, in a completely different region: if you have one copy of the database in the US East region, create another copy in Singapore, just as a backup. Plans like that are called disaster recovery, and as a DevOps engineer or cloud engineer you should always be thinking about disaster recovery.

Now, why did I cover all of these topics? You might be feeling, Abhishek, today's class is very long and you talked about a lot of concepts, and some of them might have gone over your head. But as I keep repeating these words you will get very familiar with them. Maybe when I explained scalability some of you did not fully understand, and that's fine, because at least we are getting familiar with the term, and in future classes we will become experts in it. The next classes will be much more straightforward; you will not have to learn this many concepts in a single day. Because this is a prerequisite class, my intention is to make you familiar with these concepts, so at least get acquainted with the terms: virtualization, virtual machine, API, regions, availability zones. I'm pretty sure many of you have understood everything up to here.
Scalability, again, is a very simple concept, as I explained: scaling your applications. And if you understood the load balancer, it is very simple: if 10 people are sending requests to your application, initially two virtual machines might be enough; you deploy one copy of your application on one virtual machine and another copy on another, and send five requests to each. When the requests go up to 20 or 30, sending 15 here and 15 there might still work, but what if the requests suddenly jump to 10,000 because it is Christmas or some festival? Two machines cannot withstand that traffic, so your infrastructure should be built in such a way that it automatically scales out new virtual machines and automatically deploys the application onto them; that is the concept of scalability. The other concepts should be pretty easy to understand from here. I hope this entire class was informative for you; going ahead it plays a critical role and serves as a prerequisite. Please let me know in the comment section what you felt about today's video, whether it was too lengthy, or informative, or both. Thank you so much for watching, take care everyone, see you all in the next class, bye-bye.