Transcript for:
Complete Cloud Computing Course Summary

Hello guys, welcome to this complete cloud computing course by Simplilearn. In this video we will cover some of the most important concepts related to cloud computing. Samuel, our multi-platform cloud architect with over 15 years of experience in the cloud domain, will take you through the fundamentals of cloud computing. We will talk about the cloud lifecycle and important topics in AWS, Microsoft Azure, and Google Cloud Platform. We will also look at how to become an AWS cloud architect, and at the differences between the three platforms. Finally, we will take you through the most important questions you might face in a cloud interview. First off, we will have an introduction to cloud computing by our instructor, Samuel.

Hi there, I'm Sam from Simplilearn, and I work as a multi-platform cloud architect and trainer. Let me welcome you to this learning video called "What is Cloud Computing." As we learn what cloud computing is, we will also learn how things were before cloud computing, the benefits of cloud computing, the different types of cloud computing available, and some of the famous companies that use cloud computing and benefit from it.

Before cloud computing existed, getting any IT server or application, say a basic web server, did not come easy. Here is the owner of a business, and I know you would have guessed already that he's running a successful business, judging by the hot, freshly brewed coffee on his desk and the lots and lots of paperwork to review and approve. He had a smart, and not only smart-looking but really smart, worker in his office called Mark, and one fine day he called Mark and said he would like to do business online, in other words take his business online, and for that he needed his own website as the first thing. Mark puts all his knowledge together and comes up with the requirement that his boss would need lots of servers, databases, and software to get his business online, which means a lot of investment. Mark also adds that his boss will need to invest in acquiring technical expertise to manage the hardware and software they will be purchasing, and to monitor the infrastructure. After hearing all this, his boss was close to dropping his plan to go online, but before he made a decision he chose to check whether there were any alternatives where he wouldn't have to spend a lot of money or spend on acquiring technical expertise. That's when Mark opened this discussion with his boss and explained cloud computing to him, the same thing I'm going to explain to you now.

So what is cloud computing? Cloud computing is the use of a network of remote servers hosted on the internet to store, manage, and process data, rather than having all that locally and using a local server for it. Cloud computing is also storing our data on the internet and accessing it from anywhere through the internet, and the companies that offer those services are called cloud providers. Cloud computing is also being able to deploy and manage our applications, services, and networks throughout the globe and manage them through a web management or configuration portal. In other words, cloud service providers give us the ability to manage our applications and services through a global network, the internet. Examples of such providers are Amazon Web Services and Microsoft Azure.
Now that we know what cloud computing is, let's talk about the benefits of cloud computing, because cloud benefits are what is driving cloud adoption. If you want an IT resource or a service these days, with cloud it's available almost instantaneously, and it's ready for production almost at the same time. This shortens the go-live date, and the product or service hits the market almost immediately compared to a legacy environment; because of this, companies have started generating revenue almost the next day, if not the same day.

Planning and buying the right-sized hardware has always been a challenge in legacy environments, and if we're not careful when doing this, we might have to live with undersized hardware for the rest of its life. With cloud we do not buy any hardware; we use the hardware and pay for the time we use it. If that hardware does not fit our requirement, we release it, start using a better configuration, and pay only for the time we use that new and better configuration. In legacy environments, forecasting demand is a full-time job, but with cloud you can let the monitoring and automation tools work for you and rapidly scale resources up and down based on need. Not only that, resources, services, and data can be accessed from anywhere as long as we are connected to the internet, and there are even tools and techniques now available that let you work offline and sync whenever the internet is available. Making sure data is stored durably and securely is the talk of the business, and cloud answers that million-dollar question: with cloud, the data can be stored in highly durable storage, replicated to multiple regions if you want, and the data we store is encrypted and secured in a fashion beyond what we can achieve in local data centers.

Now let's move into the discussion about the types of cloud computing. There are multiple ways to categorize cloud computing because it's ever-growing, but six categories sort of stand out: categorizing cloud by deployment, that is, is it private, public, or hybrid, and categorizing cloud by the services it provides, that is, is it infrastructure as a service, platform as a service, or software as a service. Let's look at them one by one.

Let's talk about the different types of cloud based on deployment models first. In a public cloud, everything is stored and accessed in and through the internet, and any internet user with proper permissions can be given access to some of the applications and resources. In a public cloud we literally own nothing, be it hardware or software; everything is managed by the provider. AWS, Azure, and Google Cloud are some examples of public cloud. With a private cloud, on the other hand, the infrastructure is exclusively for a single organization. Organizations can choose to run their own cloud locally or outsource it to a public cloud provider as a managed service, and when this is done, the infrastructure is maintained on a private network. VMware Cloud is one example, and some AWS products are very good examples of private cloud. Hybrid cloud has taken things to a whole new level.
With hybrid cloud, we get the benefits of both public and private cloud. Organizations choose to keep some of their applications local and some in the cloud. One good example is NASA: it uses a hybrid cloud, with a private cloud to store sensitive data and a public cloud to store and share data that is not sensitive or confidential.

Let's now discuss cloud based on the service model. The first and broadest category is infrastructure as a service. Here we rent servers, network, and storage, and pay for them on an hourly basis, but we have access to the resources we provision, and for some we have root-level access as well. EC2 in AWS is a very good example: it's a VM for which we have root-level access to the OS and admin access over the machine. The next service model is platform as a service. In this model, the provider gives us a pre-built platform where we can deploy our code and our applications, and they will be up and running; we only need to manage the code, not the infrastructure. With software as a service, the cloud provider sells the end product, which is a software or an application, and we buy the software directly on a subscription basis. It's not the infra or the platform but the end product, a functioning application, and we pay for the time we use the software. Here the client simply uses the software and does not maintain any equipment. Amazon and Azure also sell products that are software as a service.

This chart sort of explains the difference between the four models, from on-premises to infrastructure as a service to platform as a service to software as a service. It's self-explanatory: the resources managed by us are largest on-premises (towards your left as you watch), a little less in infrastructure as a service as we move further right, further reduced in platform as a service, and there's really nothing to manage when it comes to software as a service, because we buy the software, not any infrastructure component attached to it.

Now let's talk about the lifecycle of a cloud computing solution. The very first thing in the lifecycle of a cloud solution is to get a proper understanding of the requirement. I didn't say get the requirement; I said get a proper understanding of the requirement. It's vital, because only then will we be able to properly pick the right service offered by the provider. The next thing is to define the hardware, meaning choose the compute service that provides the right support and lets you resize compute capacity in the cloud to run your application programs. Getting a sound understanding of the requirement helps in picking the right hardware; one size does not fit all. There are different services and hardware for the different needs you might have, like EC2 if you're looking for IaaS, Lambda if you're looking for serverless computing, and ECS, which provides containerized services. So there is a lot of hardware available; pick what suits your requirement. The third thing is to define the storage: choose the appropriate storage service where you can back up your data, and a separate storage service where you can archive your data, locally within the cloud or from the internet. There is one service for backup, which is S3, and one for archival, which is Glacier, and knowing the difference between them really helps in picking the right service for the right kind of need.
Next, define the network: define the network that securely delivers data, video, and applications, and identify the network services properly, for example VPC for networking, Route 53 for DNS, and Direct Connect for a private point-to-point line from your office to the AWS data center. Then set up the right security services: IAM for authentication and authorization, and KMS for data encryption at rest. There is a variety of security products available; we've got to pick the one that suits our need. Then define the management processes and tools. You can have complete control of your cloud environment if you define the management tools that monitor your AWS resources and the custom applications running on the AWS platform. There is a variety of deployment, automation, and monitoring tools you can pick from, like CloudWatch for monitoring, Auto Scaling for elasticity, and CloudFormation for deployment, so knowing them will help you define the lifecycle of the cloud computing solution properly. Similarly, there are a lot of tools for the testing process, like CodeStar, CodeBuild, and CodePipeline: tools with which you can build, test, and deploy your code quickly. And finally, once everything is set and done, pick the analytics service for analyzing and visualizing data; with the analytics services we can start querying the data instantly and get results. If you want a visual view of the happenings in your environment, you can pick Athena, or EMR (Elastic MapReduce), and CloudSearch for analytics.

Thanks, guys. Now we have Samuel and Rahul to take us through the full course, in which they will explain the basic framework of Amazon Web Services and explore all of its important services like EC2, Lambda, S3, IAM, and CloudFormation. We'll also talk about Azure and some of its popular services.

Hello everyone, let me introduce myself as Sam, a multi-platform cloud architect and trainer, and I'm so glad and equally excited to talk and walk you through this session about what AWS is, about some of its services and offerings, and about how companies benefit by migrating their applications and infra into AWS. So what's AWS? Before that, let's talk about how life was without any cloud provider, in this case without AWS. Let's walk back and picture how things were back in 2000, which is not so long ago, but a lot of changes, a lot of changes for the better, have happened since that time. Back in 2000, a request for a new server was not a happy thing at all, because a lot of money, a lot of validations, and a lot of planning were involved in getting a server online or up and running. And even after we finally got the server, it was not all said and done: a lot of optimization needed to be done on that server to make it worth it and get a good return on investment. Even after we optimized for a good return on investment, the work was still not done; there would often be frequent increases and decreases in capacity, and even news about our website getting popular and getting more hits was a bittersweet experience, because it meant I needed to add more servers to the environment, which meant it was going to cost me even more.
But thanks to present-day cloud technology, if the same situation were to happen today, my new server is ready almost instantaneously. With the swift tools and technologies Amazon provides for provisioning my server instantly, adding any type of workload on top of it, making my storage and server secure, and creating durable storage where the data I store in the cloud never gets lost, Amazon has got our back.

So let's talk about what AWS is. There are a lot of definitions for it, but I'm going to put together one that's as simple and precise as possible. Let me iron that out: cloud still runs on hardware, and there are certain features in that cloud infrastructure that make AWS a cloud provider. We get all the services, all the technologies, all the features, and all the benefits that we get in our local data center, like security, compute capacity, and databases, and in fact we get even cooler features like content caching in various global locations around the planet. But out of all the features, the best part is that we get everything on a pay-as-you-go model: the less I use, the less I pay, and the more I use, the less I pay per unit. Very attractive, isn't it? And that's not all. The applications we provision in AWS are very reliable because they run on reliable infrastructure, very scalable because they run on on-demand infrastructure, and very flexible because of the design options available in the cloud.

Let's talk about how all this happened. AWS was launched in 2002, after the Amazon we know as the online retail store wanted to sell its remaining or unused infrastructure as a service, as an offering for customers to buy and use: selling infrastructure as a service. The idea sort of clicked, and AWS launched its first product in 2006, about four years after the idea was launched. In 2012 they held a big customer event to gather inputs and concerns from customers, and they were very dedicated to making those requests happen; that habit is still being followed today as AWS re:Invent. In 2015 Amazon announced AWS revenue of 4.6 billion dollars, and through 2015 and 2016 AWS launched products and services that help migrate customer services into AWS. There were migration products even before, but this is when a lot of focus was given to developing migration services, and in that same year, 2016, the revenue was 10 billion dollars. Last but not least, as we speak, Amazon has more than 100 products and services available for customers to benefit from.

All right, let's talk about the services available in Amazon. Let's start with this product called S3. S3 is a great tool for internet backup, and it's the cheapest storage option in the object storage category; not only that, the data we put in S3 is retrievable from the internet. S3 is really cool. We also have migration, data collection, and data transfer products, where we can not only collect data seamlessly but also monitor and analyze the data being received in real time; there are cool products like AWS data transfer services that help achieve that. And then we have EC2, Elastic Compute Cloud: a resizable computer where we can alter the size of the machine at any time, based on the need or based on the forecast.
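As a rough illustration of that resizability, here is a hedged boto3 sketch; the AMI ID and key pair name are placeholders for values from a real account. "Resizing" an EC2 machine means stopping it, changing the instance type, and starting it again:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small instance (placeholder AMI ID and key pair name).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t2.micro",
    KeyName="my-key-pair",             # hypothetical key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# "Resize" the machine: stop it, change the type, start it again.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "t2.large"},
)
ec2.start_instances(InstanceIds=[instance_id])
```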
Then we have Simple Notification Service, a system available in Amazon to update us with notifications through email or through SMS. Anything can be sent through email or SMS if we use that service: alarms, service notifications, things like that. Then we have security tools like KMS, the key management service, which uses AES-256-bit encryption to encrypt our data at rest. Then we have Lambda, a service for which we pay only for the time, in fractions of a second, it takes to execute our code; we're not paying for the infrastructure here, just the time the program takes to run. For a short program we'll be paying for milliseconds; for a bigger program, perhaps 60 or 120 seconds of compute. That's a lot simpler and a lot more cost-effective than paying for a server on an hourly basis, which is how a lot of other services are billed; that's cheap, but using Lambda is a lot cheaper than that. And then we have services like Route 53, a DNS service in the cloud, so I do not have to maintain a DNS account somewhere else and my cloud environment with AWS separately; I can get both in the same place.
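Since notifications came up, here is a minimal boto3 sketch of that SNS flow, with a hypothetical topic name and email address: create a topic, subscribe an email endpoint, and publish an alert to every subscriber.

```python
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Create a topic and subscribe an email address (placeholder values;
# the recipient must confirm the subscription before delivery starts).
topic_arn = sns.create_topic(Name="ops-alerts")["TopicArn"]
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",              # could also be "sms"
    Endpoint="ops@example.com",    # hypothetical address
)

# Publish an alarm-style notification to every subscriber.
sns.publish(
    TopicArn=topic_arn,
    Subject="Disk usage alarm",
    Message="Volume usage crossed 80% on web-01.",
)
```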
All right, let me talk to you about how AWS makes life easier, how companies have benefited by using AWS as the IT provider for their applications and infrastructure. Unilever is a company that had a problem, and they picked AWS as the solution to their problem. The company is spread across about 190 countries and relies heavily on digital marketing to promote its products, and its existing legacy, local environment proved unable to support its changing IT demands; they could not standardize their old environment. So they chose to move part of their applications to AWS because they were not getting what they wanted locally, and since then rollouts have been easy, provisioning new applications became easy, even provisioning infrastructure became easy, and they were able to do all that with push-button scaling. Needless to say, their backups are safe and can be securely accessed from the cloud as needed. That company is now growing along with AWS because of its swift speed in rolling out deployments, being able to access secure backups from various places, and generating reports, in fact useful reports, that help their business.

On the same lines, let me also tell you about Kellogg's and how they benefited by using Amazon. Kellogg's had a different problem, one of its kind. Their business model was very dependent on infrastructure that could analyze data really fast, because they were running promotions based on the analyzed data they received, so being able to respond to that analyzed data as soon as possible was critical, or vital, in their environment. Luckily, SAP running on a HANA environment is what they needed, and they picked that service in the cloud, and that sort of solved the problem. Now the company does not have to deal with maintaining legacy infra, heavy compute capacity, and a local database; all of that has moved to the cloud, or rather they are using the cloud as their IT service provider, and they now have a greater, more powerful IT environment that very much complements their business.

Hi there, I'm Samuel, a multi-platform cloud architect, and I'm very excited and honored to walk you through this learning series about AWS. Let me start the session with this scenario: let's imagine how life would have been without Spotify. For those hearing about Spotify for the first time, Spotify is an online music service offering instant access to over 16 million licensed songs. Spotify now uses the AWS cloud to store its data and share it with customers, but prior to AWS they had some issues. Imagine using Spotify before AWS: back then, users were often getting errors because Spotify could not keep up with the increased demand for storage every new day, and that led to users getting upset and cancelling their subscriptions. The problem Spotify faced at that time was that its users were present globally, accessing it from everywhere, with different latencies in their applications, and Spotify had a demanding situation where it needed to frequently catalogue songs released yesterday, today, and in the future. This was changing every new day, with songs coming in at a rate of about 20,000 a day, and back then they could not keep up with that requirement. Needless to say, they were badly looking for a way to solve this problem, and that's when they were introduced to AWS, and it was a perfect fit and match for their problem. AWS offered dynamically increasing storage, which is what they needed, and AWS also offered tools and techniques like storage lifecycle management and Trusted Advisor to properly utilize resources, so we always get the best out of what we use. AWS also addressed their concerns about being able to scale easily: yes, you can scale the AWS environment very easily. How easily, one might ask? Just a few button clicks. And AWS solved Spotify's problem.

Let's talk about how it can help with your organization's problem. Let's talk about what AWS is first, then how AWS became so successful, the different types of services AWS provides, and the future of cloud and of AWS in particular. Finally, we'll look at a use case where you will see how easy it is to create a web application with AWS. All right, what is AWS? AWS, or Amazon Web Services, is a secure cloud services platform. It has a pay-as-you-go billing model, with no upfront or capital cost. We'll talk about how soon a service becomes available: well, the service will be available in a matter of seconds. With AWS you can also do identity and access management, that is, authenticating and authorizing a user or a program on the fly. Almost all the services are available on demand, most of them instantaneously, and as we speak Amazon offers more than 100 services, and this list is growing every new week.

Now that would make you wonder how AWS became so successful. Of course, it's their customers. Let's talk about the list of well-known companies that have their IT environments in AWS. Adobe: Adobe uses AWS to provide multi-terabyte operating environments for its customers; by integrating its system with the AWS cloud, Adobe can focus on deploying and operating its own software instead of trying to deploy and manage the infrastructure. Airbnb is another company: a community marketplace that allows property owners and travelers to connect with each other for the purpose of renting unique vacation spaces around the world. The Airbnb community's user activities are conducted on the website and through iPhone and Android applications.
Airbnb has a huge infrastructure in AWS; they use almost all the services in AWS and benefit from them. Another example is Autodesk. Autodesk develops software for the engineering, design, and entertainment industries; using services like Amazon RDS (Relational Database Service) and Amazon S3 (Simple Storage Service), Autodesk can focus on developing its machine learning tools instead of spending that time managing infrastructure. AOL, or America Online, uses AWS, and with it they have been able to close data centers, decommission about 14,000 in-house and co-located servers, move mission-critical workloads to the cloud, extend their global reach, and save millions of dollars on energy resources. Bitdefender is an internet security software firm whose portfolio includes antivirus and anti-spyware products. Bitdefender uses EC2 and is currently running a few hundred instances that handle about five terabytes of data, and they use Elastic Load Balancing to balance the connections coming into those instances across availability zones, providing seamless global delivery of service. The BMW Group uses AWS for its new connected-car application, which collects sensor data from BMW 7 Series cars to give drivers dynamically updated map information. Canon's Office Imaging Products division benefits from faster deployment times, lower cost, and global reach by using AWS to deliver cloud-based services such as mobile printing; the division uses AWS services such as Amazon S3, Amazon Route 53, Amazon CloudFront, and Amazon IAM for its testing, development, and production services. Comcast is the world's largest cable company and the leading provider of internet service in the United States; Comcast uses AWS in a hybrid environment, and out of all the cloud providers, Comcast chose AWS for its flexibility and scalable hybrid infrastructure. Docker is a company helping redefine the way developers build, ship, and run applications; this company focuses on making use of containers for this purpose, and in AWS, the Amazon EC2 Container Service is helping them achieve it. The ESA, the European Space Agency: although much of ESA's work is done by satellites, some of the programs' data storage and computing infrastructure is built on Amazon Web Services. ESA chose AWS because of its economical pay-as-you-go model as well as its quick startup time. The Guardian newspaper uses a wide range of AWS services, including Amazon Kinesis and Amazon Redshift, which power an analytics dashboard editors use to see how stories are trending in real time. The Financial Times, FT, is one of the world's leading business news organizations, and they used Amazon Redshift to perform their analysis. A funny thing happened: Amazon Redshift performed so quickly that some analysts thought it was malfunctioning. They were used to running queries overnight, and they found the results were indeed correct, just much faster. By using Amazon Redshift, FT is supporting the same business functions at costs 80 percent lower than before. General Electric, GE, is at this moment, as we speak, migrating more than 9,000 workloads, including 300 disparate ERP systems, to AWS, while reducing its data center footprint from 34 sites to four over the next three years. Similarly, Harvard Medical School, HTC, IMDb, McDonald's, NASA, Kellogg's, and many more use the services Amazon provides and benefit from them.
This huge success and customer portfolio is just the tip of the iceberg. If we wonder why so many adopt AWS, and we let AWS answer that question, this is what AWS would say. Security: people adopt AWS because of the security and durability of the data, with end-to-end privacy and encryption of the data and storage experience. Experience: we can also rely on the AWS way of doing things, using AWS tools, techniques, and suggested best practices built upon the years of experience it has gained. Flexibility: there is greater flexibility in AWS that allows us to select the OS, language, and database. Ease of use: we can host our applications quickly in AWS, be it a new application or an existing application being migrated into AWS. Scalability: applications can be easily scaled up or scaled down depending on user requirements. Cost saving: we only pay for the compute power, storage, and other resources we use, and that too without any long-term commitments.

Now let's talk about the different types of services AWS provides. The services fall into categories like compute, storage, database, security, customer engagement, desktop and streaming, machine learning, developer tools, and so on, and if you do not see the service you're looking for, it probably is because AWS is creating it as we speak. Let's look at some that are very commonly used. Within compute services we have Amazon EC2, AWS Elastic Beanstalk, Amazon Lightsail, and AWS Lambda. Amazon EC2 provides compute capacity in the cloud; this capacity is secure and resizable based on the user's requirement. Look at this: the requirement from web traffic keeps changing, and behind the scenes in the cloud, EC2 can expand its environment to three instances and, during no load, shrink back to just one resource. Elastic Beanstalk helps us deploy and scale web applications, and it works with a number of programming languages. Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services built in Java, .NET, PHP, Node.js, Python, Ruby, and Docker, on familiar servers such as Apache, Passenger, and IIS. We can simply upload our code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning to load balancing to auto scaling to application health monitoring. Amazon Lightsail is a virtual private server that's easy to launch and easy to manage; it's the easiest way to get started with AWS for developers who just need a virtual private server. Lightsail includes everything you need to launch your project quickly on a virtual machine, like SSD-based storage, tools for data transfer, DNS management, and a static IP, and that too for a very low and predictable price. AWS Lambda has taken cloud computing services to a whole new level. It allows us to pay only for the compute time, with no need for provisioning and managing servers. AWS Lambda is a compute service that lets us run code without provisioning or managing servers; Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume; there is no charge when your code is not running.
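To show how little there is to manage, here is a minimal Python Lambda function sketch; Lambda invokes the handler with the request payload, and you are billed only while it runs. The `name` field in the event is a hypothetical example.

```python
# handler.py -- a minimal AWS Lambda function in Python.
import json

def lambda_handler(event, context):
    # 'event' carries the invocation payload, e.g. an API Gateway
    # request; 'name' here is a hypothetical field for illustration.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```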
Let's look at some storage services Amazon provides, like Amazon S3, Amazon Glacier, Amazon EBS, and Amazon Elastic File System. Amazon S3 is object storage that can store and retrieve data from anywhere: websites, mobile apps, IoT sensors, and so on can easily use S3 to store and retrieve data. It's object storage built to store and retrieve any amount of data from anywhere, with flexibility in managing data and the durability and security it provides; Amazon Simple Storage Service, or S3, is storage for the internet. And Glacier: Glacier is a cloud storage service used for archiving data and long-term backups; it's a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. Amazon EBS, Elastic Block Store, provides block storage volumes for EC2 instances. Elastic Block Store is highly available, reliable storage that can be attached to any running instance in the same availability zone, and EBS volumes attached to EC2 instances are exposed as storage volumes that persist independently of the lifetime of the instance. Amazon Elastic File System, or EFS, provides elastic file storage that can be used with AWS cloud services and with on-premises resources. It's simple, scalable, elastic file storage; it's easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. EFS is built to scale elastically on demand without disturbing applications, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.

Now let's talk about databases. The two major database flavors are Amazon RDS and Amazon Redshift. Amazon RDS really eases the process involved in setting up, operating, and scaling a relational database in the cloud. RDS provides cost-efficient and resizable capacity while automating time-consuming administrative tasks such as hardware provisioning, database setup, patching, and backups. It sort of frees us from managing the hardware and helps us focus on the application. It's also cost-effective and resizable, it's optimized for memory, performance, and input/output operations, and not only that, it automates most services like taking backups, monitoring, and so on. Amazon Redshift is a data-warehousing service that enables users to analyze data using SQL and other business intelligence tools. Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing business intelligence tools. It also allows you to run complex analytic queries against petabytes of structured data using sophisticated query optimization, and most of the results generally come back in seconds.
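As an illustration of "standard SQL coming back in seconds," here is a hedged sketch using the Redshift Data API via boto3; the cluster, database, user, and table names are all hypothetical.

```python
import time
import boto3

# The Redshift Data API runs SQL without managing JDBC/ODBC connections.
rsd = boto3.client("redshift-data", region_name="us-east-1")

resp = rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",   # hypothetical cluster
    Database="sales",                        # hypothetical database
    DbUser="analyst",                        # hypothetical DB user
    Sql="SELECT region, SUM(amount) FROM orders GROUP BY region;",
)

# Poll until the statement finishes, then fetch the result rows.
while True:
    status = rsd.describe_statement(Id=resp["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status == "FINISHED":
    for record in rsd.get_statement_result(Id=resp["Id"])["Records"]:
        print(record)
```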
All right, let's quickly talk about some more services AWS offers. There are a lot more services than we can cover, but let's look at a few that are widely used. AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers. Planning a data center migration can involve thousands of workloads that are often deeply interdependent; server utilization data and dependency mapping are important early first steps in the migration process, and the AWS Application Discovery Service collects and presents configuration, usage, and behavior data from your servers to help you better understand your workloads. Route 53 is a networking and content delivery service: a highly available and scalable cloud Domain Name System (DNS) service, and Amazon Route 53 is fully compliant with IPv6 as well. Elastic Load Balancing is also a networking and content delivery service; it automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, and it can handle the varying load of your application traffic within a single availability zone or across multiple availability zones. AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across multiple services in minutes, and auto scaling can be applied to web services and also to DB services. AWS Identity and Access Management enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups and use permissions to allow and deny their access to AWS resources; and moreover, it's a free service.
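Here is a hedged boto3 sketch of that allow/deny model; the user, policy, and bucket names are hypothetical. It creates a read-only policy for one S3 bucket, creates a user, and attaches the policy to the user:

```python
import json
import boto3

iam = boto3.client("iam")

# A hypothetical policy allowing read-only access to one S3 bucket.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-backup-bucket",
            "arn:aws:s3:::example-backup-bucket/*",
        ],
    }],
}

policy = iam.create_policy(
    PolicyName="ExampleBucketReadOnly",
    PolicyDocument=json.dumps(policy_doc),
)

# IAM denies by default; access must be granted explicitly like this.
iam.create_user(UserName="report-reader")
iam.attach_user_policy(
    UserName="report-reader",
    PolicyArn=policy["Policy"]["Arn"],
)
```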
Now let's talk about the future of AWS. Well, let me tell you something: cloud is here to stay, and here's what's in store for AWS. As years pass by, we're going to see a variety of cloud applications born, like IoT, artificial intelligence, business intelligence, serverless computing, and so on. Cloud will also expand into other markets like healthcare, banking, space, automated cars, and so on. As I was mentioning some time back, greater focus will be given to artificial intelligence, and eventually, because of the flexibility and advantages that cloud provides, we're going to see a lot of companies moving into the cloud.

All right, let's now talk about how easy it is to deploy a web application in the cloud. The scenario here is that our users like a product, and we need a mechanism to receive input from them about their likes and dislikes and give them the appropriate product as per their need. Though the setup and the environment may look complicated, we don't have to worry, because AWS has tools and technologies that can help us achieve it. We're going to use services like Route 53, CloudWatch, EC2, S3, and more, and all of these put together will give us an application that's fully functional and receives the information we need. Back to our original requirement: all I want is to deploy a web application for a product that keeps our users updated about the happenings and the new arrivals in the market, and to fulfill this requirement, here are all the services we need. EC2 is used for provisioning the computational power needed for this application; EC2 has a vast variety of families and types we can pick from, for the types of workloads and also for the intents of the workloads. We're also going to use S3 for storage; S3 provides any additional storage requirement for the resources or for the web application. We're going to use CloudWatch for monitoring the environment; CloudWatch monitors the application and the environment and provides triggers for scaling the infrastructure in and out. And we're going to use Route 53 for DNS; Route 53 helps us register the domain name for our web application. With all these tools and technologies put together, we're going to make an application that caters to our need.

All right, I'm going to use Elastic Beanstalk for this project. The name of the application is going to be, as you see, gsg-signup, and the environment name is gsg-signup-environment-1. Let me also pick a domain name; let me see if this name is available. Yes, it's available, so let me pick that. The application I have is going to run on Node.js, so let me pick that platform and launch. As you see, Elastic Beanstalk is going to launch an instance, set up the monitoring environment, create a load balancer as well, and take care of all the security features needed for this application. All right, look at that: I was able to go to the URL we chose, and a default page shows up, meaning all the dependencies for the software are installed and it's just waiting for me to upload the code, or specifically, the page required. So let's do that. Let me upload the code; I already have the code saved here. That's going to take some time. All right, it has done its thing, and now if I go to the same URL, look at that, I'm shown an advertisement page. If I sign up with my name, email, and so on, it's going to receive the information and send an email to the owner saying somebody has subscribed to the service; that's the default feature of this app. Look at that: an email to the owner saying somebody has subscribed to the app, with their email address and so on. Not only that, it also creates an entry in the database, and DynamoDB is the service this application uses to store data. There's my DynamoDB, and if I go to Tables and then Items, I can see that a user with the name samuel and such-and-such email address has shown interest in the preview of my site or product. So this is how I collect that information. Some more things about the infrastructure itself: it is running behind a load balancer. Look at that, it has created a load balancer, and it has also created an auto scaling group; that's a feature of the Elastic Beanstalk environment we chose. Now let's put this URL behind our DNS. You see this URL? It's not a fancy URL; it's an Amazon-given, dynamic URL. So let's do that: go to Services, go to Route 53, go to Hosted Zones, and there we can find the DNS name. All right, let's create an entry and map that URL to our load balancer, and create. Now, technically, if I go to this URL, it should take me to the application. All right, look at that: I went to my custom URL, and it's pointed to my application. Previously my application had a random URL, and now it has a custom URL.
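The demo relied on the app's built-in DynamoDB integration; as a rough sketch of what that data layer does, assuming a hypothetical table and attribute names, here is the boto3 equivalent of recording one sign-up and listing them all.

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("gsg-signup-table")   # hypothetical table name

# Record a new subscriber (attribute names are illustrative).
table.put_item(Item={
    "email": "samuel@example.com",
    "name": "samuel",
    "previewAccess": True,
})

# List everyone who signed up for the preview.
for item in table.scan()["Items"]:
    print(item["name"], item["email"])
```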
So what did we learn? We started the session with what AWS is; we looked at the features, tools, technologies, and products AWS provides; we looked at how AWS became very successful; we looked into the benefits and features of AWS in depth; we looked at some of the services AWS provides in general, and then we picked particular services and talked about them, like EC2, Elastic Beanstalk, Lightsail, Lambda, storage, and so on. Then we also looked at the future of AWS, what AWS holds in store for us, and finally we looked at a lab in which we created an application using Elastic Beanstalk, and all we had to do was a couple of clicks and, boom, an application was available that was connected to the database, to the Simple Notification Service, to CloudWatch, to storage, and so on.

What is Azure? What's this big cloud service provider all about? Azure is a cloud computing platform provided by Microsoft. It's basically an online portal through which you can access and manage resources and services. Resources and services here mean things like storing your data and transforming it using the services Microsoft provides. Again, all you need is the internet and the ability to connect to the Azure portal; then you get access to all of the resources and their services. In case you want to know more about how it's different from its rival AWS, I suggest you click on the top right corner and watch the AWS versus Azure video, so you can clearly tell how these two cloud service providers differ from each other. Now here are some things you need to know about Azure. It was launched on February 1st, 2010, which is significantly later than when AWS was launched. It's free to start and has a pay-per-use model, which means, as I said before, you pay for the services you use through Azure. One of the most important selling points is that 80 percent of Fortune 500 companies use Azure services, which means most of the bigger companies of the world actually recommend using Azure. Azure also supports a wide variety of programming languages: C#, Node.js, Java, and many more. Another very important selling point of Azure is the number of data centers it has across the world. It's important for a cloud service provider to have many data centers around the world, because it means they can provide their services to a wider audience. Azure has 42 regions, more than any other cloud service provider at the moment, and it expects to add 12 more in time, which brings the total number of regions it covers to 54.
Now let's talk about Azure services. Azure's services span 18 categories and more than 200 services, so we clearly can't go through all of them. It has services that cover compute, machine learning, integration, management tools, identity, DevOps, web, and so much more; you're going to have a hard time finding a domain Azure doesn't cover, and if it doesn't cover it now, you can be certain they're working on it as we speak.

So first, let's start with the compute services. First, Virtual Machines: with this service you get to create a virtual machine with a Linux or Windows operating system. It's easily configurable: you can add RAM, decrease RAM, add storage, remove it; all of it is possible in a matter of seconds. Now let's talk about the second service, Cloud Services: with this you can create an application within the cloud, and all of the work after you deploy the application is taken care of by Azure, which includes provisioning the application, load balancing, ensuring the application is in good health, and all the other things. Next up, let's talk about Service Fabric: with Service Fabric, the process of developing a microservice is greatly simplified. You might be wondering what exactly a microservice is: a microservice is basically an application that consists of smaller applications coupled together. Next up, Functions: with Functions you can create applications in any programming language you want. Another very important part is that you don't have to worry about any hardware components, what RAM you require or how much storage you need; all of that is taken care of by Azure. All you need to do is provide the code to Azure, and it will execute it; you don't have to worry about anything else.

Now let's talk about some networking services. First we have Azure CDN, the Content Delivery Network. The Azure CDN service is for delivering web content to users: high-bandwidth content that can be delivered to any person across the world. These are actually a network of servers placed in strategic positions across the world so that customers can obtain the data as fast as possible. Next we have ExpressRoute: with this you can connect your on-premises network to the Microsoft cloud, or to any of the services you want, through a private connection, so the only communication that happens is between your on-premises network and the service you want. Then you have Virtual Network: with Virtual Network, any of the Azure services can communicate with each other in a secure, private manner. Next we have Azure DNS: Azure DNS is a hosting service that allows you to host your DNS (Domain Name System) domains in Azure, so you can host your application using Azure DNS.

Now for the storage services. First we have Disk Storage: with this storage you're given a cost-effective choice of HDDs or solid-state drives to go along with your virtual machines, based on your requirements. Then you have Blob Storage, which is actually optimized to store massive amounts of unstructured data, which can include text data or even binary data. Next you have File Storage, a managed file storage accessible via the SMB protocol, the Server Message Block protocol. And finally you have Queue Storage: with Queue Storage you can provide durable message queuing for extremely large workloads, and the most important part is that it can be accessed from anywhere in the world.
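To make Blob Storage concrete, here is a minimal sketch using the azure-storage-blob Python package; the connection string, container name, and blob name are placeholders.

```python
# A minimal sketch with the azure-storage-blob package; the
# connection string and names below are placeholders.
from azure.storage.blob import BlobServiceClient

conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"
service = BlobServiceClient.from_connection_string(conn_str)

# Create a container and upload unstructured data as a blob.
service.create_container("reports")
blob = service.get_blob_client(container="reports", blob="q1.txt")
blob.upload_blob(b"quarterly report body", overwrite=True)

# Read it back from anywhere with the right credentials.
print(blob.download_blob().readall())
```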
Now let's talk about how Azure can be used. Firstly, for application development: it could be any application, mostly web applications. Then you can test the application to see how well it works, and you can host the application on the internet. You can create virtual machines, like I mentioned before, of any size or RAM you want. You can integrate and sync features. You can collect and store metrics, for example how the data behaves, what the current data looks like, and how you can improve upon it; all of that is possible with these services. And you have virtual hard drives, an extension of the virtual machines, where these services provide you a large amount of storage for your data.

Now let's talk about Azure at great length and breadth, and if you're looking for a video that talks and walks you through all the services in Azure, this could be one of the best videos you can find on the internet. Without any further delay, let's get started. Everybody likes a story, so let's get started with a story. In a city not so far away, a CEO had plans to expand his company globally and called one of his IT personnel for an IT opinion. This guy has been with the company for a long time and is very seasoned with the company's infra, and he nicely answered the question with what he foresaw: he said, "I have good news and bad news for us going global." He starts with the good news: "Sir, we're well on our way to becoming one of the world's largest shipping companies. The bad news, however, is that our data centers have almost run out of space, and setting up new ones around the world would be too expensive and very time-consuming." The IT person, let's call him Mike, explains the situation as he saw it, but the CEO had done some homework about how he was going to do it, and he answered: "Don't worry about that, Mike. I've come up with a solution for our problem, and it's called Microsoft Azure." Mike is a hard-working and honest IT professional, but he had not spent time learning the latest technologies, and he asked, very honestly, "Oh, how does it solve our problem?" So the CEO begins to explain Azure to Mike. He starts with what cloud computing is, then goes on to talk about Azure, the services offered by Azure, why Azure is better than the other cloud providers, the great companies that use Azure and how they benefited from it, and then he winds it all up with the use cases of Azure.

He begins his explanation by saying Microsoft Azure is known as a cloud service provider, and it works on the basis of cloud computing. Microsoft Azure was formerly known as Windows Azure, and it's Microsoft's public cloud computing platform. It provides a range of cloud services, including compute, analytics, storage, and networking. We can always pick and choose from these services to develop and scale new applications, or even plan on running existing applications, in the public cloud. Microsoft Azure is both a platform as a service and an infrastructure as a service. Let's now fade their conversation out and talk about what cloud computing is, the services offered by Azure, how Azure leads compared to other cloud service providers, and the companies that use Azure.
In simple terms, cloud computing is being able to access compute services like servers, storage, databases, networking, software, analytics, intelligence, and lots more over the internet, which is the cloud, with flexibility in the resources we use: any time I want a resource, I can provision one and it becomes available immediately, and any time I want to retire a resource, I can simply retire it and stop paying for it. We also typically pay only for the services we use, and this helps greatly with our operating cost: we run our infrastructure more efficiently and scale our environment up or down depending on business needs and changes. All the servers, storage, databases, and networking are accessed through a network of remote systems hosted on the internet, typically in the provider's data center, which is Azure in this case. We don't use any physical or on-premises server here; well, we still use physical servers and VMs hosted on physical hardware, but they're all in the provider's environment, and none of them sit on-premises or in our data center. We only access them remotely. It looks and feels the same, except for the fact that they're in a remote location: we access them remotely, do all the work remotely, and when we're done we can shut them down and stop paying for them.

Some of the use cases of cloud computing are creating applications and services; another is using cloud for storage alone. If there's one thing that ever grows in an organization, it's storage. Every new day there's a new storage requirement, and it's very dynamic and very hard to predict. If we go out and buy a big storage capacity up front, then until we use that capacity fully, we're wasting money on the empty storage. Instead, I can go for storage that scales dynamically in the cloud: put data in the cloud, pay only for what we're storing, and if next month we've deleted or flushed out some files or data, pay less for it. It's very dynamic storage, and a lot of companies benefit from storing data in the cloud because of that dynamic nature and the cheap cost that comes along with it. Also, a lot of providers like Azure give data replication for free; they promise an SLA along with the data we store in the cloud, so there's an SLA attached to it, and they provide data recovery as well: if something goes wrong with the physical disk where our data is stored, Azure automatically makes our data available from the redundant copies it has stored elsewhere, because of the SLA it wants to keep. Another use case for Azure is hosting websites and running blogs using the compute service. Be it storing music and letting your users stream it, Azure is a good place to store music and stream it, with the benefit of a CDN, a content delivery network, which allows us to stream video or audio files at great speed. With Azure, our audio or video application works seamlessly because content is delivered to the client with very low latency, and that improves the customer experience of our application. The Azure compute service is also a good place for delivering software on demand: there is a lot of software, packaged software, that we can buy through Azure, everything on a pay-as-you-go service model.
So any time we need a piece of software, we can go out and immediately buy it for, let's say, the next hour or two, use it, and then return it; we're not bound to any yearly licensing cost. Azure's compute services also have analytics available, with which we can analyze and get a good visualization of what's going on in our network, be it logs, performance, or metrics. Instead of looking at logs, searching logs, and doing manual work over the heaps and heaps of logs we've saved, the Azure analytics services help us get a good visual of what's going on in the network: where have we dropped, where have we increased, what's the major driver, what are the top ten errors we get in the server or the application, things like that can be easily gathered from the Azure analytics services.

Now, "cloud" is really a cool term for the internet. A good analogy: looking back, any time we drew a diagram and did not know how things were transferred, we simply drew a cloud. For example, a mail gets sent from a person in one country to a person in another country; a lot happens between the time you hit the send button and the time the other person reads it, and the simplest way of putting it in a picture is to draw a cloud, with one person sending the email on one end and the other person reading it on the other. So a cloud is a really cool term for the internet. That's some basics about cloud computing.

Now that we've understood cloud computing in general, let's talk about Microsoft Azure as a cloud service. Microsoft Azure is a set of cloud services to build, manage, and deploy applications on a network with the help of Microsoft Azure's frameworks. Microsoft Azure is a computing service created by Microsoft, basically for building, testing, deploying, and managing applications and services through a global network of Microsoft-managed data centers. Microsoft Azure provides SaaS, which is software as a service, PaaS, which is platform as a service, and IaaS, infrastructure as a service, and it supports many different programming languages, tools, and frameworks, including both Microsoft-specific and third-party software.

Now let me pick and talk about a specific service, for example management: Azure Automation provides a way for us to automate the manual, long-running, and frequently repeated tasks that are commonly performed in both cloud and enterprise environments. It saves us a lot of time, increases reliability, gives good administrative control, and can even schedule tasks to be performed automatically on a regular basis. To give you a quick history of Microsoft Azure: it was launched on February 1st, 2010, and it was called an industry leader for infrastructure and platform as a service by Gartner; Gartner is the world's leading research and advisory company. Microsoft Azure supports a number of programming languages like C#, Java, and Python. All these cool services we get to use, and we pay only for how much we use: for example, if we use a resource for an hour, we only pay for that hour, even for the costliest system available; if we use it for an hour, we pay for that hour and then we're done, with no more billing on the resource we've used. Microsoft Azure has spread itself across more than 50 regions around the world, so it's quite easy for us to pick a region and start provisioning and running our applications.
probably from day one, because the infrastructure, tools, and technologies needed to run our application are already available; all we have to do is commit the code or build an application in that particular region and launch it, and it goes live starting day one. Because there are 50-plus regions around the world, we can very carefully design our environment to provide low-latency services to our customers. In a traditional data center setup, a customer's request might have to travel all the way around the globe to reach a data center on the other side of the planet, which adds latency, and it's really not feasible to build a data center near each customer location because of the cost involved. But with Azure it's possible: Azure already has data centers around the world, and all we have to do is pick one and build an environment there; it's available starting day one, and the cost is considerably lower because we're using a public cloud instead of physical infrastructure to serve those customers from a nearby location. The services Azure offers are ever increasing: as of now, as we speak, there are 200-plus services, and they span different domains, platforms, and technologies available within the Azure portal. We're going to talk about that later in this section, so hold your breath; for now just know that Azure offers 200-plus services. Let's now walk through the different service areas in Azure, starting with artificial intelligence and machine learning, where we have a lot of tools and technologies. The wide variety of services available on Azure includes AI and machine learning plus analytics services that give us a good visual of how the data or the application is performing, categorize the data stored, and read from the logs; a variety of compute services, with VMs of different sizes and operating systems; different containers; different types of databases; a lot of developer tools; identity services to manage our users in the Azure cloud, where those users can be integrated or federated with external providers like Google, Facebook, or LinkedIn; IoT services, tools, and technologies; and management tools, because creating identities is one thing and managing those users on top of it is a totally different thing, so there are tools and technologies for that too. There are also cool services for data migration, which is now made simple; tools and technologies for mobile application development; networking services with which I can plan my own network in the cloud; security services, both Azure-provided and third-party, that I can implement on the Azure cloud; and a lot of storage options. That's just a glimpse of the big list of services available in the Azure cloud. Let's talk about the services in a specific domain; let's take compute, for example. Whenever we're building a new application or deploying an existing one, the Azure compute service provides the infrastructure we need to run and maintain that application. We can easily tap into
the capacity the Azure cloud has and scale our compute requirements on demand. We can also containerize our applications, choose Windows or Linux virtual machines, and take advantage of the flexible options Azure provides for migrating our VMs to Azure, and a lot more. These compute services also include a full-fledged identity solution, meaning integration with Active Directory in the cloud or on-premises. Let's look at some of the services this compute domain provides. The first is virtual machines. Azure Virtual Machines gives us the ability to develop and manage a virtualized computing environment inside Azure's cloud, and that too inside a virtual private network. We'll talk about virtual networks at a later point; for now, just know that there are a lot of services available in Azure compute that we can benefit from. We can always choose from a very wide range of compute options: we can choose the operating system, we can choose whether the system should live on-premises, in the cloud, or in both, and we can choose whether to bring our own operating system image with software attached to it or buy one from the Azure Marketplace. Those are just a few of the options available when we buy a compute environment. These compute environments are easily scalable, meaning we can scale from one VM instance to thousands of virtual machines in a matter of minutes, or simply put, in a couple of button clicks. All these services are available on a pay-for-what-you-use model: there is no upfront cost, and there is no real long-term commitment when it comes to using virtual machines in the cloud. Most of these services are billed on a per-minute basis, so at no point will we be overpaying for any of them. That's attractive, isn't it?
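To make this concrete, here is a minimal sketch of managing VMs programmatically with the Azure SDK for Python (the azure-identity and azure-mgmt-compute packages); the subscription ID, the resource group demo-rg, and the VM name web-01 are illustrative placeholders, not values from this course.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Authenticate with whatever credential is available (CLI login, env vars, managed identity).
credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, "<subscription-id>")

# List every VM in a hypothetical resource group along with its size.
for vm in compute.virtual_machines.list("demo-rg"):
    print(vm.name, vm.hardware_profile.vm_size)

# Deallocate one VM so its compute billing stops until we start it again.
compute.virtual_machines.begin_deallocate("demo-rg", "web-01").result()
```

Deallocating rather than just powering off is what releases the underlying hardware, which is how the "stop paying when you're done" model works in practice.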
Now let's talk about the Batch service. The Batch service is platform independent: regardless of whether you choose Windows or Linux, it's going to run fairly well, and with Batch we can take advantage of each environment's unique features. In short, the Batch service helps us manage the whole batch environment and also schedule the jobs. Azure Batch runs work at large scale, in parallel, and in high-performance-computing fashion, which is why batch jobs are highly efficient in Azure. When we run a batch workload, Azure Batch creates a pool of compute nodes, installs the applications we want to run, and then schedules jobs onto the individual nodes in those pools. As customers, there is no need for us to install a cluster, install software to queue the jobs, or manage and scale that infrastructure or software, because everything is managed by Azure. Batch is a platform as a service, and there is no additional charge for using it; the only charges we pay are for the virtual machines the service uses, the storage we consume, and of course the networking services it uses. To summarize Batch: we have a choice of operating system, and it scales by itself. The alternative to Batch would be queues, but with queues we'd have to pre-provision and pay for the infrastructure even when we're not using it, whereas with Batch we only pay for what we use, and Batch manages the application and the scheduling as a whole, as if they were just one thing. As the next thing in the compute domain, let's talk about Service Fabric. Service Fabric is a distributed systems platform that helps us package, deploy, and manage scalable and very reliable microservices and containers. How does it help? It helps developers and administrators avoid complex infrastructure problems so they can focus on implementing their workloads and taking care of their applications instead of spending time on infrastructure. Service Fabric provides runtime capabilities and lifecycle management to applications that are composed of microservices, with no infrastructure management at all, and with Service Fabric we can easily scale an application to tens, hundreds, or even thousands of machines, where "machines" here represent containers. Next in the compute domain, let's talk about virtual machine scale sets. A virtual machine scale set lets us create a group of identical, load-balanced VMs; I want to mention it again: it helps us manage a group of identical, load-balanced VMs. The number of VM instances in a scale set can increase or decrease in response to demand or in response to a schedule we define. The resources needed on a Monday morning are not the same as what's required on a Saturday or Sunday morning, and even within a day, the resources needed at the start of business hours are not the same as at noon or after eight or nine in the evening. Demand varies, and the scale set takes care of those varying infrastructure requirements across the day, the week, the month, or even the year. Scale sets also provide high availability to our applications and help us centrally manage, configure, and update a large number of VMs as if they were one thing. Now you might ask: virtual machines are enough, why would we need a virtual machine scale set? Like I said, a scale set gives us greater redundancy and improved performance for our applications, and those applications are accessed through a load balancer that distributes the requests to the application instances. In a nutshell, a virtual machine scale set helps us create a large number of identical VMs, lets us increase or decrease that number, lets us centrally manage, configure, and update a big group of VMs, and is a great fit for big data and container workloads.
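As a small illustration of "increase or decrease the number of instances", here is a sketch that scales a scale set out by one instance using azure-mgmt-compute; the group and scale set names are hypothetical placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, "<subscription-id>")

# Read the scale set's current capacity, then ask for one more identical instance.
vmss = compute.virtual_machine_scale_sets.get("demo-rg", "demo-vmss")
vmss.sku.capacity += 1
compute.virtual_machine_scale_sets.begin_create_or_update(
    "demo-rg", "demo-vmss", vmss).result()
```

In a real deployment you would usually let autoscale rules or a schedule drive this number instead of setting it by hand, which is exactly the Monday-morning-versus-Sunday-morning point above.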
Next in the compute domain, let's talk about Cloud Services. Azure Cloud Services is a platform as a service, and it's very developer friendly; in fact, it's designed for applications that need scalability and reliability while remaining very inexpensive to operate, and Azure Cloud Services provides all of that. So where does a cloud service run? It runs on a VM, but it's platform as a service: VMs by themselves are infrastructure as a service, and when we run applications on VMs through Cloud Services, it becomes platform as a service. Here's how to think about it. With infrastructure as a service, like VMs, we first create and configure the environment and then run applications on top of it, and the responsibility is ours end to end: deploying new patches, picking the operating system versions, making sure they're intact, all of that is managed by us. On the contrary, with platform as a service, it's as if the environment is already ready: all we have to do is deploy our application into it and manage the application, not the platform, because all the administration, like rolling out new versions of the operating system, is handled by Azure. We deploy the application and we manage the application, that's it; infrastructure management is handled by Azure. So what does Cloud Services provide? It provides a platform where we can write application code without worrying about hardware at all: simply hand over the code and the cloud service takes care of it. Responsibilities like patching, what to do if something crashes, how to update the infrastructure, and how to handle maintenance or downtime in the underlying infrastructure are all handled by Azure. It also provides a good testing environment: we can run the code and test it before it's actually released to production. I want to expand a bit on that. Azure Cloud Services gives us a staging environment for testing a new release without it affecting the existing release, which reduces customer downtime. We can run the application, test it, and anytime it's ready for production, all we need to do is swap the staging environment into production; the old production environment then becomes the new staging environment, where we can add more and swap back at a later point. It effectively gives us a swappable environment for testing our applications. Not only that, it gives us health monitoring and alerts: it helps us monitor the health and availability of our application, there's a dashboard that shows the key statistics all in one place, and we can set up real-time alerts for when service availability or a metric we care about degrades. Next in the compute domain, let's talk about Functions. Functions are serverless computing; a lot of the time when you hear about Azure being serverless, the reference is to Azure Functions, a serverless computing service hosted on Microsoft Azure. The main motive of Functions is to accelerate and simplify application development. Functions lets us run code on demand without needing to pre-provision or manage any Azure infrastructure. An Azure function is a script or a piece of code that runs in response to an event we want to handle, so we can write just the code we need for the problem at hand without worrying about the whole application or the infrastructure that will run that code. Best of all, with Functions we only pay for the time our code runs. Azure Functions lets users build applications using simple serverless functions in the programming language of their choice; the languages supported at the time included C#, F#, Node.js, Java, and PHP. We really don't have to worry about provisioning or maintaining servers here, and if the code requires more resources, Azure Functions provides the additional resources it needs; again, we pay only for the amount of time the function is running, not for the resources.
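Here is a minimal sketch of what an event-driven function looks like, using the Azure Functions Python programming model (Python support was added after the languages listed above, so treat this as an assumption about current runtimes); the route name hello is hypothetical.

```python
import azure.functions as func

app = func.FunctionApp()

# This code runs only when an HTTP request hits /api/hello; we pay only while it executes.
@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")
```

There is no server to provision here: the trigger (an HTTP request in this sketch, but it could be a queue message or a timer) is the only thing that causes the code, and the billing, to run.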
Moving on to a new domain, let's talk about containers in Azure. The container services allow us to quickly deploy a production-ready Kubernetes or Docker Swarm cluster. Now, what's a container? A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another, whether that's development to staging to production, one production environment to another, on-premises to cloud, or one cloud to another, and vice versa. Now imagine we had an option not to worry about the VM and just focus on the application; that's exactly what containers help us achieve. Container instances enable us to focus on applications without managing VMs, without learning new tools to manage those VMs, and without worrying about the deployment; the applications we create run in a container, and running in a container is what lets us achieve all of this. These containers can be deployed into the cloud with a single command if we're using the command-line interface, or a couple of button clicks if we're using the Azure portal, and containers are kept lightweight while being equally as secure as virtual machines.
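The "single command" deployment also has an SDK equivalent; here is a sketch using azure-mgmt-containerinstance that runs a public nginx image with no VM to manage. The resource names, region, and sizing are assumptions for illustration, and enum spellings can vary slightly between SDK versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, OperatingSystemTypes,
    ResourceRequests, ResourceRequirements)

credential = DefaultAzureCredential()
aci = ContainerInstanceManagementClient(credential, "<subscription-id>")

# One container: the public nginx image with 1 CPU core and 1.5 GB of memory.
container = Container(
    name="web",
    image="nginx:latest",
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)))

group = ContainerGroup(
    location="centralus",
    containers=[container],
    os_type=OperatingSystemTypes.LINUX)

aci.container_groups.begin_create_or_update("demo-rg", "demo-group", group).result()
```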
Next, let's talk about the container service itself, sometimes called Azure Kubernetes Service. The container is one thing, and the service used to manage the containers is another thing, and that's what this service does. Let's expand on this a bit. The service (AKS today, and its earlier incarnation Azure Container Service, ACS) provides a way to simplify the creation, configuration, and management of a cluster of virtual machines that are pre-configured to run containerized applications on top of them. Deploying the virtual machines that run the containers might take 15 to 20 minutes, and once they're provisioned we can manage them using a simple SSH tunnel into them. When this service runs applications, it runs them from Docker images, and what that means is a Docker image ensures the applications the container runs are fully portable: images are portable. The service also helps us orchestrate the container environment, and it ensures that the applications we run in containers can be scaled to thousands or even tens of thousands of containers. In a nutshell, moving an existing application into a container and running it using AKS is really easy; that's what it's all about, making application management and migration easy. Managing a container-based architecture, which as we discussed could mean tens of thousands of containers, is made simple using the container service, and even training a model on a large data set in a complex, resource-intensive environment is something AKS helps simplify. Next in the container domain, let's talk about the container registry. We spoke about registries a little when we spoke about Docker images. A container registry is a single place where we can store our container images, which are Docker images; when we use containers, it's Docker images we use. It's a central registry that eases container development by simplifying the storage and management of container images, and all kinds of images can be stored there, whether they're used in Docker Swarm or in Kubernetes; everything can be stored in the container registry in Azure. Anytime we store a container image, we also get an option for geo-replication, which means we can efficiently manage a single registry replicated across multiple regions. Geo-replication enables us to manage global deployments as one entity, assuming we have an environment that requires a global deployment: we update or edit one image, that image gets replicated to the replication regions we've set up, and that one edit effectively updates the image globally and provisions the applications worldwide; one edit, then replication, then provisioning of the applications globe-wide. This replication also helps with network latency, because anytime an application needs to deploy, it doesn't have to pull from a single source reachable only over a high-latency network; since there are replicas around the world, it pulls from the replica nearest to it. Geo-replication means we manage it as a single entity that's replicated across multiple regions of the globe. As the next thing in our learning, let's talk about Azure databases. Azure offers many flavors of database: SQL, NoSQL, and cache types, and we're going to look at them one by one. Azure SQL Database is a relational database as a service, managed by Azure, based on the Microsoft SQL Server database engine; we don't get to do, or need to do, much management of it ourselves. It's a high-performance, very reliable, and very secure database, and for that high reliability, performance, and security we really don't have to do anything; it comes along with the service. There are two things I definitely need to mention about Azure SQL Database: it's an intelligent service, fully managed by Azure, and it has built-in intelligence that learns app patterns and adapts to maximize the performance, reliability, and data protection of the application. That's something not found in many of the other cloud providers I'm aware of, so I thought I'd mention it: it uses built-in intelligence to learn the user's database patterns and helps improve performance and protection. Migrating or importing data is very easy with Azure SQL Database, so it can be readily and immediately used for analytics, reporting, and intelligent applications in Azure.
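Because it's built on the SQL Server engine, ordinary SQL Server client tooling works against it; here is a minimal sketch using the pyodbc driver, where the server, database, and credentials are placeholders you would substitute.

```python
import pyodbc

# Azure SQL Database speaks the normal SQL Server protocol; only the connection string changes.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<server>.database.windows.net,1433;"
    "Database=todo;Uid=<user>;Pwd=<password>;Encrypt=yes;")

cursor = conn.cursor()
cursor.execute("SELECT TOP 5 name FROM sys.tables")
for row in cursor:
    print(row.name)
```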
Next, let's talk about Azure Cosmos DB. Azure Cosmos DB is a NoSQL database service created to provide low latency for applications that scale dynamically and rapidly. Cosmos DB is a globally distributed, multi-model database, and it can be provisioned with the click of a button; that's all we've got to do if we need a Cosmos DB instance in Azure. It helps with scaling the database: we can elastically and independently scale throughput and storage across the database, in any of the Azure geographic regions. It provides good throughput, good latency, and good availability, and Azure promises a comprehensive SLA that no other database offers; that's the best part about Cosmos DB. Cosmos DB was built with global distribution in mind and with horizontal scale in mind, and all of this we can use while paying only for what we've used. And remember the difference between Cosmos DB and SQL Database: Cosmos DB supports NoSQL, whereas SQL Database is relational. A few other things about Cosmos DB: it allows users to use key-value, graph, column-family, and document data models, and it gives users a number of API options, like SQL, JavaScript, MongoDB, and a few others that you might want to check in the documentation at the time of reading. The best part is that we get all of this by paying only for the amount of storage and throughput required, and the storage and throughput can be elastically scaled based on the requirement of that hour.
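Here is a minimal sketch of the document model through the azure-cosmos Python SDK; the account URL, key, database, container, and partition key are all hypothetical placeholders.

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    "https://<account>.documents.azure.com", credential="<account-key>")

# Databases and containers are created on demand; throughput can be scaled independently later.
db = client.create_database_if_not_exists("todoapp")
items = db.create_container_if_not_exists(
    id="items", partition_key=PartitionKey(path="/userId"))

# Documents are schemaless JSON; 'id' plus the partition key field are the only requirements.
items.upsert_item({"id": "1", "userId": "mark", "task": "review storage options"})
```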
Our discussion of Azure databases wouldn't be complete without talking about Redis Cache. Redis Cache is a secure data cache, also sometimes called a messaging broker, that provides high-throughput, low-latency access to data for applications, and it's based on the popular open-source caching product Redis. What's the use case? It's typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data stores. Performance is improved by temporarily copying frequently accessed data to fast storage located very close to the application, and with Redis Cache that fast storage is in memory, instead of the data being loaded from the actual disk in the database itself. Redis Cache can also be used as an in-memory data structure store, a distributed non-relational database, and a message broker, so there's a variety of use cases, and by using Redis Cache, application performance improves by taking advantage of the low latency and high throughput the Redis engine provides. To summarize Redis Cache: data is stored in memory instead of on disk to ensure high throughput and low latency when the application needs to read it, and it provides various levels of scaling without any downtime or interference. Redis Cache is backed by the Redis server and supports strings, hashes, linked lists, and various other data structures. Now let's talk about security and identity services. Identity management in particular is the process of first authenticating and then authorizing security principals, and it also involves controlling information about those principals. You might ask: what's a principal? Identities, or principals, are services, applications, users, groups, and a lot more. The specialty of this identity management is that it not only helps authenticate and authorize principals in the cloud; it also helps authenticate and authorize principals and resources on-premises, especially when you run a hybrid cloud environment. These identity management services and features give us additional levels of validation: identity management can provide multi-factor authentication, access policies based on conditions that permit or deny accordingly, monitoring of suspicious activity along with reporting on it, and alerts for potential security issues, so that we can get involved and prevent a security incident from happening. So let's talk more about identity management. Some of the services under security and identity management: first, Azure Security Center. Security Center provides security management and threat protection across workloads in both cloud and hybrid environments. It helps control user access and application behavior to stop any malicious activity that may be present, it helps us find and fix vulnerabilities before they can even be exploited, it integrates very well with analytics methods that give us the intelligence to detect attacks and prevent them before they happen, and it works seamlessly with hybrid environments, so you don't need one policy for on-premises and another for the cloud; it's a unified service for both. The next service in security and identity is Key Vault. Key Vault is a service that helps safeguard cryptographic keys and any other secrets used by our cloud applications and services; in other words, Azure Key Vault is a tool for securely storing and accessing the secrets of the environment. A secret is anything you want very tightly controlled access to, like certificates or passwords. If I tell you what Key Vault solves, that will explain what Key Vault is. Key Vault is used in secrets management: it helps securely store tokens, passwords, and certificates. It helps in key management: it really helps in creating and controlling the encryption keys we use to encrypt data. And it helps in certificate management: it lets us easily provision, manage, and deploy public and private SSL/TLS certificates in Azure, and a lot more. In a nutshell, Key Vault gives users the ability to provision new vaults and keys in a matter of minutes, all with a single command or a couple of button clicks, and it helps users centrally manage their keys, secrets, and policies.
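As a taste of the secrets-management side, here is a minimal sketch using the azure-keyvault-secrets package; the vault name and the secret itself are placeholders for illustration.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
secrets = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net", credential=credential)

# Store a secret, then read it back; access is gated by Key Vault's own access policies.
secrets.set_secret("db-password", "s3cr3t-value")
print(secrets.get_secret("db-password").value)
```

The point of the pattern is that the application never keeps the password in its own config; it asks the vault at runtime, and the vault decides whether that identity is allowed to read it.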
Next on the list, let's talk about Azure Active Directory. Azure Active Directory helps us create intelligence-driven access policies to limit resource usage and manage user identities. But what does that mean? Azure Active Directory is a cloud-based directory and identity management service; it's actually a combination of core directory services, application access management, and identity protection. There are a lot of good things about it, but especially when you're running a hybrid environment you might wonder how Azure Active Directory is going to behave: it is built to work across on-premises and cloud environments, and not only that, it also works seamlessly with mobile applications. In a nutshell, Azure Active Directory acts as a central point of identity and access management for our cloud environment, and it provides good security solutions that protect against unauthorized access to our apps and data.
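To show what signing a user in against Azure Active Directory looks like from application code, here is a sketch using MSAL, the Microsoft Authentication Library for Python; the client ID and tenant ID are placeholders for an app registration you would create first.

```python
import msal

# A public client app registered in Azure Active Directory; IDs below are placeholders.
app = msal.PublicClientApplication(
    "<application-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>")

# Opens a browser sign-in (including MFA if policy demands it) for the current user.
result = app.acquire_token_interactive(scopes=["User.Read"])
if "access_token" in result:
    print("signed in; token acquired")
else:
    print("sign-in failed:", result.get("error_description"))
```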
Now that we've discussed security and identity, let's talk about the management tools Azure has to offer. Azure provides built-in management and account governance tools that help administrators and developers keep resources secure and very compliant, and again, it helps both on-premises and in the cloud. These management tools help us monitor the infrastructure and the applications, provision and configure resources, update apps, analyze threats, take backups of resources, build disaster recovery, and apply policies and conditions to automate our environment, and they're also used in cost-control methods; Azure management plays a wide role across the Azure services. First among the management tools comes Azure Advisor. Azure Advisor acts as a guide that educates us about Azure best practices: it makes recommendations that we can select on the basis of the category of service, and it also shows the impact the recommendation would have on our environment if we followed it. The recommendations are templatized, but Advisor also provides customized recommendations on the basis of our configuration and our usage patterns, and these recommendations aren't something it just throws out and leaves us hanging with; they're very easy to follow, easy to implement, and easy to see results from. You can think of Azure Advisor as a very personalized cloud consultant that helps you follow best practices to optimize your deployments: it analyzes our resources, our configurations, and our usage, and then recommends solutions that really help improve the cost effectiveness, performance, high availability, and security of our Azure environment. With Azure Advisor we get proactive, actionable, personalized best-practice recommendations; you don't have to be an expert, just follow the Advisor and your environment is going to be good. It helps improve the performance, security, and high availability of our environment, and it also helps bring down the overall Azure spend. The best part is that it's a free service that analyzes our Azure usage and recommends how to optimize our resources to reduce cost while at the same time boosting performance, strengthening security, and improving the overall reliability of our environment. Next on the list is Network Watcher. Network Watcher helps users identify and gain insight into the overall network performance and the health of the environment. It provides tools to monitor, diagnose, view metrics, and enable or disable logs, which means generate and collect logs, for resources in an Azure virtual network. With Network Watcher you can monitor and diagnose networking issues without even logging into the virtual machines: with just the logs, which are real time, we can come to a conclusion about what could be wrong in a certain resource, in a VM, or in a database. Not only that, it's used for analytics, to gain intelligence about what's happening in our network; we can gain a lot of insight into current network traffic patterns using the security group flow logs that Network Watcher offers. It also helps investigate VPN connectivity issues using detailed logs. You may or may not know that VPN troubleshooting usually involves two parties, the network administrator on this side and the network administrator on the other side, each checking logs at their own end and going back and forth. Network Watcher takes that to the next level: from the logs themselves we can easily identify which side is having the issue and suggest an appropriate fix. Next on the list is the Microsoft Azure portal. The Azure portal provides a single unified console to perform any number of activities: not only building, but managing and monitoring the web applications we build. The portal's appearance and layout can be organized to match our work style, and using the portal, users can control who gets to manage or access the resources, all from the portal itself. The portal also gives very good visibility into the spend on each resource, and if we customize it, we can identify spend by team, spend by day, spend by department, and so on; it gives us a good visual of where the money is being spent and where the bill is consumed within the Azure environment. Next on the list is Azure Resource Manager. Azure Resource Manager enables us to manage the resources an application uses: we use Resource Manager to deploy, monitor, and manage the resources of a solution as a group, as if they were one single entity. The infrastructure of an application is typically made of various components, which include virtual machines, storage, virtual networks, web apps, database servers, and other third-party services we might use in our environment, and by nature they are separate services, but with Azure Resource Manager we don't see them as different components or different entities; instead we see them as related services in a group that supports an application. We get the relationship between them: instead of letting them sprawl, Resource Manager identifies the relationships and helps us visually see them all as one single entity. Not only that, Resource Manager ensures that the resources we provision are deployed in a consistent state along with the rest of the application, and it helps users visually see their resources and how they are connected, which helps in managing the resources a lot better. Resource groups are also used to control who can access the resources within the user's organization, which gives you fine-grained control over who does and who doesn't get access.
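Here is a small sketch of the Resource Manager idea of treating a group of resources as one entity, using azure-mgmt-resource; the group name and region are illustrative placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
resources = ResourceManagementClient(credential, "<subscription-id>")

# Create (or update) a resource group, then list everything that lives inside it as one unit.
resources.resource_groups.create_or_update("demo-rg", {"location": "centralus"})
for item in resources.resources.list_by_resource_group("demo-rg"):
    print(item.type, item.name)
```

Deleting the group would delete everything in it in one operation, which is the "manage as a single entity" behavior described above.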
The last one in the management tools is Automation. Automation gives us the ability to automate, configure, and install updates across hybrid environments. It provides a cloud-based automation and configuration service, and it can be applied to non-Azure, on-premises environments as well. Some of the automation we can do includes process automation, update management, and configuration management features. Azure Automation provides complete control during the deployment and operation of workloads and resources, and also during their decommissioning. With automation we can automate any time-consuming, mundane task that's error prone because of human error; irrespective of how many times it runs, it's going to run the same way, and that really helps reduce the overall time and the overhead cost, because when things are automated they are free of human error, which means the application is not going to break and will keep running for a longer time. With Automation we can also build a good inventory of operating system resources and configuration items, all in one place, with ease, and that really helps in tracking changes and investigating issues. Say something breaks: because Automation is logging the configuration changes, it's easy to track, easy to identify what has changed lately that broke the environment, and then go back and fix it or roll it back, and that solves the problem. That summarizes the Azure management tools and services.
Now let's talk about the networking tools and services available in Azure. There's a variety of networking services Azure offers, and I'm sure this is going to be an interesting one. Let's begin our discussion with the content delivery network. The content delivery network, in short CDN, allows us to perform secure and very reliable content delivery. It also helps accelerate delivery, in other words reduce delivery times, also called load times, and it helps save bandwidth and increases the responsiveness of the application. Let's expand on this. A content delivery network is a distributed network of servers that can efficiently deliver web content to users. CDNs store cached content on global edge servers, also called POPs (point-of-presence locations), that are very close to the end users, so latency is minimized. It's like taking multiple copies of the data, storing them in different parts of the world, and delivering the data to whoever requests it from a server that's local to them. So CDN offers developers a global solution for rapidly delivering high-bandwidth content to users by caching the content at strategically placed locations very near to them. One advantage you get from a content delivery network is that it handles spikes and heavy loads very efficiently, and we can also run analytics against the logs the CDN generates, which helps in gaining good insight into the workload and the future business needs of that application. And just like a lot of other services, this is pay-as-you-go: use the resource first, then pay only for what you've used. The next one in networking is ExpressRoute. ExpressRoute is a circuit, a link, that provides a direct private connection to Azure, and because it's direct it gives a low-latency link to Azure, with good speed and reliability for the data transfer, for example from on-premises to Azure. Let's expand on this a bit. ExpressRoute is a service that provides a private connection between Microsoft's data centers and infrastructure on our premises or in a colocation facility we might have. ExpressRoute connections do not go over the public internet, and because they don't, they offer higher security, reliability, and speed, and lower latency, than connections over the internet. Because it's fast, reliable, and low latency, it can be used as an extension of our existing data center: users aren't going to feel the difference between accessing services on-premises or in the cloud, because latency is minimized as much as possible; users really won't see the difference. And because it's a private line and not a public internet line, it can be used to build hybrid applications without compromising privacy or performance. ExpressRoute can also be used for taking backups: imagine a backup going through the internet, that would be a nightmare, but if you use ExpressRoute for backups, that's going to be fast. And imagine recovering data from the cloud to on-premises through the internet in a time of disaster; that would be the worst nightmare. So ExpressRoute can be used not only for backups but also for recovering data, because with its good speed and low latency, recovering the data is going to happen a lot sooner. The next service we're going to discuss in networking is Azure DNS. Azure DNS allows us to host domain names in Azure, and these domains come with exceptional performance and availability. Azure DNS is used to set up and manage DNS zones and records for our domain names in the cloud; just like the name says, it's a service for DNS, and it provides name resolution by using Azure's own infrastructure. By using it, we can manage the DNS ourselves through the Azure portal, with the same credentials. Imagine having a DNS provider that doesn't even belong to our IT, with a separate portal just to manage the DNS environment; those days are gone, and now we can manage DNS in the very same Azure portal where we use the rest of the other services. Azure DNS also integrates very well with other DNS service providers. It uses a global network of name servers to provide fast responses to DNS queries, and these domains have additional availability compared to other domain service providers' availability promises; they're going to be more available than the rest because the servers are maintained by Microsoft, resolution happens sooner, and if a server fails, it helps re-sync with the rest of the servers, so Microsoft's global network of name servers ensures that our domain names are resolved properly and are available most of the time.
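Here is a minimal sketch of hosting a zone and a record in Azure DNS with azure-mgmt-dns; the zone name, record, and IP address are placeholders for illustration, not real infrastructure.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient

credential = DefaultAzureCredential()
dns = DnsManagementClient(credential, "<subscription-id>")

# Host a zone in Azure DNS, then point www at a placeholder public IP with a 300-second TTL.
dns.zones.create_or_update("demo-rg", "example.com", {"location": "global"})
dns.record_sets.create_or_update(
    "demo-rg", "example.com", "www", "A",
    {"ttl": 300, "a_records": [{"ipv4_address": "203.0.113.10"}]})
```

After this, you would delegate the domain to the name servers Azure assigns to the zone, and Microsoft's global name server network answers the queries from there.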
Next on the list in networking services is the virtual network; I'm sure this is going to be very interesting and that you're going to like it. Virtual networking in Azure allows us to set up our own private cloud inside the public cloud: it gives us an isolated and highly secure environment for our applications. Let's expand on this. An Azure virtual network helps us provision Azure virtual machines and lets them securely communicate with on-premises networks and with the internet. It also helps control the traffic that flows in and out of the virtual network, to other virtual networks and to the internet. An Azure virtual network, sometimes called a VNet, is a representation of our own network in the cloud; it's a logical isolation of the Azure cloud dedicated to our subscription. All our environments are provisioned in a VNet that is separate from other customers' VNets, so we have that logical separation. A virtual network can also be used to provision VPNs in the cloud, so we can connect the cloud and the on-premises infrastructure, and a lot more; especially in a hybrid environment, we will surely be using a virtual network, because hybrid requires a VPN for secure data transfer in and out of the cloud and of the on-premises environment. It gives us a boundary for all our resources: all the traffic between Azure resources logically stays within the virtual network. And here, designing the network is given over to us: you can pick the IP addressing, you can pick the routing, you can pick the subnets; a lot of freedom, or I would say a lot of control, is given over how the network is designed. It's not something already cooked that we only get to use; we can actually build the network from scratch, pick the IP ranges we like, decide which subnet needs to communicate with which, and so on. And like I said, if you're running a hybrid environment, you'll definitely need a virtual network, because it connects on-premises and the cloud in a secure fashion using a VPN.
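To illustrate the "you pick the addressing" point, here is a sketch that creates a VNet with one subnet using azure-mgmt-network; the names, region, and address ranges are all our own choices and purely illustrative.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network = NetworkManagementClient(credential, "<subscription-id>")

# A /16 private address space with one /24 subnet; the addressing plan is entirely ours.
network.virtual_networks.begin_create_or_update(
    "demo-rg", "demo-vnet",
    {"location": "centralus",
     "address_space": {"address_prefixes": ["10.0.0.0/16"]},
     "subnets": [{"name": "web", "address_prefix": "10.0.1.0/24"}]}).result()
```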
The last product we're going to discuss in networking is the load balancer. The load balancer provides applications with good availability and good network performance. How does it work? It works by load-balancing the traffic to and from virtual machines and cloud resources, and not only that, it also balances between cloud and cross-premises virtual networks. With the Azure load balancer we can scale our applications and create high availability for our services, which means our application will be available most of the time. If a server goes dead, that server does not get traffic. What happens if a dead server gets traffic? The user experiences downtime. What happens if it doesn't get traffic? The user won't experience any downtime, because the connection is shifted to a healthy server, so the user experiences uptime all the time. The load balancer supports inbound and outbound scenarios, it provides low latency and high throughput for data transfer, and it can scale the flow of TCP and UDP connections from hundreds to thousands to even millions, because we now have a load balancer sitting between the users and the application. How does it operate? The load balancer receives the traffic and distributes it to the backend pool of instances connected to it, according to the rules and health probes we set; that's how it maintains high availability. What does the load balancer help with? It helps create highly available, scalable applications in the cloud in minutes, it can be used to automatically scale the environment with increasing application traffic, and one feature of the load balancer is checking the health of the application instances: it stops sending requests to an unhealthy instance and shifts those connections to the healthy instances, so a user's connection never gets stuck with an instance that isn't healthy. That's all you need to know about the networking services. Now let's talk about the storage services, the storage domain, in Azure. Azure Storage in general is a Microsoft-managed service providing cloud storage that is highly available, secure, durable, scalable, and redundant, and because it's all managed by Azure, there isn't much of it we need to manage ourselves. Azure storage is a group of storage services that cover different needs, and the storage products include Azure Blobs, which is an object store, Azure Data Lake, Azure Files, Azure Queues, Azure Tables, and a lot more.
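All of these storage products live inside a storage account, so as a starting point, here is a sketch of creating one with azure-mgmt-storage; the account name, which has to be globally unique, and the other values are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

credential = DefaultAzureCredential()
storage = StorageManagementClient(credential, "<subscription-id>")

# A general-purpose v2 account with locally redundant storage; blobs, files,
# queues, and tables created later all live inside this one account.
storage.storage_accounts.begin_create(
    "demo-rg", "simplylearnstore01",
    {"location": "centralus",
     "sku": {"name": "Standard_LRS"},
     "kind": "StorageV2"}).result()
```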
But let's start our product discussion with Azure StorSimple. StorSimple is a hybrid cloud storage solution that can lower storage costs by as much as 60 percent compared with what you'd be spending without it. StorSimple is an integrated storage solution that manages storage tasks between on-premises devices and cloud storage. What I really like about Azure is that it's built with hybrid environments in mind; there are a lot of other cloud providers where running a hybrid environment is a big challenge, with compatibility issues and no good on-premises-plus-cloud solution for your needs, but with Azure, especially when it comes to storage, a lot of what we're going to see is clearly designed with hybrid environments in mind. So let's come back and talk about StorSimple. StorSimple is a very efficient, cost-effective, and very easily manageable SAN (storage area network) solution in the cloud. I thought I'd throw in this bit of background: the name comes from the StorSimple 8000 series devices, which are used with Azure data centers. StorSimple comes with storage tiering to manage the stored data across various storage media: the most current data is stored on-premises on solid-state drives, data that is used less frequently is stored on HDDs (hard disk drives), and data that needs to be archived, the very old, less frequently used data, is pushed to the cloud. You can see how this storage tiering happens automatically in StorSimple. Another cool feature of StorSimple is that it enables us to create on-demand and scheduled backups of our data and then store them locally or in the cloud, and these backups are taken in the form of incremental snapshots, which means they can be created and restored quickly; it's not a complete backup each time, it's an incremental backup. These cloud snapshots can be critically important in a disaster-recovery scenario, because the snapshots can be pulled in, placed on storage systems, and then they become the actual data, so recovery is faster if you have proper scheduled or frequent backups. StorSimple really helps in easing our backup mechanism, which means it eases our disaster recovery steps and procedures as well. It can be used to automate data management, data migration, data movement, and data tiering across the enterprise, both in the cloud and on-premises, and it improves compliance and accelerates disaster recovery for our environment. If there's one thing that's increasing every new day in our environment, it's storage, and StorSimple addresses that need: we really don't have to pre-plan storage in depth anymore, because storage is available in the cloud on a pay-as-you-go basis. Yes, there will still be a need for planning, but not as much as without the cloud or without StorSimple. The next service under storage that we'd like to discuss is the Data Lake Store. The Data Lake Store is a cost-effective solution for big data analytics in particular. Let's expand on this. Data Lake Storage is an enterprise-wide repository for big data analytics workloads; that's the major kind of service that depends on it. Data Lake enables us to capture data of any size, any type, and any ingestion speed, and it collects it all in one single place, for operational efficiency and for analytics purposes. Hadoop in Azure is very dependent on Data Lake Storage, and Data Lake Store is designed with analytics performance in mind: anytime you're doing analytics in the cloud, or anytime you're using Hadoop in Azure, the normal procedure, and the right storage to pick, is the Data Lake Store. It's also designed with security in mind, so anytime we use it we can rest assured we're using storage from a service built with security in mind. Behind the scenes, the Data Lake Store uses Azure Blob storage for global scale, durability, and performance. Now let's talk about Blob storage. Blob storage provides large amounts of storage and scalability; it is the object storage solution for the Azure cloud. Let's expand a bit on Blob storage. Azure Blob storage is Microsoft's offering for object storage: it's optimized for storing massive amounts of unstructured data, which could be text or binary data, and it's designed and optimized for rapid reads. If I explain the scenarios where we'd use Blob storage, that might help you get a good understanding of what Blob storage is. It's being used in many IT environments today to serve images or documents directly to the browser; it helps in storing files for distributed access, and a lot of services can fetch data from Azure Blob storage; it's currently helping users stream video and audio; it's being used for writing log files; it's being used to store backup data for restore at a later point in times of disaster recovery; it's used as archive storage in a lot of cloud IT environments; and it's widely used for storing analytics data, not only storing it but also running analytics queries against the data stored in it. So that's a wide set of use cases for Blob storage. In addition to all of that, it supports versioning: anytime somebody updates the data, a new version gets created, which means at any point I can roll back as and when needed. It provides a lot of flexibility for optimizing the user's storage needs, and it also supports tiering of the data, so when you explore it you'll find a lot of options to pick from that suit your unique storage needs. And like I said, it stores unstructured data, and that unstructured data is available to customers through a REST-based object storage interface.
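Here is a minimal sketch of the blob workflow, for example storing a music file like the streaming use case mentioned earlier, using the azure-storage-blob package; the connection string, container, and file name are placeholders.

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.create_container("media")

# Upload a local audio file as a blob, then list what the container holds.
with open("song.mp3", "rb") as data:
    container.upload_blob(name="song.mp3", data=data)
print([blob.name for blob in container.list_blobs()])
```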
The next product in the storage services is Queue storage. Queue storage provides durable queues for large-volume cloud services; it's a very simple, cost-effective, durable messaging queue for large workloads. Let's expand on Queue storage for a moment. Queue storage is a service for storing large numbers of messages that can be accessed from anywhere in the world through HTTP or HTTPS calls. A single queue message can be up to 64 KB in size, and a single queue can contain millions of such messages. How much can it hold in total? Up to the total capacity of the storage account itself, which makes it easy to work out how much it will hold. Azure Queue storage provides a messaging solution between applications and components in the cloud. What does it help with? It helps in designing applications for scale, and it helps in decoupling applications, so one application is less dependent, or sometimes not at all dependent, on another, because there's now a queue in between that connects, and at the same time decouples, the two environments; with a queue in between, both sides can scale up or scale down independently.
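Here is a sketch of that producer/consumer decoupling with the azure-storage-queue package; the connection string and the queue name orders are illustrative placeholders.

```python
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<storage-connection-string>", "orders")
queue.create_queue()

# Producer side: enqueue a message (each message can be up to 64 KB).
queue.send_message("process order 42")

# Consumer side: dequeue, handle, then delete so it is not redelivered.
message = queue.receive_message()
print(message.content)
queue.delete_message(message)
```

The producer and consumer never talk to each other directly, only to the queue, which is what lets each side scale independently.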
Next in the storage services is File storage; let's talk about that. Azure Files provides secure, simple, managed cloud file shares. With a file share in the cloud, we can extend our on-premises servers' performance and capacity, and a lot of the familiar file-share management tools can be used along with it. Let's expand a bit on File storage. Azure Files offers fully managed file shares in the cloud that can be accessed via the SMB protocol (Server Message Block). Azure file shares can be mounted concurrently by cloud and on-premises deployments, and a lot of operating systems are compatible with it: Windows is compatible, Linux is compatible, macOS is compatible. In addition to being accessible both from on-premises and from the cloud, it can also cache the data and keep it locally, so it's immediately available when needed; that's an additional, I would say advanced, feature it offers compared to the other file shares available in the market. Now let's talk about Table storage. Table storage is a NoSQL key-value store for quick development with large semi-structured data sets. One important thing to note about Table storage is that it has a flexible data schema, and it's also highly available. Let's expand a bit on Table storage. Anytime you want a schemaless, NoSQL-type store, Table storage is the one you'll end up picking: it provides key-attribute storage with a schemaless design. Table storage is very fast and very cost effective for many kinds of applications, and for the same amount of data it's a lot cheaper when you compare it with traditional SQL data storage. Some of the things we can store in Table storage are flexible data sets such as user data for web applications, address books, device information, and other types of metadata for our service requirements, and a storage account can hold any number of tables, up to the capacity limit of the account. That kind of flexibility is not possible with SQL; it's only possible with NoSQL, especially with Table storage in Azure.
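To make the schemaless key-value model concrete, here is a sketch using the azure-data-tables package, with an address-book example like the one just mentioned; the connection string and table name are placeholders.

```python
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<storage-connection-string>")
table = service.create_table_if_not_exists("addressbook")

# Entities are schemaless: only PartitionKey and RowKey are required,
# every other column can vary from entity to entity.
table.create_entity({
    "PartitionKey": "contacts", "RowKey": "1",
    "name": "Mark", "city": "Chicago"})

for entity in table.list_entities():
    print(entity["name"], entity["city"])
```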
that concluded the lengthy explanation of storage the ceo was giving his it person but this it person is not done with it yet he still has a question even after this lengthy discussion and his question was well there are a lot of other cloud providers available what made you specifically choose azure from the kind of question that he asked we can say that he is very curious and he definitely has asked a very thoughtful question so his ceo went on and started to explain the other capabilities of azure and how it outruns the rest of the cloud providers so he started his discussion again from a different angle explaining what the capabilities are and how azure is better than the competitors he started with explaining the platform as a service capabilities and i'm going to tell you what the ceo told his it person so in platform as a service the infrastructure management is completely taken care of by microsoft allowing users to focus completely on innovation no more infrastructure management responsibilities go and focus on innovation that's a fancy way of saying it when we buy platform as a service that's what we get we can contribute our time to innovation and not just maintaining the infrastructure and azure especially is .net friendly azure supports the .net programming language and it is built and optimized to work with old and new applications deployed using the .net programming framework so if your application is .net most of the time you would end up picking azure i mean if you try to compare most of the time you would end up picking azure as your cloud service provider and the security offering of azure is designed based on the security development lifecycle which is an industry leading assurance process when we buy services from azure it assures us that the environment is designed based on the security development lifecycle and like i mentioned many times in the past and i would like to mention again azure has thought well about hybrid environments where a lot of other cloud providers have failed so it's very easy to set up a hybrid environment to migrate the data or not migrate the data and still run a hybrid environment it works seamlessly because azure provides seamless connection across on-premises data centers and the public cloud it also has a very gentle learning curve if you look at the documentation it's picture rich and the documentation is neat and clear it would really encourage you to learn more to think and imagine and to easily get a grasp of how services work so it has a very gentle learning curve azure allows the utilization of technologies that several businesses have used for years so there is a big history behind it the certifications the documentation the stage-by-stage certification levels it's all a very gentle learning curve which is generally missing in other cloud service providers now this would really impress the ctos or people working in finance and budgeting if an organization is already using microsoft software they can definitely go and be bold and ask for a discount that can reduce the overall azure spending in other words the overall pricing of azure so that's the information that helped the ceo pick azure as his cloud service provider and then the ceo goes on and talks about the different companies that are currently using azure and they are definitely using azure for a reason like pixar boeing samsung easyjet xerox bmw 3m they are major multinational multi-billion dollar companies and they rely on run and operate their it in azure and this ceo has a thought that his it person is still not very convinced unless and until he shows him a visual of how easy things are in azure so he goes on and explains a practical application of azure which is exactly what i'm going to show you as well all right a quick project on building a .net application in an azure web app and making it connect to a sql database will solidify all the knowledge that we have gained so far so this is what we're gonna do i have an azure account open as you see logged in and everything is fresh here let me go to resource group there's nothing in there it's kind of fresh all right i'm logged in and this is what we're gonna do so we're gonna create an application like this which is nothing but a to-do list application which is going to run from the web app get information from us and save it in the database that's connected to it so you can already see it's a two-tiered application web and db all right so let me go back to my azure account the first thing is to create a resource group let's give it a meaningful name let's call it azure simply learn all right and it's going to be a free trial and the location pick one that's nearest to you or wherever you want to launch your application now for this use case i'm going to pick central us and create it's going to take a while to get created there you go it's created it's called azure simply learn now what do we need we need a web app and a separate sql database let's first get our web app running so go to app services and then click on add it's not the web app plus sql
that we want we want web app alone for this example so let's create a web app give it a quick name let's call it azure simply learn the subscription is free trial and i'm going to use my existing resource group the resource group that we created some time back it's going to run on windows and we're going to publish the code all set we can create it all right while this is running let me create my database right sql database create a database give it a name let's call it azure simply learn db put it in our existing resource group that we created it's going to be a blank database all right and it's going to require some settings like the name of the server the admin login the password that goes along and in which location this is going to be created the server name is going to be azure simply learn db that's the server name and what can the admin login name be let's see so let's call it simply learn that's my admin login name and let me pick a password click on create so what have we done so far we have created a web app and we have created a database in the resource group that we have created so if i go to resource group it's going to take some time before things show up so if i go to my resource group i only have one resource group as of now azure simply learn and there i have a bunch of resources being created and that's still being created right in the meantime i have my application right here in visual studio as of now right so once the infrastructure is set and ready in the azure console we're gonna go back to visual studio and feed these inputs in visual studio so the code knows what the database is and the credentials to log into the database stuff like that so we're going to feed that information into visual studio and by that we're actually feeding it into the application and then we're going to run it from there deploying this application takes quite a while we really got to be patient right now we have all the resources that we need for the application to run here is my database and here is my app service there's one more thing we need to do that is create a firewall exception rule right so the application is going to run from my local desktop and it's going to connect to the database right so let's add an exception rule by simply adding the client ip it's going to pick my ip the ip of the laptop i'm using as of now and it's going to create an exception to access the database so that's done
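if you'd rather script this setup than click through the portal here's a rough python sketch using the azure management sdks the subscription id names and ip address are placeholders and the exact parameter shapes may vary between sdk versions so treat it as a sketch rather than the demo's actual steps

```python
# rough sketch: scripting the resource group and the sql firewall exception
# assumes: pip install azure-identity azure-mgmt-resource azure-mgmt-sql
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.sql import SqlManagementClient

SUB_ID = "<subscription-id>"  # placeholder
cred = DefaultAzureCredential()

# resource group, standing in for the demo's "azure simply learn" group
ResourceManagementClient(cred, SUB_ID).resource_groups.create_or_update(
    "azuresimplylearn", {"location": "centralus"})

# firewall rule equivalent to the portal's "add client ip" button
SqlManagementClient(cred, SUB_ID).firewall_rules.create_or_update(
    "azuresimplylearn",    # resource group
    "azuresimplylearndb",  # sql server name (placeholder)
    "allow-my-laptop",
    {"start_ip_address": "203.0.113.7", "end_ip_address": "203.0.113.7"},  # your ip
)
```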
now we can go back to our visual studio i already have a couple of apps running or a couple of configurations pushed from visual studio i'm going to clean that up if you're doing it for the first time you may not need to do this all right so let's start from scratch this is very similar to how you would be doing it in your environment all right so we're going to select an existing azure app service now before that i have logged in as you can see i have logged in with my credentials so it's gonna pull a few things automatically from my azure account so in this case i'm gonna use an existing azure app so select existing and then click on publish all right if you recall these are the very same resources that we created a while back all right we have clicked on save and it's running kind of validating the code and it's going to come up with a url now initially the url is not going to work because we haven't mapped the application to the database that would be the next thing all right so the app has been published and it's running from my web app as of now it's going to throw an error like you see it's throwing an error that's because we haven't mapped the app and the db together so let's do that all right so let's go to server explorer this is where we're going to see the databases that we have created now let's quickly verify that go back to the resource group all right the appropriate resource group which is right here and here i have my database azure simply learn db all right it has some issues connecting to my database give me a quick moment let's fix it all right so we'll have to map the database into this application all right so let's go to the solution explorer click on publish and a page like this gets shown and from here we can go to configure here is our web app all right with all its credentials let's validate the connection number one all right and then click on next this is my db connection string right which the app is going to use to connect to my db now if you recall our db was azure simply learn db and that's not being shown here so let's fix that right click on configure and here let's put our db server's url now before that let's change this to sql server all right and then in here put the db's url so go back to azure here is my db server's name put that here right the username to connect to the server that's right here put that in and the password to connect to the server let's put that in all right it's trying to connect to our azure portal or the azure infrastructure and here is my database if you recall it's azure simply learn db that's the name of the database let's test the connection connection is good click on ok so now it's showing up correctly azure simply learn db that's the name of the database that'll be used now it's configured all right let's modify the data connections right let's map it to the appropriate database again all right so the name of our database is azure simply learn db and then it's going to be sql server that's the data source the username is simply learn and the password is what we have given in the beginning all right let's validate the connection it's good click ok now we're all set and ready to publish our application again now the application knows how to connect to the database we have educated it with the correct connection strings the dns name the username and the password for the application to connect to the database so visual studio is building this project and once it is up and running we'll be prompted with a url to connect and any time we give inputs to the url it's going to receive the input and save it in the database all right so here is my to-do list app and i can start creating to-do lists for myself all right so i have items already listed i can create an entry and these entries get stored in the database i can create another entry take the dog for a walk that's gonna get stored i can create another entry book tickets for the scientific exhibition and that's gonna be received and put in the database and that concludes our session so through this session we saw how i can use azure services to create a web app and connect that to the db instance and how those two services which are decoupled by default which are separate by default can use connection strings to make a connection between the app server and the database and be able to create a working app
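for reference here's roughly what that connection looks like from code a hedged python sketch using pyodbc where the server database user and the Todos table are placeholders standing in for the demo's names

```python
# hedged sketch: the kind of connection string the app is being configured with
# assumes: pip install pyodbc, plus the microsoft "ODBC Driver 18 for SQL Server"
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=azuresimplylearndb.database.windows.net,1433;"  # placeholder server
    "DATABASE=azuresimplylearndb;"                          # placeholder database
    "UID=simplylearn;PWD=<password>;"                       # admin login from the demo
    "Encrypt=yes;TrustServerCertificate=no;"
)
cur = conn.cursor()
# "Todos" is a made-up table standing in for whatever the sample app creates
cur.execute("INSERT INTO Todos (Description) VALUES (?)", "take the dog for a walk")
conn.commit()
conn.close()
```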
hi guys i'm rahul from simply learn and today i'd like to welcome you all to the greatest debate of the century today i am joined by two giants of the cloud computing industry they'll be going head to head with each other to decide who amongst them is better it's going to be one hell of a fight now let's meet our candidates on my left we have aws who's voiced by apeksha hi guys and on my right we have microsoft azure who's voiced by anjali hey there so today we'll be deciding who's better on the basis of their origin and the features they provide their performance on the present day and comparing them on the basis of pricing market share and options free tier and instance configuration now let's listen to their opening statements let's start with aws launched in 2006 aws is one of the most commonly used cloud computing platforms across the world companies like adobe netflix airbnb htc pinterest and spotify have put their faith in aws for their proper functioning it also dominates the cloud computing domain with almost 40 percent of the entire market share so far nobody's even gotten close to beating that number aws also provides a wide range of services that covers a great number of domains domains like compute networking storage migration and so much more now let's see what azure has to say about that azure was launched in 2010 and is trusted by almost 80 percent of all fortune 500 companies the best of the best companies in the world choose to work only with azure azure also provides its services to more regions than any other cloud service provider in the world azure covers 42 regions already and 12 more are planned now that the opening statements are done let's have a look at the current market status of each of our competitors this is the performance round here we have the stats for the market share of aws azure and other cloud service providers this is for the early 2018 period amazon web services takes up a whopping 40 percent of the market share closely followed by azure at 30 percent and other cloud services adding up to 30 percent this 40 percent indicates most organizations' clear interest in using aws we are number one because of our years of experience and the trust we've created among our users sure you're the market leader but we are not very far behind let me remind you more than 80 percent of the fortune 500 companies trust azure with their cloud computing needs so it's only a matter of time before azure takes the lead the rest of the 30 percent that isn't aws or azure accounts for the other cloud service providers like google cloud platform rackspace ibm softlayer and so on now for our next round the comparison round first we'll be comparing pricing we'll be looking at the cost of a very basic instance which is a virtual machine of two virtual cpus and eight gbs of ram for aws this will cost you approximately 0.0928 us dollars per hour and for the same instance in azure it will cost you approximately 0.096 us dollars per hour next up let's compare market share and options as i mentioned before aws is the undisputed market leader when it comes to the cloud computing domain taking up 40 percent of the market share by 2020 aws is also expected to produce twice its current revenue which comes close to 44 billion dollars not to mention aws is constantly expanding its already strong roster of more than 100 services to fulfill the shifting business requirements of organizations all that is great really good for you but the research company gartner has released a magic quadrant
that you have to see you see the competition is now neck and neck between azure and aws it's only a matter of time before azure can increase from its 30 percent market share and surpass aws this becomes more likely considering how many companies are migrating from aws to azure to help satisfy their business needs azure is not far behind aws when it comes to services as well azure's service offerings are constantly updated and improved upon to help users satisfy their cloud computing requirements now let's compare aws and azure's free offerings aws provides a significant number of services for free helping users get hands-on experience with the platform products and services the free tier services fall under two categories services that will remain free forever and others that are valid only for one year the always free category offers more than 20 services for example amazon sns sqs cloudwatch etc and the valid-for-a-year category offers approximately 20 services for example amazon s3 ec2 elasticache etc both types of services have limits on the usage for example storage number of requests compute time etc but users are only charged for using services that fall under the valid-for-a-year category after a year of their usage azure provides a free tier as well it also provides services that belong to the categories of free for a year and always free there are about 25 plus always free services provided by azure these include app service functions container service active directory and lots more and as for the valid for a year category there are eight services offered there's linux or windows virtual machines blob storage sql database and a few more azure also provides users with credits of 200 us dollars to access all their services for 30 days now this is a unique feature that azure provides where users can use their credits to utilize any service of their choice for the entire month now let's compare instance configuration the largest instance that aws offers is that of a whopping 256 gbs of ram and 16 virtual cpus the largest that azure offers isn't very far behind either 224 gbs of ram and 16 virtual cpus and now for the final round now each of our contestants will be shown facts and they have to give explanations for these facts we call it the rapid fire round first we have features in which aws is good and azure is better aws does not cut down on the features it offers its users however it requires slightly more management on the user's part azure goes slightly deeper with the services that fall under certain categories like platform as a service and infrastructure as a service next we have hybrid cloud where aws is good and azure is better okay although aws did not emphasize hybrid cloud earlier they are focusing more on the technology now azure has always emphasized hybrid cloud and has had features supporting it since the days of its inception for developers aws is better and azure is good of course it's better because aws supports integration with third-party applications well azure provides access to data centers that provide a scalable architecture for pricing both aws and azure are at the same level it's good for aws because it provides a competitive and constantly decreasing pricing model and in the case of azure it provides offers that are constantly experimented upon to provide its users with the best experience and that's it our contestants have finished giving their statements now let's see who won surprisingly nobody each cloud computing platform has its own pros and cons choosing the right one
is based entirely on your organization's requirements hi guys today we've got something very special in store for you we're going to talk about the best cloud computing platform available amazon web services rahul i think you said something wrong here the best cloud computing platform is obviously google cloud platform no it isn't aws has more than 100 services that span a variety of domains all right but google cloud platform has cheaper instances what do you have to say about that well i guess there's only one place we can actually discuss this a boxing ring so guys i'm apeksha and i will be google cloud platform and i'm rahul i'll be aws so welcome to fight night this is aws versus gcp the winner will be chosen on the basis of their origin and the features they provide their performance in the present day and comparing them on the basis of pricing market share and options the things they give you for free and instance configuration now first let's talk about aws aws was launched in 2004 and is a cloud service platform that helps businesses grow and scale by providing them services in a number of different domains these domains include compute database storage migration networking and so on a very important aspect about aws is its years of experience now aws has been in the market a lot longer than any other cloud service platform which means they know how businesses work and how they can contribute to a business growing also aws has over 5.1 billion dollars of revenue in the last quarter this is a clear indication of how much faith and trust people have in aws they occupy more than 40 percent of the market which is a significant chunk of the cloud computing market they have at least 100 services that are available at the moment which means just about every issue that you have can be solved with an aws service now that was great but now can we talk about gcp i hope you know that gcp was launched very recently in 2011 and it is already helping businesses significantly with a suite of intelligent secure and flexible cloud computing services it lets you build deploy and scale applications websites and services on the same infrastructure as google the intuitive user experience that gcp provides with dashboards and wizards is way better in all the aspects gcp has just stepped into the market and it is already offering a modest number of services and the number is rapidly increasing and the cost for a cpu instance or regional storage that gcp provides is a whole lot cheaper and you also get multi-regional cloud storage now what do you have to say on that i'm so glad you asked let's look at present day in fact let's look at the cloud market share of the fourth quarter of 2017.
this will tell you once and for all that aws is the leader when it comes to cloud computing amazon web services contributes 47 percent of the market share others like rackspace or verizon cloud contribute 36 percent microsoft azure contributes 10 percent the google cloud platform contributes four percent and ibm softlayer contributes three percent forty seven percent of the market share is contributed by aws do you need me to convince you any more wait wait wait all that is fine but we only started a few years back and have already grown so much in such a short span of time haven't you heard the latest news our revenue is already a billion dollars per quarter wait for a few more years and the world shall see and aws makes 5.3 billion dollars per quarter it's going to take a good long time before you can even get close to us yes yes we'll see now let's compare a few things for starters let's compare prices for aws a compute instance of two cpus and 8 gb ram costs approximately 68 us dollars per month now a compute instance is a virtual machine in which you can specify what operating system ram or storage you want to have for cloud storage it costs 2.3 cents per gb per month with aws you really don't want to do that because gcp wins this hands down let's take the same compute instance of two cpus with 8 gb ram it will cost approximately 50 dollars per month with gcp and as per my calculations that's about a 25 percent annual cost reduction when compared to aws talking about cloud storage costs it is only 2 cents per gb per month with gcp what else do you want me to say
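as a quick sanity check on those quoted numbers here's the arithmetic in python the prices are the ones stated in this debate not current list prices

```python
# arithmetic check on the quoted figures (as stated in this debate, not current prices)
aws_vm, gcp_vm = 68.0, 50.0    # usd/month for a 2 vcpu + 8 gb ram instance
print(f"gcp vm saving: {(aws_vm - gcp_vm) / aws_vm:.1%}")  # ~26.5%, close to the "25%" claim

aws_gb, gcp_gb = 0.023, 0.02   # usd per gb-month of object storage
print(f"1 tb stored: aws ${aws_gb * 1024:.2f} vs gcp ${gcp_gb * 1024:.2f} per month")
```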
let's talk about market share and options now aws is the current market leader when it comes to cloud computing now as you remember we contribute at least 47 percent of the entire market share aws also has at least 100 services available at the moment which is a clear indication of how well aws understands businesses and helps them grow yeah that's true but you should also know that gcp is steadily growing we have over 60 services that are up and running as you can see here and a lot more to come it's only a matter of time before we have as many services as you do many companies have already started adopting gcp as their cloud service provider now let's talk about things you get for free with aws you get access to almost all the services for an entire year with usage limits now these limits include an hourly or by the minute basis for example with amazon ec2 you get 750 hours per month you also have limits on the number of requests to services for example with aws lambda you have 1 million requests per month now after these limits are crossed you get charged standard rates with gcp you get access to all cloud platform products like firebase the google maps api and so on you also get 300 dollars in credit to spend over a 12-month period on all the cloud platform products and interestingly after the free trial ends you won't be charged unless you manually upgrade to a paid account now there is also the always free version for which you will need an upgraded billing account here you get to use a small instance for free and 5 gb of cloud storage any usage above the always free usage limits will be automatically billed at standard rates now let's talk about how you can configure instances with aws the largest instance that's offered is of 128 cpus and 4 tbs of ram now other than the on-demand method like i mentioned before you can also use spot instances now these are for situations where your application is more fault tolerant and can handle an interruption now you pay the spot price which is effective at a particular hour now these spot prices do fluctuate but are adjusted over a period of time the largest instance offered with google cloud is 160 cpus and 3.75 tbs of ram like the spot instances of aws google cloud offers short-lived compute instances suitable for batch jobs and fault-tolerant workloads they are called preemptible instances so these instances are available at eighty percent off the on-demand price hence they reduce your compute engine costs significantly and unlike aws these come at a fixed price google cloud platform is a lot more flexible when it comes to instance configuration you simply choose your cpu and ram combination of course you can even create your own instance types this way before we wrap it up let's compare on some other things as well telemetry it's a process of automatically collecting periodic measurements from remote devices for example gps gcp is obviously better because they have superior telemetry tools which help in analyzing services and providing more opportunities for improvement when it comes to application support aws is obviously better since they have years of experience under their belt aws provides the best support that can be given to the customers containers are better with gcp a container is a virtual process running in user space as kubernetes was originally developed by google gcp has full native support for the tool other cloud services are just fine tuning a way to provide kubernetes as a service also containers help with abstracting applications from the environment they originally ran in so the applications can be deployed easily regardless of their environment when it comes to geographies aws is better since they have a head start of a few years aws in this span of time has been able to cover a larger market share and more geographical locations now it's time for the big decision so who's it going to be yeah who is it going to be gcp or aws i think i'm going to go for... choosing the right cloud computing platform is a decision that's made on the basis of the user or the organization's requirements aws azure and gcp are three of the world's largest cloud service providers but how are they different from each other let's find out hey guys i'm rahul and i'll be representing amazon web services i'm chinmayi and i'll be representing microsoft azure and i am shruti and i'll be representing google cloud platform so welcome to this video on aws versus azure versus gcp talking about market share amazon web services leads with around 32 percent of the worldwide public cloud share azure owns up to 16 percent of the worldwide market share and gcp owns around 9 percent of the world's market share let's talk about each of these service providers in detail aws provides services that enable users to create and deploy applications over the cloud these services are accessible via the internet aws being the oldest of the lot was launched in the year 2006.
azure launched in 2010 is a computing platform that offers a wide range of services to build manage and deploy applications on the network using tools and frameworks launched in the year 2008 gcp offers application development and integration services for its end users in addition to cloud management it also offers services for big data machine learning and iot now let's talk about availability zones these are isolated locations within data center regions from which public cloud services originate and operate talking about aws they have 69 availability zones within 22 geographical regions this includes regions in the united states south america europe and asia pacific they are also predicted to have 12 more additions in the future azure available in 140 countries has over 54 regions worldwide grouped into six geographies these geographical locations have more than 100 data centers gcp is available in 200 plus countries across the world as of today gcp is present in 61 zones and 20 regions with osaka and zurich being the newly added regions now let's talk about pricing these services follow the pay as you go approach you pay only for the individual services you need for as long as you use them without requiring long-term contracts or complex licensing now on screen you can see the pricing for each of these cloud service providers with respect to various instances like general purpose compute optimized memory optimized and gpu now let's talk about the compute services offered first off we have virtual servers for aws we have ec2 it is a web service which eliminates the need to invest in hardware so that you can develop and deploy applications in a faster manner it provides virtual machines in which you can run your applications azure's virtual machines service is one of the several types of computing resources that azure offers azure gives the user the flexibility to deploy and manage a virtual environment inside a virtual network gcp's vm service enables users to build deploy and manage virtual machines to run workloads on the cloud now let's talk about the pricing of each of these services aws ec2 is free to try it is packaged as part of aws's free tier that lasts for 12 months and provides 750 hours per month of both linux and windows virtual machines azure's virtual machine service is a part of the free tier that offers this service for about 750 hours per month for a year the user gets access to windows and linux virtual machines gcp's vm service is a part of a free tier that includes a micro instance per month for up to 12 months
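to give a feel for how such a virtual machine is launched programmatically here is a hedged boto3 sketch the ami id is a placeholder and aws credentials are assumed to already be configured

```python
# hedged boto3 sketch: launch a free-tier-eligible vm on ec2
# assumes aws credentials are configured; the ami id is a placeholder
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder amazon linux ami
    InstanceType="t2.micro",          # covered by the 750 free hours per month
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```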
now let's talk about platform as a service or paas services for aws elastic beanstalk is an easy to use service for deploying and scaling web applications and services developed with java .net node.js python and much more it is used for maintaining capacity provisioning load balancing auto scaling and application health monitoring the paas backbone utilizes virtualization techniques where the virtual machine is independent of the actual hardware that hosts it hence the user can write application code without worrying about the underlying hardware google app engine is a cloud computing platform as a service which is used by developers for hosting and building apps in google data centers the app engine requires the apps to be written in java or python to store data in google bigtable and to use the google query language next let's talk about virtual private server services aws provides lightsail it provides everything you need to build an application or a website along with a cost effective monthly plan and a minimum number of configurations in simple words vm image is a more comprehensive image for microsoft azure virtual machines it helps the user create many identical virtual machines in a matter of minutes unfortunately gcp does not offer any similar service next up we have serverless computing services aws has lambda it is a serverless compute service that lets you run your code without provisioning and managing servers you only pay for the compute time you use it is used to execute backend code and scales automatically when required azure functions is a serverless compute service that lets you run event-triggered code without having to explicitly provision or manage infrastructure this allows users to build applications using simple serverless functions with the programming language of their choice gcp cloud functions make it easy for developers to run and scale code in the cloud and build event-driven serverless applications it is highly available and fault tolerant now let's talk about storage services offered by each of these service providers first off we have object storage aws provides s3 it is an object storage service that provides industry standard scalability data availability and performance it is extremely durable and can be used for storing as well as recovering information or data from anywhere over the internet blob storage is an azure feature that lets developers store unstructured data in microsoft's cloud platform along with storage it also offers scalability it stores the data in the form of tiers depending on how often the data is being accessed google cloud storage is an online storage web service for storing and accessing data on google cloud platform infrastructure unlike google drive google cloud storage is more suitable for enterprises it stores objects that are organized into buckets
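the object storage pattern all three providers share looks roughly like this in python here using boto3 and s3 with a placeholder bucket that is assumed to exist

```python
# minimal boto3 sketch of the shared object-storage pattern
# the bucket name is a placeholder and is assumed to already exist
import boto3

s3 = boto3.client("s3")
s3.upload_file("report.pdf", "my-example-bucket", "docs/report.pdf")   # store
s3.download_file("my-example-bucket", "docs/report.pdf", "copy.pdf")   # retrieve anywhere
```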
next we have block storage amazon provides ebs or elastic block store it provides high performance block storage and is used along with ec2 instances for workloads that are transaction or throughput intensive azure managed disk is a virtual hard disk you can think of it like a physical disk in an on-premises server but virtualized these managed disks allow users to create up to 10000 vm disks in a single subscription persistent storage is a data storage device that retains data after power to the device is shut off google persistent disk is durable and high performance block storage for gcp persistent disk provides storage which can be attached to instances running in either google compute engine or kubernetes engine next up we have disaster recovery services aws provides a cloud-based recovery service that ensures that your it infrastructure and data are recovered while minimizing the amount of downtime that could be experienced azure site recovery helps ensure business continuity by keeping business apps and workloads running during outages it allows recovery by orchestrating and automating the replication process of azure virtual machines between regions unfortunately gcp has no equivalent disaster recovery service next let's talk about database services first off for aws we have rds or relational database service it is a web service that's cost effective and automates administration tasks basically it simplifies the setup operation and scaling of a relational database microsoft azure sql database is a software as a service platform that includes built-in intelligence that learns app patterns and adapts to maximize performance reliability and data protection it also eases the migration of sql server databases without changing the user's applications cloud sql is a fully managed database service which makes it easy to set up maintain and administer relational postgresql mysql and sql server databases in the cloud hosted on gcp cloud sql provides a database infrastructure for applications running anywhere next we have nosql database services aws provides dynamodb which is a managed durable database that provides security backup and restore and in-memory caching for applications it is well known for its low latency and scalable performance azure cosmos db is microsoft's globally distributed multi-model database service it natively supports nosql and was created for low latency and scalable applications gcp cloud datastore is a nosql database service offered by google on the gcp it handles replication and scales automatically to your application's load with cloud datastore's interface data can easily be accessed by any deployment target
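and the nosql key-value pattern looks roughly like this a hedged boto3 sketch against dynamodb where the Devices table and its DeviceId partition key are made-up examples assumed to already exist

```python
# hedged boto3 sketch of the nosql key-value pattern on dynamodb
# "Devices" and its "DeviceId" partition key are made-up and assumed to exist
import boto3

table = boto3.resource("dynamodb").Table("Devices")
table.put_item(Item={"DeviceId": "sensor-17", "firmware": "1.4.2", "battery": 87})
item = table.get_item(Key={"DeviceId": "sensor-17"})["Item"]
print(item["firmware"])
```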
now let's talk about the key cloud tools for each of these service providers for aws in networking and content delivery we have aws route 53 and aws cloudfront for management we have aws cloudwatch and aws cloudformation for development we have aws codestar and aws codebuild for security we have iam and key management service for microsoft azure in networking and content delivery we have content delivery network and expressroute for management we have azure advisor and network watcher for development we have visual studio ide and azure blob studio for security we have azure security center and azure active directory for gcp we have the following tools for networking and content delivery we have cloud cdn and cloud dns for management we have stackdriver and gcp monitoring for development we have cloud build and cloud sdk and finally for security we have google cloud iam and google cloud security scanner now let's talk about the companies using these cloud providers for aws we have netflix unilever kellogg's nasa nokia and adobe pixar samsung ebay fujitsu emc and bmw among others use microsoft azure and as seen on your screens the companies that use gcp are spotify hsbc snapchat twitter paypal and 20th century fox let's talk about the advantages of each of these services amazon provides enterprise friendly services you can leverage amazon's 15 years of experience delivering large-scale global infrastructure and it still continues to hone and innovate its infrastructure management skills and capabilities secondly it provides instant access to resources aws is designed to allow application providers isvs and vendors to quickly and securely host your applications whether an existing application or a new saas-based application speed and agility aws provides you access to its services within minutes all you need to select is what you require and you can proceed you can access each of these applications anytime you need them and finally it's secure and reliable amazon enables you to innovate and scale your application in a secure environment it secures and hardens your infrastructure more importantly it provides security at a cheaper cost than on-premises environments now talking about some of the advantages of azure microsoft azure offers better development operations it also provides a strong security profile azure has a strong focus on security following the standard security model of detect assess diagnose stabilize and close azure also provides a cost-effective solution the cloud environment allows businesses to launch both customer applications and internal apps in the cloud which saves on it infrastructure costs hence it's opex friendly let's now look at the advantages of gcp google bills in minute-level increments so you only pay for the compute time you use they also provide discounted prices for long-running workloads for example you use the vm for a month and get a discount gcp also provides live migration of virtual machines live migration is the process of moving a running vm from one physical server to another without disrupting its availability to the users this is a very important differentiator for google cloud compared to other cloud providers gcp provides automatic scalability this allows applications to scale to as many cpus as needed google cloud storage is designed for 99.999999999 percent durability it creates server backups and stores them in a user-configured location let's talk about the disadvantages of each of these services for aws there's a limitation of the ec2 service aws places limitations on resources that vary from region to region there may be a limit to the number of instances that can be created however you can request for these limits to be increased secondly they have a technical support fee aws charges you for immediate support and you can opt for any of these packages developer which costs 29 dollars per month business which costs more than a hundred dollars and enterprise which costs more than fifteen thousand dollars it has certain network connectivity issues it also has the general issues that come when you move to the cloud like downtime limited control backup protection and so on however most of these are temporary issues and can be handled over time talking about some of the disadvantages of microsoft azure the codebase is different when working offline and it requires modification when working on the cloud the paas ecosystem is not as efficient as the iaas the azure management console is frustrating to work with it is slow to respond and update and requires far too many clicks to achieve simple tasks azure backup is intended for backing up and restoring data located on your on-premises servers to the cloud that's a great feature but it's not really useful for doing bare-metal restores of servers in a remote data center let's now look into the disadvantages of gcp so when it comes to most cloud providers the support fee is very minimal but in the case of gcp it is quite costly it is around 150 dollars per month for the most basic service similar to aws s3 gcp has a complex pricing schema also it is not very budget friendly when it comes to downloading data from google cloud storage now let us see the certifications that are available for aws and azure and how you can become a cloud engineer and an aws solutions architect rahul will talk about these topics we'll also cover some important aws and azure interview questions right now we're on the aws certification website whose link will be in the description and now we're going to talk about the types of aws certifications as you can see here there are three levels of aws certification there's the foundational level associate level and professional level certification now the foundational level certification only requires you to have a basic understanding of how the aws cloud works the aws certified cloud practitioner is optional for the architect path developer path and operations path it is mandatory for the specialty certifications like the advanced networking big data and
security certifications now the associate level certifications are mid-level certifications for a technical role now a professional certification is the highest level of certification that you can have for a technical role now you have the solutions architect certification for the architect path and the devops engineer certification for both the developer and operations paths so how do you decide which of these certifications is suitable for you so you've seen here that aws provides various certifications for a number of job roles sysops administrator solutions architect developer so you need to make the right choice taking into consideration the areas of your interest and the experience level that you have now we're going to talk about each of these certifications in detail so first let's talk about the aws certified cloud practitioner now we all understand that aws is a widely recognized product in the market so this certification helps you validate how well you know the aws cloud so this is just the basic understanding now it is optional for the developer path and the operations path i would suggest it's a good idea to start here because it forms a solid bedrock for all the other things that you're going to learn soon now more importantly it does not require any technical knowledge of other roles such as development architecture administration and so on so it's a great place to start for newcomers now you have the architect role certifications now this is for you if you are interested in becoming a solutions architect or a solution design engineer or someone who just works with designing applications or systems on the aws platform now first we have the aws certified solutions architect associate level certification now this certification is for you if you want to show off how well you can architect and deploy applications on the aws platform now it is recommended that you have at least a year of experience working with distributed systems on the aws platform at the same time it's also required that you understand the aws services and be able to recommend a service based on requirements you need to be able to use architectural best practices and you need to estimate the aws costs and know how you can reduce them next up you have the aws certified solutions architect professional level certification now you will not get this certification unless you're done with the aws certified solutions architect associate level certification this is to show off your technical skills and experience in designing distributed applications on the aws platform now this does require you to have two years of experience working with cloud architecture on aws at the same time it also requires you to be able to evaluate requirements and then make architectural recommendations you also need to provide guidance on the best practices of architectural design across a number of different platforms the developer level certifications are for you if you're interested in becoming a software developer now the aws certified developer associate certification is to test how well you know how to develop and maintain applications on the aws platform it does require you to have a year or more of hands-on experience designing and maintaining aws based applications like any software developer role it is necessary that you know in depth at least one high-level programming language it's also necessary that you understand the core aws services their uses and basic architectural best practices you need to be able to design develop and deploy cloud-based solutions
on the aws platform and you need to understand how applications can be created you need to have experience in developing and maintaining applications for a number of aws services like amazon sns dynamodb sqs and so on now for the aws certified devops engineer professional level certification note here that this certification is exactly the same as the one you have under the operations role so both of them are the same thing so here it tests your ability to create operate and manage distributed applications on the aws platform now it is necessary or rather mandatory to have the aws certified developer associate certification or the aws certified sysops administrator certification with two or more years of hands-on experience doing the same in aws environments it requires you to be able to develop code in at least one high-level language you need to be able to automate and test applications via scripting and programming and to understand agile or other development processes the operations certifications are for you if you want to become a sysops administrator systems administrator or someone in a devops role who wants to deploy applications networks and systems in an automatable and repeatable way the aws certified sysops administrator associate certification tests your knowledge in deployment management and operations on the aws platform now you need to have one or more years of hands-on experience with aws based applications you need to be able to identify and gather requirements then define a solution to be operated on aws you need to be able to provide guidance on best practices through the lifecycle of a project as well now the specialty certifications are for you if you're well versed in aws and want to showcase your expertise in other technical areas the aws certified big data certification showcases your ability to design and implement aws services which can help derive value from a large amount of complex data you are however required to have completed a foundational or associate level certification before you can attempt this you need a minimum of five years of hands-on experience in the data analytics field as well next we have the aws certified advanced networking certification this validates your ability to design and implement aws solutions as well as other hybrid it network architectures at scale this also requires you to have completed a foundational or associate level certification you need to have a minimum of five years of hands-on experience architecting and implementing network solutions and lastly we have the aws certified security certification it helps showcase your ability to secure the aws platform you're required to have an associate or cloud practitioner level certification a minimum of five years of it security experience and two years of hands-on experience securing aws workloads now say i wanted to schedule an examination so for example i want to do the solutions architect certification so first i would go there now here i can click on register now and the process continues or i can click on learn more by doing this again i can show you the examination here i can also get access to other data like the number of questions available the cost of the examination the portions i need to study and so on now let's talk about the solutions architect certification in a little more detail now this certification exam costs 150 us dollars and the practice exam costs 20 us dollars now here i can schedule the examination or download the exam guide i've already downloaded the exam
guide and here it is now this exam guide tells you about what you need to learn and what is expected from you here they want you to define a solution based on requirements and provide guidance in its implementation it is also recommended that you know how the aws services work have one year of hands-on experience with distributed systems on aws and can identify and define technical requirements and so on the rest is available in the exam guide and most importantly they tell you the main content domains and their weightages now we have five domains the first domain is to design resilient architectures which holds 34 percent of the weightage domain two is to define performant architectures three is to specify secure applications and architectures four is to design cost-optimized architectures and five is to define operationally excellent architectures now like you've seen here you've selected one certification and learnt about it in detail you can do the same for any of the other certifications you can press learn more download their exam guide and learn everything that you need to know hi guys i'm rahul from simply learn and today we will be talking about azure certifications now before we go into a little bit of detail let's first talk about what exactly an azure certification is so the azure certifications are actually examinations that are provided by microsoft these help you validate how well you understand the concepts of azure and how well you can work with it these are basically badges of honor that you can show off these also play a very important role when it comes to hiring and promotion you're obviously more likely to be selected if you are a certified professional now these certifications cover a wide range of domains and these certifications also work towards a specified role for example you have certifications that help you become a solutions architect or an azure administrator or an azure developer now if you need a little more convincing as to why you should be taking up an azure certification here are a few more reasons azure provides powerful data and artificial intelligence services that help create intelligent applications azure has more than 100 services that span a wide range of domains they also help satisfy a great number of requirements more than 80 percent of the fortune 500 companies use microsoft azure for their cloud computing requirements not to mention those organizations that use microsoft products can actually avail an enterprise agreement that gives them discounts on azure services and finally azure provides services across 42 regions in the world which is more than any other cloud service provider in the market right now now let's talk about the azure certifications so it's highly likely that you've heard of microsoft's more popular certifications the 70532 70533 and 70535 but microsoft is changing its approach towards certifications microsoft's new certifications aim to fit the individual into a particular role for example azure administrator associate azure developer associate azure solutions architect and the azure devops engineer expert this is what microsoft introduced in the ignite conference that took place on september 24th there microsoft introduced new certifications that fall under the az category now to become an azure administrator associate you need to have the az100 certification or the microsoft azure infrastructure and deployment and the az101 or microsoft azure integration and security to become an azure developer you need the az200 which is the microsoft
azure developer core solution certification and the az201 or the microsoft azure developer advanced solution certification finally to become a solutions architect you'll need the az300 or the microsoft azure architect technologies and the az301 or the microsoft azure architect design now these were all introduced on september 24th when the ignite conference took place now the old certifications the 532 533 and 535 will be discontinued from the 31st of december 2018. so what happens to the people who've actually done them that's why microsoft introduced the az102 202 and 302 transition certifications now we'll talk about that in a little bit so now let's go into all of the new certifications in detail so now let's talk about exam az100 or the microsoft azure infrastructure and deployment certification so this is basically part one of two when it comes to becoming an azure administrator this costs 165 us dollars now let's have a look at the syllabus for this exam some of the domains that you need to prepare for and their approximate weightages so first off you need to know how you can manage azure subscriptions and resources this involves managing subscriptions analyzing how resources are being used and consumed and managing resource groups secondly you have implementing and managing storage you need to know how you can create and configure storage accounts how you can import and export data to azure configure azure files and how you can implement azure backup then you have deploying and managing virtual machines you need to know how you can create and configure virtual machines for windows or linux you need to know how you can automate deployment of virtual machines you need to know how you can manage azure virtual machines and manage virtual machine backups then you need to know how you can configure and manage virtual networks you need to be able to create connectivity between virtual networks implement and manage virtual networking and configure name resolution and network security groups finally you need to know how you can manage identities you need to be able to manage azure active directory and you need to know how you can implement and manage hybrid identities the next step in becoming an azure administrator is to do the az101 or the microsoft azure integration and security certification this goes into a little more detail when it comes to becoming an azure administrator the certification costs 165 us dollars now let's have a look at some of the important topics that you need to know and the portions that you need to be well versed with to perform well in this examination first up we have evaluating and performing server migrations to azure then we have implementing and managing application services which involves configuring serverless computing managing app service plans and managing app services then you have implementing advanced virtual networking which involves implementing application load balancing implementing azure load balancer monitoring and managing networking integrating the on-premises network with the azure network and so on and finally securing identities you need to be able to implement multi-factor authentication manage role-based access control and implement azure active directory and privileged identity management so after you're done with az100 and 101 you become a microsoft certified azure administrator associate but what
happens to the people who have already done the implementing microsoft azure infrastructure solutions certification or the 70533 it's for these people that microsoft has the az102 or the microsoft azure administrator certification transition now this examination which costs 99 us dollars contains concepts from both az100 and az101 and this exam is only valid for people who've already completed the 70533 certification now if you have completed the 70533 i suggest you do this transition certification immediately as this will be discontinued by microsoft by the 31st of march 2019. now let's have a look at how you can become a microsoft certified azure developer associate to do this you need to complete two certifications the az200 or the microsoft azure developer core solution certification and the az201 or the microsoft azure developer advanced solution certification so now let's have a look at the az200 now this certification is part one of two when it comes to becoming an azure developer associate it costs 165 us dollars now this examination is in its beta phase which means that unlike normal examinations where you get your results as soon as you finish the examination this exam will take at least one or two weeks before you can get your result the questions are still being worked on and improved upon there's also a limitation on how many people can take the exam at the moment now let's have a look at some of the portions that you need to prepare for for this examination you need to be able to select an appropriate cloud technology solution based on your requirements this may involve a compute solution an integration or a storage solution next you need to be able to develop for cloud storage which involves developing solutions that involve storage tables file storage relational databases and so much more you need to be able to create platform as a service solutions which involves creating web applications mobile applications app services serverless functions and so much more and finally you need to be able to secure cloud solutions you need to implement authentication access control and secure data solutions now microsoft recommends that you have at least a year of experience working with microsoft azure creating applications with azure tools and technologies while at the same time having a solid understanding of all the phases of software development now let's have a look at the next step to becoming a microsoft azure developer you need to do the az201 or the microsoft azure developer advanced solution certification now let's have a look at that now this examination is also in its beta phase and will move to its final version in the next few months as of now it costs 165 us dollars now let's have a look at some of the portions that you need to prepare for for this examination you need to know how you can develop for an azure cloud model you need to develop for auto scaling develop for long-running tasks distribute a transaction and so much more you need to know how you can implement cloud integration solutions like managing apis using api management configuring a message based integration architecture developing an application message model and so much more and finally you need to know how you can develop for azure's cognitive services bot and iot solutions you need to be able to create and integrate bots create and implement iot solutions and integrate the azure cognitive services in an application after you're done with all of this you become a microsoft certified azure developer associate now
let's talk about az-202 now this is a transition examination for anyone who's done the 70-532 or the developing microsoft azure solutions certification now just like the other transition certification this costs 99 us dollars and it is in its beta phase now this is accessible only to people who have done the 70-532 and is available only for a limited period of time it will be discontinued from the 31st of march 2019. now this has concepts that are included in both az-200 and az-201 concepts like developing for cloud storage creating platform as a service solutions securing cloud solutions developing for an azure cloud model implementing cloud integration solutions and developing ai machine learning and iot solutions now after you're done with this you become a microsoft certified azure developer associate now let's find out how you can become a microsoft certified azure solutions architect expert for this you need to do two certifications the az-300 and the az-301 now let's have a look at the az-300 which is the microsoft azure architect technologies certification now the az-300 is still in its beta phase and costs 165 us dollars this is part one of two for becoming a microsoft certified azure solutions architect expert now let's talk about some of the topics that you need to prepare for for this examination you need to know how you can deploy and configure infrastructure which involves analyzing resource utilization and consumption creating and configuring storage accounts creating and configuring virtual machines for windows and linux automating deployment of virtual machines and so much more then you need to know how you can implement workloads and security like migrating servers to azure configuring serverless computing implementing application load balancing managing role-based access control implementing multi-factor authentication and so on then you need to architect cloud technology solutions you need to be able to select appropriate compute solutions integration solutions and storage solutions you need to be able to create and deploy applications for example creating web applications using paas creating an application or service that runs on service fabric and designing and developing applications that run in containers and so on then you need to implement authentication and secure data and finally you need to develop for the cloud which means you need to know how you can develop long-running tasks configure message-based integration develop for auto scaling implement distributed transactions and so on now this certification does require you to have expert level skills in at least one of azure administration azure development or devops you also need to have experience with the various areas of it operations like networking virtualization security and so on this is because azure solutions architects play a very important role in advising stakeholders and converting their requirements into scalable secure and reliable solutions so now the next step is to do the az-301 or the microsoft azure architect design certification now let's have a look at that now like all the other certifications this costs 165 us dollars and is still in its beta phase this is the final step in becoming an azure solutions architect expert now let's have a look at some of the topics that you need to prepare for for this certification firstly you need to know how you can determine workload requirements you need to know how to gather information and requirements to optimize the consumption strategy and to design an auditing and
monitoring strategy you need to design for identity and security like designing identity management designing authentication authorization and a monitoring strategy for identity and security then you need to know how you can design a data platform solution you need to design a data management strategy a data protection strategy document data flows and so on then you need to design a business continuity strategy for example design a site recovery strategy ensure high availability design a disaster recovery strategy and design a data archiving strategy then you need to design for deployment migration and integration you need to design deployments design migrations and design an api integration strategy and finally you need to know how you can design a storage strategy a compute strategy a networking strategy and so on after you're done with all of this you become a microsoft certified azure solutions architect expert so what about those people who've already done the 70-535 or the architecting microsoft azure solutions certification that's where the az-302 comes in now the az-302 is a transition certification which costs 99 us dollars this acts as a replacement for anyone who's done the 70-535 certification now this is again in its beta phase and will last only till march 31st 2019. once you finish the certification you get to become an azure solutions architect expert now this consists of topics included in both az-300 and az-301 concepts like determining workload requirements designing for identity and security designing a business continuity strategy implementing workloads and security implementing authentication and securing data and developing for the cloud now what we've mentioned in this video are the most important azure certifications now there are older certifications but we've not covered them because we're focusing on the more important role-based examinations that have been introduced recently the cloud tech services market is expected to grow 17.3 percent over the span of 2018 to 2019 which means a growth from 175.8 billion dollars to a whopping 206 billion dollars in 2019 and by 2020 it's expected that 90 percent of all organizations in the world will be using cloud services not to mention several organizations around the world report that using cloud computing services has enabled their employees to experiment a lot more with technologies like machine learning and artificial intelligence so here's what we'll be going through today firstly we'll be talking about who a cloud computing engineer is the steps you need to take to become a cloud computing engineer and cloud computing engineer salaries so first off who is a cloud computing engineer now a cloud computing engineer is an it professional who takes care of all the technical aspects of cloud computing be it design planning maintenance or support now a cloud computing engineer can take up a number of different career paths this could be that of a cloud developer security engineer full stack developer sysops administrator solutions architect cloud architect and so much more now let's have a look at some of the major cloud computing roles first off we have the solutions architect now these are individuals who are responsible for analyzing the technical environment in which they are going to produce the solutions the requirements and the specifications secondly they are required to select an appropriate technology that satisfies the set requirements they need to estimate and manage the usage and the
operational costs of the solutions they provide and they need to support project management as well as solution development next we have sysops administrators they are involved in deploying managing and operating highly scalable and fault tolerant systems they need to select an appropriate service based on compute security or data requirements they need to estimate and manage usage and operational costs and they need to be able to migrate on-premises workloads onto an appropriate cloud computing platform so among both of these roles there are certain requirements that remain constant now let's have a look at the steps you need to take to become a cloud computing engineer your first step is to gain proficiency in a cloud computing platform you need to become proficient in at least one of the three major cloud computing platforms be it aws azure or the google cloud platform now there are a huge number of resources that you can find on the internet it could be youtube videos articles virtual or physical classrooms and so much more now after you're done learning you can get certified by microsoft azure aws or the google cloud platform now for aws you have a number of different certifications which can be divided into categories the foundational which is just the basics the associate level certifications the professional level certifications and the specialty certifications similarly with microsoft azure you have certifications that enable you to become an azure developer associate an azure administrator associate an azure solutions architect expert and a devops engineer now most cloud computing platforms have a free tier that you can take advantage of these provide a number of free services for a period of time some of which are free forever so you can use these platforms to your advantage and do as much practice as you can on them now if you want to learn more about cloud computing you can also check out simplylearn's youtube channel then you can go to the playlist section right here and you can find comprehensive videos on a number of different cloud computing platforms aws and microsoft azure our aws tutorial videos talk about what exactly aws is how you can become an aws solutions architect amazon ec2 s3 some of the other services and so much more we also have detailed tutorials on azure which talk about what exactly azure is the certifications provided by azure and some of the services like machine learning azure active directory and so much more and now we're at step two being experienced in at least one programming language unlike general purpose programming languages like c c++ c sharp and so on cloud computing favors languages that are a lot more data oriented now some of the major programming languages that are used in cloud computing are go python clojure and java now as i said before there is a wealth of resources that you can learn from there are free websites that you can practice your code on like quickcode codecademy and several others there are also resources like youtube videos as well as the option of online or offline classes now we're at step 3 specialization you'll also need to be well versed with a number of key concepts these are storage and networking now with storage you need to know how data can be stored and where it can be accessed from you need to know how it can be accessed from multiple different resources you'll also need to have some experience with the storage services provided by azure and aws like amazon s3 in aws and the
appropriately named azure storage from microsoft azure with networking you need to have a strong understanding of the networking fundamentals as well as virtual networks next up we have virtualization and operating systems with virtualization you need to know how a virtual network which is just a combination of different virtual machines can be used to emulate different components in a particular system with operating systems you need to have a very strong understanding of operating systems like windows and linux next up we have security and disaster recovery now you need to understand how data applications as well as infrastructure can be protected from malicious attacks with disaster recovery you need to be prepared for any unexpected circumstance by making sure your systems are always safe and are regularly backed up to prevent any sort of data loss then we have web services and devops now you need to have a strong understanding of apis or application programming interfaces and web services some amount of experience with web design can also be of great help with devops you need to have a strong understanding of how cloud computing is able to provide a centralized platform on which you can perform testing deployment and production for devops automation moreover with devops you understand the synergy that the operations and development teams have with each other and how important that is for the success of any project and finally you're a cloud computing engineer now let's have a look at the salaries of cloud computing engineers in the united states cloud computing engineers earn around 116,000 dollars per annum in india a cloud computing engineer is paid approximately 6 lakh 66 thousand rupees per annum now how can simplylearn help you become a cloud computing engineer so let's head on to simplylearn's website here we have the cloud architect masters program now this covers a number of different courses all of which can help you get started on your journey to becoming a cloud computing engineer this master's program covers courses like aws technical essentials microsoft azure fundamentals aws developer associate and so much more it provides you 40 plus in-demand skills and 25 plus services provides you a master's certification it has 16 plus real life projects and helps you get a salary that ranges between 15 to 25 lakh rupees per annum it also covers a variety of tools like amazon ec2 azure data factory virtual machines and so much more so why don't you head on to simplylearn.com and get started on your journey to getting certified and getting ahead i'm here to walk you through some of the aws interview questions which we find are important and our hope is that you would use this material in your interview preparation and be able to crack that cloud interview and step into your dream cloud job by the way i'm a cloud technical architect and trainer and an interview panelist for cloud network and devops so as you progress in watching you're going to see that these questions are practical scenario based questions that test the depth of a person's knowledge in a particular aws product or a particular aws architecture so why wait let's move on all right so in an interview you would find yourself with a question that might ask you to define and explain the three basic types of cloud services and the aws products that are built based on them see here it's a very straightforward question just explain the three basic types of cloud services and when we talk about basic types of cloud services
it's compute obviously that's a very basic service storage obviously because you need to store your data somewhere and networking which actually connects the other services to your application these basics will not include monitoring these basics will not include analytics because they are considered optional they are considered advanced services you could choose a non-cloud service or product for monitoring and for analytics so they're not considered basic so when we talk about basics they are compute storage and networking and the second part of the question is to explain some of the aws products that are built based on them of course for compute ec2 is a major one that's the major share of the compute resource and then we have platform as a service which is elastic beanstalk and then function as a service which is lambda auto scaling and lightsail are also part of the compute services so the compute domain really helps us run any application and the compute service helps us in managing the scaling and deployment of an application again lambda is a compute service so the compute service also helps in running event initiated stateless applications the next one was storage a lot of emphasis is on storage these days because if there's one thing that grows in a network on a daily basis it's storage every new day we have new data to store process and manage so storage is again a basic and an important cloud service and the products that are built based on the storage services are s3 for object storage glacier for archiving ebs elastic block storage as a drive attachment for ec2 instances and efs the file share for ec2 instances so the storage domain helps in the following aspects it holds all the information that the application uses so it's the application data we can also archive old data using storage which would be glacier and any requirement for block storage can be met through elastic block store while object storage requirements are met through s3 talking about networking it's not enough to answer the question with just the names of the services and products it'll also be good if you could go in depth and explain how they can be used right so that actually proves you to be a person knowledgeable enough in that particular service or product so talking about the networking domain there's vpc you can't imagine networking without vpc in the cloud environment especially in the aws cloud environment and then we have route 53 for domain resolution or dns and then we have cloudfront which is an edge caching service that helps our customers reach the application with low latency so the networking domain helps with some of the following use cases it controls and manages the connectivity of the aws services within our account and we can also pick an ip address range if you're a network engineer or somebody who works in networks or is planning to work on networks you will soon realize the importance of choosing your own ip addresses for easy remembering so having the option to have your own ip address range in the cloud really really helps in cloud networking
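to make those three basic domains concrete, here is a minimal boto3 sketch that touches each of them, compute, storage and networking; the ami id, bucket name and region are placeholders for illustration, not values from the video:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# compute: launch a single small virtual machine (placeholder ami id)
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

# storage: create an s3 bucket to hold application data (name must be globally unique)
s3.create_bucket(Bucket="my-example-app-data-bucket")

# networking: list the vpcs in the account
print(ec2.describe_vpcs()["Vpcs"])
```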
the other question that gets asked would be the difference between an availability zone and a region the question generally gets asked to test how well you can differentiate and also correlate the availability zone and region relationship right so a region is a separate geographic area like us-west-1 which represents north california or ap-south-1 which represents mumbai so regions are separate geographic areas on the contrary an availability zone resides inside the region you shouldn't stop there you should go further and explain about availability zones availability zones are isolated from each other and some of the services will replicate themselves across the availability zones within a region so there is replication between availability zones but regions generally don't do replication between themselves the other question you could be asked is what is auto scaling and what do we achieve by auto scaling in short auto scaling helps us automatically provision and launch new instances whenever there is demand it not only helps us meet increasing demand it also helps in reducing resource usage when there is low demand so auto scaling also allows us to decrease the resource capacity as per the need of that hour now this helps the business not worry about putting more effort into managing or continuously monitoring the servers to see if they have the needed resources or not because auto scaling is going to handle it for us so the business does not need to worry about it and auto scaling is one big reason why people would want to go and pick a cloud service especially an aws service the ability to grow and shrink based on the need of the hour that's how powerful auto scaling is the other question you could get asked is what's geo-targeting in cloudfront now we know that cloudfront is caching and it caches content in amazon's caching servers worldwide the whole point is to give users worldwide access to the data from the nearest server possible that's the whole point in going for cloudfront then what do you mean by geo-targeting geo-targeting is showing customers specific content based on their location we can customize the content based on what's popular in that place the url is the same but we could change the content a little bit not the whole content otherwise it would be dynamic but we can change the content a little bit a specific file or a picture or a particular link in a website and show customized content to users in different parts of the globe so how does it happen cloudfront will detect the country where the viewers are located and it will forward the country code to the origin server and once the origin server gets that country code it will change the content and send it to the caching server where it gets cached and the user gets to view content which is personalized for the country they are in the other question you could get asked is the steps involved in using cloudformation or creating an environment from a cloudformation template we all know that if there is a template we can simply run it and it provisions the environment but there is a lot more going into it so the first step in moving towards infrastructure as code is to create the cloudformation template which as of now supports json and yaml file formats so first create the cloudformation template and then save the code in an s3 bucket the s3 bucket serves as the repository for our code and then from cloudformation call the file in the s3 bucket and create a stack and now cloudformation uses the file reads the file understands the services that are being called understands the order and understands how they are connected with each other
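as a rough sketch of those steps in code, assuming boto3 and a trivial template, you could create a stack like this; the stack name, bucket url and template are made up for illustration:

```python
import boto3

# a trivial cloudformation template; real templates would describe the whole environment
template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")

# create the stack directly from the template body...
cfn.create_stack(StackName="demo-stack", TemplateBody=template)

# ...or point cloudformation at a template saved in an s3 bucket, as described above
# cfn.create_stack(
#     StackName="demo-stack",
#     TemplateURL="https://my-code-bucket.s3.amazonaws.com/template.yaml",
# )

# wait until cloudformation has provisioned everything in the right order
cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")
```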
cloudformation is actually an intelligent service it understands the relationships based on the code it would understand the relationship between the different services set an order for itself and then provision the services one after the other let's say we have a service a and a service b and service b is dependent on service a cloudformation would provision resource a first and then provision resource b what happens if we inverse the order if we inverse the order resource b gets provisioned first and because its dependency is not available the provisioning can fail and cloudformation's default behavior is that if something is not provisioned properly or something is not healthy it rolls back chances are the whole environment provisioning will roll back so to avoid that cloudformation first provisions the services that are depended on by other services and then provisions the services that have the dependencies and if you're being hired for devops or if the interviewer wants to test your skill on the systems side this definitely would be a question on their list how do you upgrade or downgrade a system with near zero downtime now everybody's moving towards zero downtime or near zero downtime all of them want their applications to be highly available so the question would be how do you actually upgrade or downgrade a system with near zero downtime now we all know that i can upgrade an ec2 instance to a better ec2 instance by changing the instance type stopping and starting but stopping and starting is going to cause a downtime right so you shouldn't be thinking in those terms because that's the wrong answer specifically the interviewer wants to know how you upgrade a system with near zero downtime so upgrading a system with near zero downtime includes launching another system in parallel with a bigger ec2 instance type or bigger capacity and installing all that's needed if you're going to use an ami of the old machine well and good you don't have to go through installing all the updates and all the applications once you've launched it on the bigger instance locally test the application to see if it is working don't put it on production yet test the application to see if it is working and if the application works we can actually swap if your server is behind route 53 let's say all that you need to do is go to route 53 and update the record with the new ip address of the new server and that's going to send traffic to the new server now so the cutover is handled or if you're using a static ip you can remove the static ip from the old machine and assign it to the new machine that's one way of doing it or if you are using an elastic network interface you can remove the interface from the old machine and attach it to the new machine so that way we would get near zero downtime
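for the route 53 flavour of that cutover, here is a minimal boto3 sketch; the hosted zone id, record name and ip address are placeholders:

```python
import boto3

r53 = boto3.client("route53")

# point the dns record at the new, bigger server; UPSERT updates the record in place
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",           # placeholder hosted zone id
    ChangeBatch={
        "Comment": "cut over traffic to the upgraded instance",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "TTL": 60,                        # short ttl so the cutover propagates quickly
                "ResourceRecords": [{"Value": "203.0.113.25"}],  # new server's ip
            },
        }],
    },
)
```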
if you are being hired for an architect level role you should be worrying about cost as well along with the technology and this question tests how well you manage cost what are the tools and techniques we can use in aws to identify and know that we are paying the correct amount for the resources we are using or how do you get visibility into the aws resources you are running one way is to check the billing there is a place where you can check the top services that were utilized it could be free services and it could be paid services as well it's in the dashboard of the cost management console the table there shows the top five most used services so looking at it you can tell all right i'm using a lot of storage i'm using a lot of ec2 why is storage so high you can go and try to justify that and you will find out if you are storing things that shouldn't be stored and then clean it up why is compute capacity so high why is data transfer so high if you start thinking on those levels you'll be able to dig in clean up the unnecessary and save on your bill and there is the cost explorer service available which will help you view your usage pattern or view your spending for the past 13 months or so and it will also forecast for the next three months how much you will be using if your pattern stays like this that will give you visibility on how much you have spent and how much you will be spending if the trend continues budgets are another excellent way to control cost you can actually set up a budget all right this is how much i am willing to spend for this application for this team for this month or for this particular resource so you can put a budget mark and anytime it's nearing or it exceeds you would get an alarm saying well we're about to reach the allocated budget amount stuff like that that way you know how much the bill is going to be for that month or you can take steps to control the bill amount for that particular month so aws budgets is another very good tool that you could use cost allocation tags help in identifying which team or which resource has spent more in a particular month instead of looking at the bill as one list with no specifications and treating it as one expenditure list you can break it down and tie the expenditure to the teams with cost allocation tags the dev team has spent so much the production team has spent so much the training team has spent more than the dev and the production teams why is that now you'll be able to think on those levels only if you have cost allocation tags now cost allocation tags are nothing but the tags that you would put when you create a resource so for production servers you would create a production tag and associate those resources with it and at a later point when you pull up your bill it's going to show a detailed list this is the owner this is the group and this is how much they have used in the last month and you can move forward with your investigation and encourage or stop users from using more services based on the cost allocation tags
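if you want to pull that same per-team breakdown programmatically instead of from the console, here is a minimal cost explorer sketch with boto3; it assumes a cost allocation tag named team has already been activated, and the dates are arbitrary examples:

```python
import boto3

ce = boto3.client("ce")  # cost explorer

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-01-01", "End": "2019-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    # group spend by a cost allocation tag; 'team' is an assumed tag key
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"], "USD")
```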
the other famous question is are there any other tools or is there any other way of accessing aws resources other than the console the console is a gui right so in other words other than the gui how would you use the aws resources and how familiar are you with those tools and technologies the other tools that we can leverage to access the aws resources are of course putty you can configure putty to access aws resources like logging into an ec2 instance an ec2 instance does not always have to be logged into through the console you could use putty to log into an ec2 instance like a jump box a proxy machine or a gateway machine and from there you can access the rest of the resources so this is an alternative to the console and of course we have the aws cli which we can install on any of our linux or windows machines so that's points two three and four we can install the aws cli for linux for windows and also for mac and from our local machine we can run aws commands and access provision and monitor the aws resources the other way is to access the aws resources programmatically using the aws sdks and ide integrations like the aws toolkit for eclipse so these are a bunch of options we have to use the aws resources other than the console if you're interviewed by a company that focuses more on security and wants to use aws native services for their security then you would come across this question what services can be used to create a centralized logging solution the basic services we could use are cloudwatch logs store them in s3 and then use elasticsearch to visualize them and use kinesis to move the data from s3 to elasticsearch right so log management actually helps organizations track the relationship between operational and security changes and the events that got triggered based on those logs instead of logging into an instance or logging into the environment and checking the resources physically i can come to a fair conclusion by just looking at the logs every time there's a change the system would scream and it gets tracked in cloudwatch and then cloudwatch pushes it to s3 kinesis pushes the data from s3 to elasticsearch and i can do a time-based filter and i would get a fair understanding of what was going on in the environment for the past hour or whatever time window i want to look at so it helps in getting a good understanding of the infrastructure as a whole all the logs are getting saved in one place all the infrastructure logs are getting saved in one place so it's easy for me to look at it from an infrastructure perspective so we know the services that can be used and here is how they actually connect to each other it could be logs that belong to one account or logs that belong to multiple accounts it doesn't matter those three services are going to work fairly well together and they're going to pull the logs from the other accounts put them in one place and help us monitor so as you see you have cloudwatch here that tracks the metrics you can also use cloudtrail if you want to log api calls as well and push them into an s3 bucket so there are different types of logs flow logs getting captured from an instance application logs getting captured from the same vpc from a different vpc from the same account from a different account and all of them are analyzed using elasticsearch with the kibana client so step one is to deploy the elasticsearch cluster step two is to restrict access to the elasticsearch dashboard because it's valuable data you don't want just anybody to put their hands on it and access the data so restrict access to the elasticsearch dashboard and we could also use lambda to push the data from cloudwatch to the elasticsearch domain and then kibana is the graphical tool that helps us visualize the logs instead of looking at the logs as just statements or a bunch of characters in a bunch of files kibana helps us analyze the logs in a graphical chart or bar diagram format
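the first hop of that pipeline, getting cloudwatch logs into s3, can also be done as a one-off export rather than a streaming setup; a minimal sketch, with a made-up log group and bucket name, and times expressed as epoch milliseconds:

```python
import boto3

logs = boto3.client("logs")

# export a time window of a log group into an s3 bucket for archival or further processing
logs.create_export_task(
    logGroupName="/my-app/production",   # assumed log group name
    fromTime=1546300800000,              # 2019-01-01 00:00:00 utc, in epoch millis
    to=1548979200000,                    # 2019-02-01 00:00:00 utc
    destination="central-log-archive",   # assumed s3 bucket collecting all logs
    destinationPrefix="app-logs",
)
```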
again in an interview the interviewer is more concerned about testing your knowledge of aws security products especially on logging monitoring event management or incident management then you could have a question like this what are the native aws security logging capabilities now most of the services have their own logging like s3 has its own logging cloudfront has its own logging rds has its own logging vpc has its own logging and in addition there are account level logging options like cloudtrail and the aws config service so there are a variety of logging options available in aws like cloudtrail config cloudfront logging redshift logging rds logging vpc flow logs s3 object logging s3 access logging stuff like that and we're going to look at two services in specific first cloudtrail the very first product in the picture we just saw cloudtrail provides a very high level history of the api calls for the whole account and with that we can perform a very good security analysis of our account and these logs are delivered you can configure it they can be delivered to s3 for long term archival and based on a particular event it can also send an email notification to us saying hey just got this error thought i'll let you know stuff like that the other one is the config service the config service helps us understand the configuration changes that happened in our environment and we can also set up notifications based on those configuration changes so it records the cumulative changes that are made over a period of time so if you want to go through the lifetime of a particular resource what are the things that happened to it what are the things it went through they can be looked at using aws config
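to make the cloudtrail part concrete, here is a small boto3 sketch that searches the account's api call history by event name; the event name queried is just an example:

```python
import boto3

ct = boto3.client("cloudtrail")

# who terminated instances recently? lookup_events searches the account's api call history
resp = ct.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}
    ],
    MaxResults=10,
)

for event in resp["Events"]:
    print(event["EventTime"], event.get("Username", "-"), event["EventName"])
```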
all right the other question you could get asked if your role includes taking care of cloud security as well is about the native services that amazon provides to mitigate ddos which is distributed denial of service now not all companies would go with amazon native services but there are some companies which want to stick with amazon native services just to save themselves the headache of managing other software or bringing in a third-party tool to manage ddos they simply want to stick with proprietary amazon native services and a lot of companies are using amazon services to prevent ddos now if you already know what denial of service is well and good if you do not know then let's learn it now denial of service is a user maliciously making attempts to access a website or an application the user would create multiple sessions occupy all the sessions and not let legitimate users access the service so he's in turn denying the service for the legitimate users a quick picture review of what denial of service is now look at it these users instead of making one connection are making multiple connections and there are cheap software programs available that would trigger connections from different computers on the internet with different mac addresses so everything kind of looks legitimate to the server and it would accept those connections and keep the sessions open and the actual users won't be able to get in so that's denying the service for the actual users denial of service and distributed denial of service is generating those attacks from multiple places from a distributed environment so the native tools that help us prevent denial of service attacks in aws are aws shield and the web application firewall aws waf now they are the major ones they are designed to mitigate denial of service if your website is often bothered by denial of service then we should be using aws shield or aws waf and there are a couple of other tools that also help when i say that also help mitigating denial of service is not their primary job but you could use them for it route 53's purpose is to provide dns cloudfront's purpose is to provide caching the elastic load balancer's job is to provide load balancing vpc is to create and secure a virtual private environment but they also support mitigating denial of service though not to the extent you would get with aws shield and aws waf so aws shield and waf are the primary ones but the rest can also be used to mitigate distributed denial of service the other tricky question actually tests your familiarity with the regions and the services available in each region so when you're trying to provision a service in a particular region and you're not seeing the service in that region how do we go about fixing it or how do we go about using that service in the cloud it's a tricky question and if you have not gone through such a situation you can totally blow it you really need to have a good understanding of regions the services available in those regions and what to do if a particular service is not available the answer is not all services are available in all regions anytime amazon announces a new service they don't immediately publish it in all regions they start small and as and when the traffic increases and as and when it becomes more likeable to the customers they move the service to different regions so as you see in this picture within america north virginia has more services compared to ohio or north california so within north america north virginia is the preferred one similarly there are preferred regions within europe middle east and africa and preferred regions within asia pacific so anytime we don't see a service in a particular region chances are the service is not available in that region yet we've got to check the documentation and find the nearest region that offers that service and start using the service from that region now you might think well if i'm looking for a service in asia let's say in mumbai and it is not available why not simply switch to north virginia and start using it there you could but that's going to add more latency to your application so that's why we need to check for a region which is very near to the place where we want to serve our customers and find the nearest region that offers the service instead of always going back to north virginia and deploying the application in north virginia again there's a link in aws.com where you can go and look for the services available in the different regions and that's exactly what you're seeing here and if your service is not available in a particular region switch to the nearest other region that provides the service and start using the service from there
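one quick programmatic way to check this, based on the endpoint data bundled with the aws sdk, is boto3's get_available_regions; the service names here are just examples:

```python
import boto3

session = boto3.session.Session()

# regions where each service has an endpoint, according to the sdk's endpoint data
print(session.get_available_regions("ec2"))
print(session.get_available_regions("cloudformation"))
```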
with the coming of cloud a lot of companies have turned down their monitoring teams instead they want to go with the monitoring the cloud provides nobody wants to or at least many people don't want to go through the hassle of traditional noc monitoring at least new startups and new companies that are thinking of having a monitoring environment don't want to go with traditional noc monitoring instead they would like to leverage the aws monitoring available because it monitors a lot of stuff not just availability it monitors failures errors and it also triggers emails stuff like that so how do you actually set up a monitor for a website how do you set up a monitor to monitor the website metrics in real time in aws anytime you have a question about monitoring cloudwatch should strike your mind because cloudwatch is meant for monitoring it is meant for collecting metrics and for providing a graphical representation of what's going on in a particular network at a particular point in time so cloudwatch helps us monitor applications and using cloudwatch we can monitor state changes and not only state changes but also the auto scaling lifecycle events anytime more servers are added or there is a reduction in the number of servers because of less usage very informative messages can be received through cloudwatch and cloudwatch can now support scheduled events if you want to schedule anything cloudwatch has an event that would schedule an action all right a scheduled trigger is time based not incident based anything happening and then an action happening is incident based on the other hand you can simply schedule a few things time based so that's possible with cloudwatch so cloudwatch integrates very well with a lot of other services like sns the simple notification service for notifying the user or notifying the administrator and it integrates well with lambda to trigger an action anytime you are designing an auto healing environment cloudwatch can monitor and send an email if we are integrating it with sns or cloudwatch can monitor and based on what's happening trigger an event in lambda and that would in turn run a function till the environment comes back to normal so cloudwatch integrates well with a lot of other aws services all right so cloudwatch has three statuses green when everything is going good yellow when the service is degraded and red when the service is not available green is good so we don't have to do anything about it but anytime there is a yellow the setup in the picture we're looking at is actually calling a lambda function to debug the application and fix it and anytime there's a red alert it immediately notifies the owner of the application well the service is down and here is the report here are the metrics that i've collected about the service stuff like that
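as an illustration of wiring cloudwatch to sns the way this answer describes, here is a minimal alarm sketch; the metric choice, instance id and topic arn are placeholders:

```python
import boto3

cw = boto3.client("cloudwatch")

# alarm when average cpu on one instance stays above 80% for five minutes,
# and notify the administrator through an sns topic
cw.put_metric_alarm(
    AlarmName="web-server-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=5,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:notify-admins"],  # placeholder arn
)
```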
if the job role requires you to manage the servers as well there are certain job roles which are on the systems side and there are certain job roles which are development plus systems side where you're responsible for the application and the server as well so if that's the case you might be tested with some basic questions like the different types of virtualization in aws and the differences between them all right the three major types of virtualization are hvm which is hardware virtual machine pv which is paravirtualization and the third one is pv on hvm paravirtualization on hardware virtual machine and describing them is actually describing the difference between them hvm is fully virtualized hardware the whole hardware is virtualized and all virtual machines act separate from each other and these vms are booted by executing the master boot record in the root block device when we talk about paravirtualization pv-grub is the special boot loader which boots the pv amis and when we talk about pv on hvm it's actually the marriage between hvm and pv this paravirtualization on hvm in other words pv on hvm helps operating systems take advantage of the storage and network input output available through the host another good question is name some of the services that are not region specific now you've been taught that all services are within a region and some services are within an availability zone for example ec2 is within an availability zone ebs is within an availability zone s3 is region specific dynamodb is region specific stuff like that vpc is both availability zone and region specific meaning subnets are availability zone specific and the vpc is region specific so you might have learned it in that combination but there could be some tricky questions that test how well you have understood the region specific non-region-specific and availability zone specific services there are services that are not region specific one would be iam we can't have iam for every availability zone or for every region which would mean users have to use one username and password for one region and anytime they switch to another region they would have to use another username and password that's more work and that's not a good design as well authentication has to be global so iam is a global service which means it's not region specific on the other hand route 53 is again not region specific we can't have a route 53 for every region route 53 is not a region specific service it's a global service it's one application that users access from everywhere or from every part of the world we can't have one url or one dns name for each region if our application is a global application and then the web application firewall works well with cloudfront and cloudfront is a global service so the web application firewall is not a region specific service it's a global service and cloudfront is again a global service though you can cache content on a continent and country basis it's still considered a global service right it's not bound to any region so when you activate cloudfront you're activating it apart from regions or availability zones and when you're activating the web application firewall because it's not a region specific service you're activating it apart from availability zones and regions so a quick recap iam users groups roles and accounts are global services they can be used globally route 53 services are offered at edge locations and they are global as well the web application firewall a service that protects our web applications from common web exploits is a global service as well and cloudfront is a global content delivery network cdn offered at edge locations which makes it a global service in other words a non-region-specific or beyond-region service all right this is another good question as well if in the project you are being interviewed for they really want to secure their environment using nat or they are already securing their environment using nat by any of these two methods nat gateways or nat instances you can expect this question what are the differences between a nat gateway and a nat instance now they both serve the same purpose all right they're not two different services trying to achieve two different things they both serve the same thing but still they do have differences in them right on a high
level they both achieve providing nat for the servers behind them but the difference comes when we talk about the availability of it the nat gateway is a managed service from amazon whereas the nat instance is managed by us now i'm talking about the maintenance point here the nat gateway is maintained by amazon and the nat instance is maintained by us and the availability of the nat gateway is very high and the availability of the nat instance is less compared to the nat gateway because it's managed by us it's on an ec2 instance which could actually fail and if it fails we'll have to relaunch it but with the nat gateway if something happens to the service amazon takes care of re-provisioning it talking about bandwidth the traffic through the nat gateway can burst up to 75 gigabits but for the nat instance it actually depends on the instance type that we launch and if we launch a t2 micro it barely gets any bandwidth so there's a difference there and performance because it's highly available and because of the bigger pipe the performance of the nat gateway is very high but the performance of the nat instance is going to be average again it depends on the size of the nat instance that we pick and billing billing for the nat gateway is based on the number of gateways that we provision and the duration for which we use them but billing for the nat instance is based on the number of instances the duration and the type of instance that we use security on the nat gateway cannot be assigned meaning it already comes fully packed with security but on the nat instance the security is a bit customizable i can go and change the security because it's a server managed by us i can always change the security to allow this or not allow that stuff like that the size and load of the nat gateway is uniform but the size and load of the nat instance varies a nat gateway is a fixed product but a nat instance can be a small instance or a big instance so the size and the load it can take varies right the other question you could get asked is the difference between stopping and terminating an ec2 instance now you will be able to answer this only if you have worked in environments where you have stopped instances and terminated instances if you have only used a lab and are attending the interview chances are that you might get lost answering this question it might look like both are the same well stopping and terminating might look the same but there is a difference so when you stop an instance it performs a normal shutdown on the instance and simply moves the instance to the stopped state but when you actually terminate the instance it is moved to the stopped state the ebs volumes that are attached to it are deleted and removed and we will never be able to recover them again so that's the big difference between stopping and terminating an instance if you're thinking of using the instance again along with the data in it you should only be thinking of stopping the instance and you should be terminating the instance only if you want to get rid of it forever
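the two operations are separate api calls, which is an easy way to remember the difference; the instance id here is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# stop: normal shutdown, root ebs volume is kept, the instance can be started again later
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])

# terminate: the instance is gone for good, and ebs volumes marked
# delete-on-termination are deleted with it
ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"])
```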
if you are being interviewed for an architect level position a junior architect level position a cloud consultant level position or even an engineering position this is a very common question that gets asked what are the different types of ec2 instances based on their cost or based on how we pay for them the different types are on-demand instances spot instances and reserved instances they can look the same they all provide compute capacity they all provide the same type of hardware for us but if you are looking at cost saving or optimizing cost in your environment you've got to be very careful about which one you're picking now we might think well i'll go with an on-demand instance because i pay on a per-hour basis which is cheap i can use them anytime i want and anytime i don't want them i can simply get rid of them by terminating them you're right but if the requirement is to use the service for one year or for three years then you'll be wasting a lot of money buying on-demand instances you'll be wasting a lot of money paying on an hourly basis instead we should be going for reserved instances where we can reserve the capacity for the complete one year or complete three years and save a huge amount by buying reserved instances all right so on-demand is cheap to start with if you're only planning to use it for a short while but if you're planning to run it for a long while then we should be going for a reserved instance that is what is cost efficient and a spot instance is cheaper than an on-demand instance and there are different use cases for spot instances as well so let's look at them one after the other the on-demand instance is purchased at a fixed rate per hour and for very short term and irregular workloads and for testing and development an on-demand instance is a very good use case spot instances allow users to purchase ec2 at a reduced price and anytime we have more reserved instances than we need we can always sell them in the spot instance marketplace and the way we buy a spot instance is we actually put in a bid this is how much i'm willing to pay all right would you be able to give the service within this cost so anytime the price comes down and meets the bid that we have put in we are assigned an instance and anytime the price shoots up the instance is taken away from us but in the case of on-demand instances we have bought that instance for that particular hour and it stays with us with spot instances it varies based on the price if you meet the price you get the instance if you don't meet the price it goes away to somebody else and spot instance availability is actually based on supply and demand in the market there's no guarantee that you will get a spot instance at all times all right so that's a caveat you should be aware of when you are proposing to somebody that we can go for spot instances and save money it's not always going to be available if you want your spot instance to be available to you then you need to carefully watch the history of the spot instance price how much was it last month and how much is it this month so how much can i bid stuff like that so you've got to look at that history before you propose to somebody that we're going to save money using spot instances
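that price history is queryable, so a sketch like this, with an example instance type and region, is one way to do the homework before proposing spot:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# recent spot prices for one instance type, to judge how much to bid
resp = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=10,
)

for point in resp["SpotPriceHistory"]:
    print(point["Timestamp"], point["AvailabilityZone"], point["SpotPrice"])
```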
on the other hand reserved instances provide cost savings for the company we can opt for reserved instances for one year or three years there are actually three types of reserved instances light medium and heavy reserved instances based on the amount that we would be paying and with reserved instances the cost benefit also depends on whether we are doing all upfront no upfront or a partial upfront payment and then splitting the rest as monthly payments so there are many purchase options available but overall if you're looking at running an application for the next one year or three years you should not be going for on-demand instances you should be going for reserved instances and that's what gives you the cost benefit and in an aws interview sometimes you might be asked how you interact with the aws environment are you using the cli are you using the console and depending on your answer whether console or cli the panelist puts a score okay this person is cli specific this person is console specific or this person has used the aws environment through the sdk stuff like that so this question tests whether you are a cli person or a console person and the question goes like this how do you set up ssh agent forwarding so that you do not have to copy the key every time you log in if you have used putty anytime you want to log into an ec2 instance you will have to put in the ip and the port number and along with that you will have to map the key in putty and this has to be done every time that's what we would have done in our lab environments right but in a production environment mapping the same key again and again every time is a hassle it's considered a blocker so you might want to cache it you might want to permanently add it to your putty session so you can immediately log in and start using it so here in the place where you would map the private key there's a quick option that binds your key to your putty ssh session we can enable ssh agent forwarding and that binds our key to the ssh session so the next time we try to log in we don't have to go through mapping the key again all right this question what are solaris and aix operating systems and are they available with aws generally gets asked to test how familiar you are with the amis available how familiar you are with ec2 and how familiar you are with the ec2 hardware available that's basically what it tests now the first thought that comes to your mind is well everything is available with aws i've seen windows i've seen ubuntu i've seen red hat i've seen amazon amis and if i don't see my operating system there i can always go to the marketplace and try it and if i don't find it in the marketplace i can always go to the community amis and try there so with a lot of amis available and a lot of operating systems available i should be able to find solaris and aix but that's not the case solaris and aix are not available with aws that's because solaris runs on the sparc architecture which is not currently supported in the public cloud and the same goes for aix as well aix runs on power cpus and not on intel and as of now amazon does not provide sparc or power machines this should not be confused with hpc which is high performance computing these are different hardware different cpus altogether that the cloud providers do not provide yet another question you could get asked in organizations that want to automate their infrastructure using amazon native services would be how do you actually recover an ec2 instance or auto recover an ec2 instance when it fails well we know that ec2 instances are considered
as immutable meaning irreparable we don't spend time fixing bugs inside the os stuff like that once an ec2 instance crashes let's say it goes into an os panic or fails for any of various other reasons we don't have to really worry about fixing it we can always relaunch that instance and that would fix it but what if it happens at two o'clock in the night what if it happens during a weekend when nobody's in office looking at or monitoring those instances so you would want to automate that and not only for a weekend or midnight it's generally good practice to automate it so you could face this question how do you automatically recover an ec2 instance once it fails and the answer to that question is using cloudwatch we can recover the instance so as you see there is an alarm threshold set in cloudwatch and once the threshold is met meaning if there is an error a failure or the ec2 instance is not responding for a certain while we can set an alarm let's say the cpu utilization stayed high for five minutes it's not taking any new connections or the instance is not pinging for five minutes or in this case two minutes so it's not going to respond to connections in those cases you would want to automatically recover that ec2 instance by rebooting it all right now look at the take this action section under the actions there we have a bunch of options like recover this instance meaning reboot the instance so that's how we would recover the other two options are beyond the scope of the question but still you can go ahead and apply them just like i'm going to do so the other option is stop the instance that's very useful when you want to stop instances that are having low utilization nobody's using the system as of now and you don't want it to be running and wasting the cloud expenditure so you can set an alarm that stops an ec2 instance that's having low utilization somebody was working on an instance and they left without shutting it down or they forgot to shut it down and they will only use it again the next morning so in between there could be like 12 hours that the system is running idle nobody's using it and you're paying for it so you can identify such instances and stop them when the cpu utilization is low meaning nobody is using it the other option is terminate let's say you want to give a system to somebody temporarily and you don't want them to hand the system back to you this is actually the scenario so you hand over a system to somebody and when they're done they're done we can actually terminate the system so you could instruct the other person to terminate the system when they're done but they could forget and then the instance could be running forever or you can monitor the system after the specified time is over and terminate it yourself or best of all you can automate the termination so you assign a system to somebody and then turn on this cloudwatch action to terminate the instance when the cpu is low for like two hours or the cpu is low for 30 minutes meaning they've already left stuff like that so that's possible
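in code, the recover action is just an alarm action with a special arn; a minimal sketch, with the instance id and region assumed:

```python
import boto3

cw = boto3.client("cloudwatch")

# recover the instance automatically when the system status check fails twice in a row
cw.put_metric_alarm(
    AlarmName="auto-recover-web-01",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],  # ec2 auto-recover action
)
```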
and if you're getting hired for a system side architect role or even on the sysops side you could face this question what are the common and different types of ami designs there are a lot of ami designs and the question is about the common ones and the difference between them so the common ones are the fully baked amis the just enough os ami or jeos ami and the hybrid type amis so let's look at the difference between them the fully baked ami just like the name says is fully baked it's a ready to use ami and it's the simplest ami to deploy it can be a bit expensive and a bit cumbersome because you'll have to do a lot of work beforehand before you can use the ami so a lot of planning and a lot of thought process goes into it and then the ami is ready to use you hand over the ami to somebody and it's ready to use or if you want to reuse the ami it's already there for you so that's the fully baked ami the other one is the just enough operating system ami just like the name says and as you can also see in the picture it covers only a part of the os all the bootstraps are already packed properly and the security monitoring logging and the other stuff are configured at the time of deployment or at the time you would be using it so not much thought process goes in here the only focus is on choosing the operating system and the operating system specific agents or bootstraps that go into it that's all we worry about the advantage of this is that it's flexible meaning you can choose to install additional software at the time of deployment but that's going to require additional expertise from the person who will be using the ami so that's another overhead there but the advantage is that it's flexible i can change the configurations at the time of deployment the other one is the hybrid ami now the hybrid ami actually falls in between the fully baked ami and the just enough operating system options so these amis have some features of the baked type and some features of the just enough os type so as you see the security monitoring and logging are packed into the ami and the runtime environments are installed at the time of deployment so this is where strict company policies would go into the ami company policies like you've got to log this you've got to monitor this these are the ports that generally get opened in all the systems stuff like that so they strictly go into the ami format and during deployment you have the flexibility of choosing the runtime and the application that sits in the ec2 instance another very famous question you would face in an interview is how can you recover and log in to an ec2 instance to which you lost the key well we know that if the key is lost we can't recover it there are some organizations that integrate their ec2 instances with an ad an active directory that's different all right there you can go and reset the password in the ad and you will be able to log in with the new password but here the specific tricky question is you are using a key to log in and how do you recover if you have lost the key generally companies would have made a backup of the key so we can pick it from the backup but here the specific question is we have lost the key and there are literally no backups of the key at all so how can we log in we know that we can't log into the instance without the key present with us so the way to recover is to make the instance use another key and use that key to log in once the key is lost it's lost forever we won't be able to recover it you can't raise a ticket with amazon not possible they're not going to help it's beyond their scope so make the instance use another key it's only the key that's the problem you still have valid data in it and you've got to recover the data it's just the key that's having the problem so we can actually focus on the key part alone and change the key and that will allow us to log in so how do we do it step by step first verify that the ec2config service is running in that instance if you want you can actually install ec2config in that instance beforehand or you can make ec2config run through the console just a couple of button clicks and that will make ec2config run in that ec2 instance then detach the root volume from that instance of course it's going to require a stop and start to detach the root volume from the instance attach the root volume to another instance as a temporary volume it could be a temporary instance that you've launched only to fix this issue then log in to that instance go to that particular volume and modify the configuration file to use the new key and then move the root volume back to its original position and restart the instance and now the instance is going to have the new key and you also have the new key with which you can log in so that's how we go ahead and fix it
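the volume shuffle in those steps could look roughly like this with boto3 the instance ids and device names are hypothetical and editing authorized_keys on the rescue instance remains a manual step in between

```python
import boto3

ec2 = boto3.client("ec2")
locked_out = "i-0123456789abcdef0"  # instance whose key is lost (hypothetical)
rescue = "i-0fedcba9876543210"      # temporary helper instance (hypothetical)

# stop the instance so its root volume can be detached
ec2.stop_instances(InstanceIds=[locked_out])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[locked_out])

# find the volume attached to it (assumes a single root volume)
vol = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [locked_out]}]
)["Volumes"][0]["VolumeId"]

# move the volume over to the rescue instance as a secondary disk
ec2.detach_volume(VolumeId=vol)
ec2.get_waiter("volume_available").wait(VolumeIds=[vol])
ec2.attach_volume(VolumeId=vol, InstanceId=rescue, Device="/dev/sdf")

# ... log in to the rescue instance, mount the disk and add the new public
# key to ~/.ssh/authorized_keys, then detach the volume, attach it back to
# the original instance as its root device and start the instance again
```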
now let's move on to some product specific or s3 product specific questions a general perception is that s3 and ebs can be used interchangeably and the interviewer would want to test your knowledge on s3 and ebs well ebs uses s3 that's true but they can't be used interchangeably so you might face this question what are some key differences between aws s3 and ebs well the differences are s3 is an object store meaning you can't install anything in it you can store files but you can't actually install in it it's not a file system but ebs is a file system you can install applications in it and they're going to run stuff like that and talking about performance s3 is fast but ebs is super fast when accessed from the instance because from the instance if you need to access s3 you'll actually have to go out through the internet and access s3 s3 is an external service you'll have to go outside of your vpc to access it s3 does not come under a vpc but ebs comes under a vpc it's on the same vpc so you would be able to use it kind of locally compared to s3 ebs is very local so that way it's going to be faster and talking about redundancy of s3 and ebs the data in s3 is replicated across data centers but ebs is replicated within a data center meaning s3 is replicated across availability zones and ebs is within an availability zone so that way the redundancy is a bit less in ebs in other words redundancy is higher in s3 than ebs and talking about security s3 can be made private as well as public meaning anybody can access s3 from anywhere on the internet that's possible with s3 but ebs can only be accessed when attached to an ec2 instance just one instance can access it whereas s3 is publicly and directly accessible the other question related to s3 security is how do you allow access for a certain user to a certain bucket which means this user is not having access to s3 at all but this user needs to be given access to a certain bucket how do we do it the same case applies to servers as well in a few cases there could be an instance where a person is new to the team and you actually don't want them to access the production server now he's in the production group and by default he or she is granted access to that server but you specifically want to deny access to that production server till the time he or she is matured enough to understand the process understand the do's and don'ts before they can put their hands on the production server so how do we go about doing it so first we would categorize our instances well these are critical instances these are normal instances and we would actually put a tag on them that's how we categorize right so you put tags on them saying well they are highly critical they are medium critical or they are not critical at all but still in production stuff like that and then you would pick the users who should or should not be given access to a certain server and you would actually allow the user to access or not access servers based on a specific tag in other words you use the tags we put on the critical servers in the previous step so you would define that this user is not allowed to use the resources with this tag so that's how you would make your step forward you would allow or deny based on the tags that you have put so in this case he or she will not be allowed onto servers which are tagged as critical servers so that's how you allow or deny access to them and the same goes for the bucket as well
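a minimal sketch of that tag based restriction assuming a hypothetical user name and a hypothetical criticality tag an explicit deny like this wins over whatever allow the production group grants

```python
import boto3
import json

iam = boto3.client("iam")

# deny every ec2 action on resources carrying a criticality=high tag;
# user name, policy name and the tag key/value are hypothetical
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:*",
        "Resource": "*",
        "Condition": {"StringEquals": {"aws:ResourceTag/criticality": "high"}},
    }],
}

iam.put_user_policy(
    UserName="new-joiner",
    PolicyName="deny-critical-servers",
    PolicyDocument=json.dumps(policy),
)
```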
or if an organization is excessively using s3 for their data storage because of the benefits it provides in cost and durability you might get asked this question organizations would replicate data from one region to another region for additional data durability and for having data redundancy and not only for that they would also do it for dr purposes for disaster recovery if the whole region is down you still have the data available somewhere else and you can pick it up and use it some organizations would store data in different regions for compliance reasons or to provide low latency access to their users who are local to that region stuff like that so when companies do replication how do you make sure that there is consistency in the replication how do you make sure that the replication is not failing that the data gets transferred for sure and that there are logs for that replication this is something that companies would use where they're excessively using s3 and fully relying on the replication in running their business and the way we could do it is we can set up a replication monitor it's actually a set of tools that we could use together to make sure that the region level replication is happening properly so this is how it happens on the left hand side we have region 1 and on the right hand side we have region 2 and region 1 has the source bucket and region 2 the destination bucket all right so an object is put in the source bucket and it has to go directly to the region 2 bucket or a copy is made in the region 2 bucket and the problem is sometimes it fails and there is no consistency between them so the way you would do it is connect these services together and create a cross region replication monitor that actually monitors your environment so there is cloudwatch making sure that the data is moving and no data is failing and again there's cloudwatch on the other end making sure that the data is moving and then we have the logs generated through cloudtrail and those are actually written to dynamodb and if there is an error if something is failing you get notified through an sms or you get notified through an email using the sns service so that's how we could leverage these tools and set up a cross region replication monitor that actually monitors your data replication
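the replication itself that this monitor keeps an eye on could be set up roughly like this the bucket names and the replication role arn are hypothetical

```python
import boto3

s3 = boto3.client("s3")

# versioning must be enabled on both buckets before replication works
for bucket in ("source-bucket-use1", "dest-bucket-euw1"):
    s3.put_bucket_versioning(
        Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
    )

s3.put_bucket_replication(
    Bucket="source-bucket-use1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [{
            "Status": "Enabled",
            "Prefix": "",  # replicate every object
            "Destination": {"Bucket": "arn:aws:s3:::dest-bucket-euw1"},
        }],
    },
)
```

cloudwatch cloudtrail dynamodb and sns are then layered on top of that replication as described above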
a common issue that companies face in vpc is this we all know that we can use route 53 to resolve an ip address externally from the internet but by default the servers won't connect to other servers using our custom dns name it does not do that by default so it's actually a problem there are some additional things that as an administrator or an architect or a person who uses it you will have to do and that's what we're going to discuss so the question could be a vpc is not resolving a server through dns you can access it through the ip but not through the dns name what could be the issue and how do you go about fixing it and you will be able to answer this question only if you have done it already it's a quick and simple step by default vpc does not allow it that's the default behavior and we will have to enable dns hostname resolution note this is for a custom dns not for the default dns that comes along with the vpc so we will have to enable dns hostname resolution so names actually resolve let's say i want to connect to server1.simplylearn.com by default it's not allowed but if i enable this option then i will be able to connect to server1.simplylearn.com
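it's a two call fix with boto3 assuming a hypothetical vpc id note that each attribute has to be set in its own call

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # hypothetical vpc id

ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
```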
if a company has vpcs in different regions and they have a head office in a central place and the rest of them are branch offices and they are connecting to the head office for access for saving data or for accessing certain files or storing data all right they would actually mimic the hub and spoke topology where you have the vpc in a centrally accessible region and then you have local vpcs or branch offices in different other regions and they get connected to the vpc in the central location and the question is how do you actually connect the multiple sites to a vpc and make communication happen between them by default it does not do that we know that vpcs need to be peered in order to access each other's resources let's look at this picture right so i have customer networks or branch offices in different places and they get connected to a vpc that's fine so what we have achieved is those different remote offices are connecting to the vpc and they're talking but they can't talk to each other that's what we have built but the requirement is they should be able to talk to each other yet they should not have a direct connection between them which means they will have to come and hit the vpc and then reach the other customer network which is in los angeles or in new york right that's the requirement so that's possible with some architecting in the cloud and that's using vpn cloudhub you look at these dotted lines which actually allow the corporate networks to talk to each other through the vpc again by default it doesn't happen cloudhub is an architecture that we should be using to make this happen and what's the advantage of it as the headquarters office or headquarters data center which is in the vpc you have control or the vpc has control on who talks to who and what traffic can be routed to the other offices stuff like that that centralized control is with the vpc the other question you could get asked is name and explain some security products and features available in vpc well vpc itself is a security service it provides security to the application but how do you actually secure the vpc itself that's the question and yes there are products that can actually secure the vpc access to the vpc is restricted through a network access control list so that's a security product in vpc and a vpc has security groups that protect the instances from unwanted inbound and outbound traffic and the network access control list protects the subnets from unwanted inbound and outbound access and there are flow logs we can capture in vpc that capture incoming and outgoing traffic through a vpc which can be used for later analysis as in what's the traffic pattern what's the behavior of the traffic and stuff like that so those are some security products and features available in vpc now how do you monitor vpc vpc is a very important concept and a very important service as well everything sits in a vpc most services sit in a vpc except for lambda s3 dynamodb and a couple of other services most of them sit in a vpc for security reasons so how do you monitor your vpc how do you gain some visibility into your vpc well we can gain visibility into our vpc using vpc flow logs that's the basic service as you see it actually captures what's allowed and what's not allowed which ip is allowed and which ip is not allowed stuff like that so we can gather it and use it for analysis and the other one is cloudwatch and cloudwatch logs which cover the data transfers that happen so the flow logs give who is allowed and who's not allowed that kind of detail and cloudwatch gives information about the data transfer how much data is getting transferred we can actually pick out unusual data transfers if there is a sudden hike in the graph and something happens at 12 on a regular basis and you weren't expecting it there's something suspicious it could be valid backups it could be malicious activity as well so that's how you know by looking at cloudwatch logs and the cloudwatch dashboard
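turning on that basic visibility is a single call here's a sketch with a hypothetical vpc id log group and delivery role

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # hypothetical vpc id
    ResourceType="VPC",
    TrafficType="ALL",  # could also capture only ACCEPT or only REJECT
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
)
```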
now let's talk about multiple choice questions when going for an interview you might sometimes find that the company is conducting an online test and based on the score they can put you in front of a panelist and take it forward so we thought we would also include multiple choice questions to help you better handle such a situation if you come across one all right when you find yourself in such a situation the key to clearing them is to understand the question properly read between the lines that's what they say there can be like a big paragraph with three lines or ten lines and you really have to understand what the question is about and then try to find the answer for that question so that's thumb rule number one and then the second rule is try to compare and contrast the services mentioned or the answers you can easily weed out one or two answers and then you will be left with only two answers to decide from so that helps you with time and it also helps you with some precision in your answer so number one read between the lines number two compare and contrast the services and you'll be able to easily weed out the wrong ones so let's try answering this question suppose you are a game designer and you want to develop a game with single digit millisecond latency which of the following database services would you choose so first it has to be a database and it talks about millisecond latency that's a key point and the third thing is it's a game could be a mobile game it's a game that you are trying to design and you need millisecond latency all right so let's talk about the options available rds rds is a database for sure is it good for game design we'll come back to that neptune neptune is a graph database service in amazon so that's kind of out of the equation and snowball is actually a storage or rather a transport medium i would say so that's again out of the equation so the tie is between rds and dynamodb if we need to talk about rds rds is a platform as a service it provides cost efficient resizable capacity but it's an sql database meaning the tables are kind of strict it's good for banking and other types of applications but not really good for anything that has to do with gaming so the only option left is dynamodb and again it's the right answer dynamodb is actually a fast and flexible nosql database service and it provides single digit millisecond latency at any scale and it's a database and at the same time it's a key value store model database so the right answer is dynamodb
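as a tiny illustration of why dynamodb fits a game workload here's a key value read and write against a hypothetical game-sessions table keyed on player_id

```python
import boto3

# the table name and key schema are hypothetical
table = boto3.resource("dynamodb").Table("game-sessions")

table.put_item(Item={"player_id": "p42", "score": 1800})
item = table.get_item(Key={"player_id": "p42"})["Item"]
print(item["score"])  # single-digit-millisecond reads at any scale
```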
all right let's look at the next question if you need to perform real-time monitoring of aws services and get actionable insights which service would you use all right let's go through the services it talks about real-time monitoring firewall manager what does it provide now firewall manager is not really a monitor just like the name says it's a manager it manages multiple firewalls and aws guardduty is a threat detection service it does monitoring it does continuously monitor our environment but it monitors for threats all right only threats now let's talk about cloudwatch cloudwatch is a service that helps track metrics it's a service that is used to monitor the environment and give us system-wide visibility and it also helps us store logs so at the moment it kind of looks like that could be the right answer we don't know that yet because we have one more option left and that's ebs so what's ebs ebs is block storage elastic block store if we expand ebs so the other three are easily out of the question the first one is to manage the second one is to find threats of course guardduty does monitoring so if there is one relation between cloudwatch and guardduty it's monitoring and we can easily find ourselves slipping towards picking guardduty but know that guardduty is only for gaining security insight not for gaining aws service insight so cloudwatch is the service that gives us a system wide an aws wide an account wide view it has a number of metrics we can monitor and get very good insight into how a service is performing be it cpu be it ram be it network utilization be it connection failures cloudwatch is a service that helps us perform real-time monitoring and get some actionable insights on the services all right let's talk about this 33rd question as a web developer you are developing an app especially for the mobile platform so a lot of services get filtered out mobile platform right which of the following lets you add user sign up sign in and access control to your web and mobile app quickly and easily all right so this is all about signing in to your mobile app so if we need to read between the lines that's how we can read it sign up or sign in on a mobile platform all right so we have four options here shield aws macie aws inspector and amazon cognito so let's try to weed out the services which are not relevant so what's aws shield aws shield is actually a service that provides ddos mitigation or ddos protection denial of service protection it's a security feature let's talk about the second option aws macie is again a security service that uses machine learning to automatically discover and classify data it again talks about security and this security is all about protecting the data it does not come close to signing up on a mobile platform all right let's talk about the other one aws inspector now aws inspector has something to do with apps it definitely has something to do with apps so it kind of looks relevant as of now it actually helps with improving the security and compliance of the apps that we deploy in the cloud so it kind of looks like it could be because it has to do with apps the last one cognito now cognito is a service that actually lets the administrator have access control over web and mobile apps and it's a service that helps us sign up and sign in to a mobile and web app so that very much looks like we found the answer so cognito is a service that helps a web app and mobile app with sign up and sign in and also gives the administrator control over who has access to the web and the mobile app pretty much we found it so it's cognito cognito is a service that helps us set up sign up sign in and have access control over the users who would be using our mobile and web app all right how about this question you are an ml engineer or a machine learning engineer who is on the lookout for a solution that will discover sensitive information that your enterprise stores in aws and then uses nlp to classify that data and provide business related insights which among the following services would you choose so one of these services is going to help us achieve the above requirement it's a service that deals with machine learning you're a machine learning engineer who's looking for a service that will help you discover sensitive information in your enterprise storage so we're talking about discovering information in storage and then classifying the data depending on its sensitivity so which service is that firewall manager just like the name says is a manager and aws iam if we expand it is identity and access management so it's about identity and access management nothing to do with identifying sensitive data and managing it so the first two are already out of the equation then aws macie we already had a quick description of aws macie it's actually a security service that uses machine learning so it kind of looks like it could be it it's a security service that uses machine learning and it discovers and classifies sensitive information and not only that it does not stop there it goes beyond and protects the sensitive data aws macie kind of looks like the one but we still have one more option to look at which is cloudhsm
cloudhsm is also a security service so it kind of looks like that could be the answer as well it enables us to generate encryption keys and protect data so it matches maybe fifty percent it's a security service it helps us encrypt and protect the data but aws macie is right on the spot it's a machine learning service it helps us classify the data and also protect the data so the answer for this question would be aws macie so hope you kind of get how this is going first we apply the thumb rule identify the question that's being asked read between the lines and then try to find the service that meets the requirement and finding the service is done by first weeding out the wrong ones recollect everything that you've learned about each service and see how well it matches the hints that you have picked up and if it doesn't match weed it out then you'll end up with two to decide from at some point and then it becomes easy for you to decide click on the answer submit it and then move on to the next question in your interview all right so how about this one you are a system administrator in your company which is running most of its infrastructure on aws you are required to track your users and keep a watch on how your users are being authenticated so this is where the problem statement starts you need to keep track of how your users are being authenticated and you wish to create and manage aws users and use permissions to allow and deny their access to aws resources right so you are to give them permissions number one and then if we put it in the right order first give them permissions and then track their usage let's see which of the services will help us achieve it iam is a service where by looking at the permissions we can actually tell whether a user or a group has access to certain services or not so that helps us keep track of who is able to use and who's not able to use certain services and all that stuff so it kind of looks like the one but we have three other options left let's look at aws firewall manager just like the name says it's actually a firewall manager it helps us manage multiple firewalls simple as that and shield is a service that's used to protect against denial of service or distributed denial of service attacks and api gateway is a service that makes it easy for developers to create publish maintain monitor and secure apis so it's completely on the api side very little about users and how you authenticate your users we can get that by looking at the name itself right if you try to find a definition for the name api gateway you would get that it has to do with apis but if you expand aws iam it's identity and access management which pretty much meets the requirement of the problem statement so aws identity and access management is the right answer all right let's look at this one if you want to allocate various private and public ip addresses in order to make them communicate with the internet and other instances you will use this service which of the following is this service so it talks about using public and private ip addresses so this service uses ip addresses and this service helps us allow and deny connections to the internet and to the other instances so you get the question right let's pick the service that helps us achieve it route 53 route 53 is actually a dns service right so it's not a service that's used to allow or deny no it does not do that vpc vpc uses public and private ip addresses
yes so it kind of looks like it the security in a vpc the security groups the network access control lists and the routing table in a vpc those actually help us allow or deny a connection to a particular ip address or to a particular service within the vpc or outside of the vpc so as of now it kind of looks like it could be the one but let's look at the other services and see if we find a service that matches the above requirement more closely than amazon vpc api gateway we know is a managed service that makes it easy for developers to create publish maintain monitor and secure apis so that has completely to do with apis not with ips cloudfront we know is a content delivery network and it provides a global distribution of servers where our content can be cached it could be video or bulk media or anything else they can be cached locally so users can easily access and download them right so that's cloudfront now at this point after looking at all four it looks like vpc is the right answer and in fact vpc is the right answer vpc has public ip addresses vpc can help us with private ip addresses and vpc can be used to allow or deny connections based on the security groups access control lists and routing tables it has so the right answer is vpc all right how about this one this platform as a service or platform as a db service provides us with cost efficient and resizable capacity while automating time consuming administrative tasks so this question is very clear it's a db service we've got to look for and it's a service that can automate some of the time consuming tasks and it has to be resizable at the same time so let's talk about amazon relational database service it's a database so it kind of matches the requirement we can resize it as and when needed all right looks like a fit as of now it actually automates some of the time consuming work looks like a fit as of now let's move on to elasticache and try to see if it matches the definition that we've figured out elasticache is actually a caching service it's an in-memory data store which helps in achieving high throughput and low latency so it's not a full-blown database and it does not come with any amazon provided automation for administration tasks no it does not come with anything like that yes we can resize the capacity as and when needed but the automation is not there and moreover it's not a database so that's out of the equation vpc is not a resizable one once we have designed a vpc it's fixed it can't be resized so that's out of the equation and amazon glacier glacier is a storage but not a database all right so that's again out of the equation so the tie is kind of between amazon relational database service and amazon elasticache because they both aid database work but elasticache is not a full-blown database it actually helps a database but it's not a full-blown database so it's amazon relational database service that's the one which is a platform as a service the one which can be resized and the one which automates the time consuming administrative tasks all right let's talk about this one which of the following is a means for accessing human researchers or consultants to help solve a problem on a contractual or temporary basis all right let's read the question again which of the following is a means for accessing human researchers or consultants to help
solve problems on a contractual or temporary basis it's like assigning a task or hiring experts for a temporary job so let's try to find that kind of service among the four services that are listed amazon elastic mapreduce mapreduce is actually a framework service that makes it easy and cost effective to analyze large amounts of data but that has nothing to do with accessing human researchers all right let's talk about mechanical turk it's a web service that provides a human workforce that's the definition for it for example automation is good but not everything can be automated for something to qualify for automation it has to be a repetitive task a one time task can't be automated or rather the time and money that you would be spending on automation is not worth it you could instead have done it manually so that does not qualify for automation and anything that requires intelligence anything that's a special case automation can do repetitive tasks automation can do precise work but it has to be a repetitive task the scenario should have been seen already only then can it be executed but if it's a new scenario and it requires appropriate addressing then it requires human thought so we could hire researchers and consultants who can help solve a problem using amazon mechanical turk the other two are already out of the equation devpay is actually a payment system through amazon and multi-factor authentication as the name says is an authentication system so the right answer is amazon mechanical turk all right this sounds interesting let's look at this one this service is used to make it easy to deploy manage and scale containerized applications using kubernetes on aws which of the following is this aws service so it's a service to deploy manage and scale containerized applications so it deals with containers and it also should have the ability to use kubernetes which is a container orchestration service all right the first one amazon elastic container service kind of looks like the one the name itself has the word and the relation we're looking for elastic container service this container service is a highly scalable high performance container orchestration service let's look at the other one aws batch it's a service that enables it professionals to schedule and execute batch processing the name itself says that it's meant for batch processing elastic beanstalk is another service that helps us deploy manage and scale but it helps us with ec2 instances not with containerized applications so that's again out of the equation would lightsail be a good fit in place of elastic container service what's lightsail now lightsail is a service that's called a virtual private server without a vpc it comes with predefined compute storage and networking capacity it's actually a server not a container service right so at this point that also becomes out of the equation so it's amazon elastic container service that's the one that helps us easily deploy manage and scale container services and it helps us orchestrate the containers using kubernetes all right how about this one this service lets us run code without provisioning or managing servers no servers just run code select the correct service from the below options all right so no servers but we should be able to run code amazon ec2 auto scaling ec2 is elastic compute cloud which is a server and auto scaling is a service that helps us achieve
scaling of the server so that's the definition for it and that could be out of the equation aws lambda now lambda is a service it's actually an event driven serverless computing platform and lambda runs code in response to the events that it receives and it automatically manages the compute resources required for that code as long as we have uploaded code that's correct and set up events correctly to map to that code it's going to run seamlessly so that's lambda it kind of looks like it could be the answer because lambda runs code and we don't have to manage servers it manages the servers by itself but we can't conclude as of now we have two other services to talk about aws batch all right batch is a service that enables it professionals to run batch jobs we know that and about amazon inspector it's actually a service that helps us identify any security issues and align our application with compliance well that's not the requirement of the question the requirement in the question was run code without provisioning servers and without any more room for confusion aws lambda is the service that runs code without provisioning and managing servers so the right one would be aws lambda now this is the second part of aws interview questions if you haven't watched the first part yet please go back and watch the first part we have a lot of interesting questions there which will better prepare you for the interview all right let's get started so in an environment where there's a lot of infrastructure automation you could be posed this question how can you add an existing instance to a new auto scaling group now this comes up when you have taken an instance away from the auto scaling group to troubleshoot to fix a problem to look at logs or when you have suspended the auto scaling you might need to re-add that instance to the auto scaling group only then is it going to take part in it only then is the auto scaling going to count it as part of it it's not a straightforward procedure when you remove an instance it doesn't get automatically re-added i have worked with some clients whose developers were managing their own environment and they had problems adding the instance back to the auto scaling group irrespective of what they tried the instance was not getting added to the auto scaling group and whatever fixes they provided were not being counted in the auto scaling group so like i said it's not a straightforward click button procedure there are steps we'll have to follow so how can you add an existing instance to the auto scaling group the first one would be under the ec2 instance console under actions in specific there's an option called attach to auto scaling group if you have multiple auto scaling groups in your account or in the region that you're working in then you're going to be presented with the different auto scaling groups that you have in your account let's say you have five auto scaling groups for five different applications you're going to be shown those five auto scaling groups and then you would select the appropriate auto scaling group and attach the instance to that particular auto scaling group while adding to the auto scaling group if you want to change the instance type that's possible as well sometimes when you want to add the instance back to the auto scaling group there would be a requirement to change the instance type to a better one a better family a better instance type you could do that at that time and after that you have completely added the instance back to the auto scaling group so it's actually a seven step process adding an instance back to the auto scaling group
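the console steps above also have a one call api equivalent shown here with a hypothetical instance id and group name note that attaching an instance also increases the group's desired capacity by one

```python
import boto3

autoscaling = boto3.client("autoscaling")

# the instance must be running and in the same vpc/az setup as the group
autoscaling.attach_instances(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="web-asg",
)
```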
in an environment that deals with migrating an application a vm or an instance into the cloud if the project that you're going to work with deals with a lot of migrations you could be posed this question what are the factors you will consider while migrating to amazon web services the first one is cost is it worth moving the instance to the cloud given the additional billing and the rich features available in the cloud is this application going to use all of them is moving into the cloud beneficial to the application in the first place beneficial to the users who will be using the application in the first place so that's a factor to think of this actually includes the cost of the infrastructure and the ability to match demand and supply with transparency is this application in high demand is it going to be a big loss if the application becomes unavailable for some time so there are a few things that need to be considered before we move an application to the cloud and then does the application need to be provisioned immediately is there an urge to provision the application immediately that's something that needs to be considered if the application needs to go online if the application needs to hit the market immediately then we would need to move it to the cloud because on premises procuring infrastructure buying the bandwidth the switches the servers the software and the licenses related to it is going to take time at least two weeks or so before you can bring up a server and launch an application on it the application cannot wait and waiting means workforce productivity loss right so we would want to immediately launch instances and put the application on top of them in that case if there is an urge to make the application go online as soon as possible then that's a candidate for moving to the cloud and if the software or the product that you're launching requires updated hardware all the time that's not going to be possible on premises we deal with legacy infrastructure all the time on premises but in the cloud they're constantly upgrading their hardware only then can they keep themselves going in the market so the cloud providers are constantly updating their hardware and if your application wants to benefit from the constant upgrading of the hardware making sure the hardware is as new as possible and the software versions and licensing are as new as possible then that's a candidate to be moved to the cloud and if the application cannot go through any risk if the application is very sensitive to failures if the application is very much tied to the revenue of the company and you don't want to take a chance in seeing the application fail and seeing the revenue drop then
that's a candidate for moving to the cloud and business agility when moving to the cloud at least half of the responsibility is taken care of by the provider in this case amazon like if the hardware fails amazon makes sure they're fixing the hardware immediately and notifications if something happens there are immediate notifications available that we can set up to make ourselves aware that something has broken so we can immediately jump in and fix it so you see the responsibility is now being shared between amazon and us so if you want that benefit for your application for your organization for the product that you're launching then it needs to be moved to the cloud the other question you could get asked is what are rto and rpo in aws they are essentially disaster recovery terms when you're planning for disaster recovery you cannot avoid talking about rto and rpo what's the rto and what's the rpo in your environment or how do you define rto and how do you define rpo are some general questions that get asked rto is recovery time objective rto stands for the maximum time the company is willing to wait for the recovery to happen or for the recovery to finish when a disaster strikes so rto looks at the future how much time is it going to take to fix things and bring everything back to normal that's rto on the other hand rpo is recovery point objective which is the maximum amount of data loss your company is willing to accept as measured in time rpo always refers to the backups the number of backups and the frequency of the backups because when an outage happens you can always go back to the latest backup right and if the latest backup was 12 hours ago you have lost the in-between 12 hours of data so rpo is that acceptable amount if the company wants a smaller rpo say the rpo is 1 hour then you should be planning on taking backups every one hour if the rpo is 12 hours then you should be planning on taking backups every 12 hours so that's how rpo and rto help in disaster recovery the next question you could get asked is if you'd like to transfer a huge amount of data which is the best option among snowball snowball edge and snowmobile again this is a question that gets asked if the company is dealing with a lot of data transfer into the cloud or migrating data into the cloud i'm talking about a huge amount of data data in petabytes the snowball series deals with petabyte-sized data migrations so there are three options available as of now aws snowball is a data transport solution for moving high volumes of data into and out of a specified aws region on the other hand aws snowball edge has additional computing functions snowball is simple storage and movement of data and snowball edge has a compute function attached to it snowmobile on the other hand is an exabyte scale migration service that allows us to transfer data up to 100 petabytes that's like 100 000 terabytes so depending on the size of the data that we want to transfer from our data center to the cloud we can rent any of these three services let's talk about some cloudformation questions this is a classic question how is aws cloudformation different from aws elastic beanstalk from the surface they both look the same you don't go through the
console provisioning resources you don't go through the cli and provision resources both of them provision resources at the click of a button but underneath they are actually different they support and aid different services so knowing that is going to help you understand this question a lot better let's talk about the difference between them and this is what you will be explaining to the interviewer or the panelist the cloudformation service helps you describe and provision all the infrastructure resources in the cloud environment on the other hand elastic beanstalk provides a simple environment to which we can deploy and run applications cloudformation gives us infrastructure and elastic beanstalk gives us a small contained environment in which we can run our application and cloudformation supports the infrastructure needs of many different types of applications like enterprise applications legacy applications and any new modern application that you want to have in the cloud on the other hand elastic beanstalk is a combination of developer tools tools that help manage the lifecycle of a single application so cloudformation in short is managing the infrastructure as a whole and elastic beanstalk in short is managing and running an application in the cloud and if the company that you're getting hired by is using cloudformation or any of the infrastructure as code services to manage their infrastructure then you would definitely face this question what are the elements of an aws cloudformation template it has four or five basic elements and the template is in json or in yaml format so it has parameters it has outputs it has data it has resources and then the format version of the cloudformation template so parameters let you specify things like the type of ec2 instance that you want or the type of rds that you want ec2 is an umbrella rds is an umbrella and the parameters within that ec2 or that rds are the specific details of the ec2 or rds service so that's what parameters are in a cloudformation template and then the next element of the cloudformation template is outputs for example if you want to output the name of an s3 bucket that was created or the name of the ec2 instance or the names of some other resources that have been created instead of looking into the template or navigating through the console and finding the name of the resource we can actually have them output in the result section so we can simply go and look at all the resources created through the template in the output section and that's what outputs do in the cloudformation template and then we have resources resources define what cloud components or cloud resources will be created through this cloudformation template an ec2 instance is a resource rds is a resource an s3 bucket is a resource an elastic load balancer is a resource a nat gateway is a resource a vpc is a resource so you see all these components are resources and the resources section in the cloudformation template defines what aws cloud resources will be created through the template and then we have a version the version actually identifies the capabilities of the template we just need to make sure that it is of the latest version type and the latest version is 2010-09-09 that's the latest version number you'll be able to find it at the top of the cloudformation template and that version number defines the capabilities of the cloudformation template so we just need to make sure that it's the latest all the time
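putting those elements together a minimal template could look like this the ami id is hypothetical and the format version parameters resources and outputs sections are exactly the ones just described

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  InstanceTypeParam:
    Type: String
    Default: t3.micro
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # hypothetical ami id
      InstanceType: !Ref InstanceTypeParam
Outputs:
  WebServerId:
    Value: !Ref WebServer
```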
still talking about cloudformation this is another classic question what happens when one of the resources in a stack cannot be created successfully well if a resource in a stack cannot be created cloudformation automatically rolls back and terminates all the resources that were created using the cloudformation template so whatever resources were created through the cloudformation template from the beginning let's say we have created like 10 resources and the 11th resource is now failing cloudformation will roll back and delete all the 10 resources that were created previously and this happens for example when cloudformation cannot create additional resources because we have reached the elastic ip limit the elastic ip limit per region is five right and if you have already used five ips and cloudformation is trying to get three more ips we've hit the soft limit and till we fix that with amazon cloudformation will not be able to launch additional ips so it's going to cancel and roll back everything that's true with a missing ec2 ami as well if an ami is referenced in the template but the ami is not actually present then cloudformation is going to search for the ami and because it's not present it's going to roll back and delete all the resources it created so that's what cloudformation does if it sees a failure it simply rolls back all the resources that it created and this feature actually simplifies system administration and layered solutions built on top of aws cloudformation at any point we know that there are no orphan resources left in our environment because something did not work or because a cloudformation execution failed halfway we can be sure that if cloudformation is launching resources and one of them fails it's going to come back and delete all the resources it created so there are no orphan resources in our account now let's talk about some questions on elastic block store again if the environment deals with a lot of automation you could be thrown this question how can you automate ec2 backups using ebs it's actually a six step process to automate the ec2 backups we'll need to write a script to automate the below steps using the aws api and these are the steps that should be found in the script first get the list of instances then the script should connect to aws using the api and list the amazon ebs volumes that are attached locally to each instance then it needs to list the snapshots of each volume and make sure the snapshots are present and it needs to assign a retention period to the snapshots because over time snapshots become stale once you have say the 10 latest snapshots any snapshot taken before those 10 becomes stale because you have captured the latest state and 10 snapshots of coverage is enough for you and then the fifth point is to create a new snapshot of each volume and finally delete the old snapshots any time a new snapshot gets created the oldest snapshot on the list needs to go away so we need to include lines in our script that make sure it's deleting the snapshots which are older than the retention period that we are mentioning
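a condensed boto3 sketch of such a backup script the retention period is hypothetical and a real script would filter the volumes down to its own instances as steps one and two describe

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
RETENTION_DAYS = 10  # hypothetical retention period

# create a fresh snapshot of every volume we can see
for vol in ec2.describe_volumes()["Volumes"]:
    ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description="auto-backup " + vol["VolumeId"],
    )

# delete snapshots that are older than the retention period
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    if snap["StartTime"] < cutoff:
        ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```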
another question that you could see in the interview be it a written interview an online interview a telephonic or a face-to-face interview is what's the difference between ebs and instance store let's talk about ebs first ebs is kind of permanent storage the data in it can be restored at a later point when we save data in ebs the data lives even after the lifetime of the ec2 instance for example we can stop the instance and the data is still going to be present in ebs we can move the ebs volume from one instance to another instance and the data is simply going to be present there so ebs is kind of permanent storage compared to instance store on the other hand instance store is temporary storage and that storage is actually physically attached to the host of the machine ebs is external storage and instance store is locally attached to the host of the machine we cannot detach an instance store from one instance and attach it to another but we can do that with ebs so that's a big difference ebs is permanent and instance store is volatile and with instance store we won't be able to detach the storage and attach it to another instance and another feature of instance store is that data in an instance store is lost if the disk fails or the instance is stopped or terminated so instance store is only good for storing cache data if you want to store permanent data then we should think of using ebs and not instance store while talking about storage on the same lines this is another classic question can you take backups of efs like ebs and if so how do you take that backup the answer is yes there is an efs to efs backup solution efs does not support snapshots like ebs snapshots are not an option for efs the elastic file system we can only take a backup from one efs to another efs and this backup solution is to recover from unintended changes or deletions in the efs and this can be automated any data that we store in efs can be automatically replicated to another efs and once this efs goes down or data gets deleted or the whole efs is for some reason interrupted or deleted we can recover the data using the other efs and bring the application back to consistency and to achieve this it's not a one step configuration there is a series of steps involved before we can achieve efs to efs backup the first thing is to sign in to the aws management console and click on the efs to efs restore button from the services list and from there we can use the region selector in the console navigation bar to select the region in which we want to work and from there ensure that we have selected the right template some of the templates would be efs to efs backup granular backups incremental backups right so there are templates for the kind of backups that you want to take do you want to take granular backups do you want to take incremental backups stuff like that and then give a name to the solution the kind of backup
that we have created and finally review all the configurations that you have done and click on save and from that point onwards the data is going to be copied and any additional data that you put in is going to be copied and replicated now you have an efs to efs backup this is another classic question in companies which deal with data management there are easy options to create snapshots but deleting snapshots is not always a click button or a single step configuration so you might be facing a question like how do you auto delete old snapshots and the procedure is like this as best practice we take snapshots of ebs volumes to s3 all snapshots get stored in s3 we know that now and we can use aws ops automator to automatically handle all snapshots the ops automator allows us to create copy and delete ebs snapshots there are cloudformation templates available for aws ops automator and this automator template will scan the environment and it will take snapshots it will copy a snapshot from one region to another region if you want say if you're setting up a dr environment and not only that based on the retention period that we set it's going to delete the snapshots which are older than the retention period so managing snapshots is made a lot easier because of this ops automator cloudformation template moving into questions on elastic load balancer this again could be a question in the interview what are the different types of load balancers in aws what are their use cases and what's the difference between them as of now as we speak there are three types of load balancers available in aws the first one being the application load balancer just like the name says the application load balancer works on the application layer and deals with http and https requests and it also supports path based routing for example simplylearn.com/somewebpage and simplylearn.com/anotherwebpage it's going to route based on the path portion of the url that's path based routing and not only that it can also route based on a port colon 8080 colon 8081 or colon 8090 based on the port it can also take a routing decision and that's what the application load balancer does on the other hand we have the network load balancer and the network load balancer makes routing decisions at the transport layer it's faster because it has very little to work on it works on a lower osi layer so it has much less information to work with compared with the application layer so comparatively it's a lot faster and it handles millions of requests per second and after the load balancer receives the connection it selects a target group for the default rule using the flow hash routing algorithm it does simple routing right it does not do path based or port based routing it does simple routing and because of that it's faster and then we have the classic load balancer which is kind of expiring as we speak amazon is discouraging people from using the classic load balancer but there are companies which are still using it they are the ones who were the first to step into amazon when the classic load balancer was the first or the only load balancer available at that point it supports http https tcp and ssl protocols and it has a fixed relationship between the load balancer port and the container port
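path based routing on an application load balancer is expressed as listener rules here's a sketch with hypothetical arns that sends /api/* requests to their own target group

```python
import boto3

elbv2 = boto3.client("elbv2")

# both arns below are hypothetical placeholders
elbv2.create_rule(
    ListenerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "listener/app/my-alb/0123456789abcdef/fedcba9876543210"
    ),
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
            "targetgroup/api-servers/0123456789abcdef"
        ),
    }],
)
```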
Moving on to questions about Elastic Load Balancing, this again could come up in the interview: what are the different types of load balancers in AWS, what are their use cases, and what is the difference between them? As of now there are three types of load balancers available in AWS. The first is the Application Load Balancer. Just like the name says, it works at the application layer and deals with HTTP and HTTPS requests. It also supports path-based routing, for example simplylearn.com/somepage versus simplylearn.com/anotherpage: it directs traffic based on the path portion of the URL. Beyond that, it can route based on port as well (:8080, :8081 or :8090), so it can take a routing decision on the port too. That is what the Application Load Balancer does.

On the other hand we have the Network Load Balancer, which makes routing decisions at the transport layer. It is faster because it works at a lower OSI layer and therefore has much less information to work with than the application layer, and it can handle millions of requests per second. After the load balancer receives a connection, it selects a target group for the default rule using a flow-hash routing algorithm. It does simple routing, not path-based or port-based routing, and because of that it is faster.

Then we have the Classic Load Balancer, which is being phased out as we speak; Amazon discourages its use, but some companies still run it, typically the early adopters from the days when the Classic Load Balancer was the first and only load balancer available. It supports the HTTP, HTTPS, TCP and SSL protocols and has a fixed relationship between the load balancer port and the container port. Initially there was only the Classic Load Balancer; at some point Amazon decided that instead of one load balancer addressing all types of traffic, there would be two load balancers derived from the classic one, each addressing a specific need: the Application Load Balancer for application requirements and the Network Load Balancer for network requirements. That is how we now have the different load balancers.

Still on load balancers, another classic question could be: what are the different uses of the various load balancers in AWS Elastic Load Balancing? There are three types, as we just discussed. The Application Load Balancer is used when we need flexible application management and TLS termination. The Network Load Balancer is used when we require extreme performance and load balancing based on static IPs for the application. The Classic Load Balancer is the older option for people still running their environment on the EC2-Classic network, the network model that existed before VPC was introduced. Those are the three types and their use cases.
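Circling back to the path-based routing described above, here is a hedged boto3 sketch that attaches a path rule to an existing ALB listener. The two ARNs and the /somewebpage* pattern are placeholders I have invented for the example.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Both ARNs are placeholders for your own listener and target group.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc"

elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,  # lower numbers are evaluated first
    Conditions=[{"Field": "path-pattern", "Values": ["/somewebpage*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": TARGET_GROUP_ARN}],
)
```

Requests whose path matches /somewebpage* are forwarded to that target group; everything else falls through to the listener's default rule.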
Let's talk about some of the security-related questions you would face in the interview. When talking about security and firewalls in AWS we cannot avoid discussing WAF, the Web Application Firewall, and you may well find yourself asked: how can you use AWS WAF to protect and monitor your AWS applications? WAF protects our web applications from common web exploits, and it helps us control which traffic from which source should be allowed or blocked. With WAF we can also create custom rules that block common attack patterns: a banking application sees one class of attacks, while a content-management or data-storage application sees a different class, so based on the application type we can identify a pattern and create rules that block those attacks. WAF can be used in three modes: allow all requests, block all requests, or count all requests that match a new policy. So it is also a monitoring and management tool that counts the requests matching a particular rule we create. Some of the request characteristics we can match on in AWS WAF are the origin IPs and strings that appear in the request (we can allow or block on either), as well as the origin country, the length of the request, the presence of malicious scripts in a connection, specific request headers, and the presence of malicious SQL code in a request that is trying to reach our application.

Still talking about security: what are the different categories we can control using AWS IAM? With IAM we can do the following. First, create and manage IAM users; once the user base gets bigger, we can create and manage them in groups. We can use IAM to manage security credentials: setting the complexity of passwords, adding additional authentication such as MFA, and rotating or resetting passwords. And finally we can create policies that grant access to AWS services and resources.

Another question you will see is: what policies can you set for your users' passwords? Some of the policies we can set are the minimum length and the complexity of the password, by requiring at least one number or one special character, and by requiring specific character types including uppercase, lowercase, numbers and non-alphabetic characters, so that it becomes very hard for somebody else to guess the password and try to hack in. We can set the length, we can set the complexity, and we can set an automatic expiration of the password, so after a certain time the user is forced to create a new one and the password in the environment never grows old and easy to guess. We can also require that the user contact the admin when the password is about to expire. So there are quite a few things we can control about how users set or reset their passwords: whether the password has good complexity and whether it meets company standards.
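As a concrete illustration of those password controls, here is a small boto3 sketch that sets an account password policy. The specific values are illustrative assumptions, not a recommendation.

```python
import boto3

iam = boto3.client("iam")

# Each parameter maps to one of the controls discussed above.
iam.update_account_password_policy(
    MinimumPasswordLength=12,
    RequireNumbers=True,
    RequireSymbols=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,          # force rotation after 90 days
    PasswordReusePrevention=5,  # remember the last 5 passwords
)
```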
Another question that could be posed in an interview, to probe your understanding of IAM, is: what's the difference between an IAM role and an IAM user? Let's start simple and then move to the complex one. An IAM user has permanent, long-term credentials and is used to interact directly with AWS services. An IAM role, on the other hand, is an IAM entity that defines a set of permissions for making AWS service requests. So an IAM user is a permanent credential, while roles provide temporary credentials. An IAM user's permissions stick to that one user, but with roles, trusted entities (IAM users, applications, or AWS services) assume the role: we can grant permissions to applications, to users in the same account or a different account or a corporate directory, and to services such as EC2, S3, RDS, VPC and lots more. A role is wide in its applicability, whereas an IAM user is constrained to that single user.

Let's talk about managed policies. In AWS there are two types of managed policies: customer managed and AWS managed. Managed policies are IAM resources that express permissions using the IAM policy language, and we can create, edit and manage them separately from the IAM users, groups and roles they are attached to. With a customer-managed policy we can update the policy in one place and the permissions automatically extend to all attached entities: I can have three or four services pointing to a particular policy, and if I edit that policy, the change reflects on all of them; anything I allow is allowed for all four, anything I deny is denied for all four. Imagine what it would be like without managed policies: we would have to go and specifically allow or deny on those different entities four or five times over, depending on how many there are. So, as I said, there are two types: policies managed by us, which are customer-managed policies, and policies managed by AWS, which are AWS-managed policies.

Next question: can you give an example of an IAM policy and a policy summary? This is really to test how well-versed you are with the AWS console. The answer: consider a policy used to grant access to add, update and delete objects from a specific folder, in this case a folder called examplefolder inside a bucket called examplebucket. That is an IAM policy. The policy summary, on the other hand, is a list of access levels, resources and conditions for each service defined in a policy. So the IAM policy here talks about one particular resource, that one S3 bucket, while the policy summary covers multiple resources: CloudFormation templates, CloudWatch Logs, EC2, Elastic Beanstalk and other services, with a summary of the resources and the permissions attached to them.

Another question could be: what is the use case of IAM, and how does IAM help your business? IAM has two primary jobs. The first is to help us manage IAM users and their access: it provides secure access for multiple users to their appropriate AWS resources. The second is to manage access for federated users, meaning non-IAM users. Through IAM we can provide secure access to resources in our AWS account to our employees without creating IAM users: they could be authenticated through Active Directory, or with Facebook, Google or Amazon credentials, or another third-party identity provider. We trust those identity systems and give their users access to our account based on the trust relationship we build with them. So, two things: manage IAM users and their access in our AWS environment, and manage access for federated, non-IAM users. More importantly, IAM is a free service; we are charged only for the use of resources, not for the IAM usernames and passwords we create.
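The actual policy document was shown on screen rather than read out, so here is a hedged reconstruction, expressed as a Python dict, of the kind of policy the examplebucket/examplefolder answer refers to. The action list is my assumption of "add, update and delete objects".

```python
# Hypothetical reconstruction; examplebucket/examplefolder are the
# placeholder names used in the example, not real resources.
example_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::examplebucket/examplefolder/*",
        }
    ],
}
```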
All right, let's now talk about some questions on Route 53. One classic interview question is: what is the difference between latency-based routing and GeoDNS (geo-based DNS) routing? Geo-based DNS routing takes routing decisions based on the geographic location of the request, while latency-based routing uses latency measurements between networks and data centers. Latency-based routing is used when you want to give your customers the lowest possible latency. Geo-based routing is used when you want to direct customers to different websites based on the country they are browsing from: you could have two or three different websites behind the same URL. Take Amazon, the shopping website, for example. When we go to amazon.com from the US, it directs us to the US webpage, where the products, the currency, and the advertisements that show up are all different; when we go to amazon.com from India, we are directed to the Indian site, where again the currency, the products and the advertisements all differ. So if, depending on the country, you want to direct customers to different websites, you would use geo-based routing. Another use case for geo-based routing is compliance: if you are required to handle all requests from a country within that country, you would use geo-based routing so the customer is served by a server local to them rather than one in another country. And as I said, for latency-based routing the whole aim is to achieve minimum end-user latency.
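To show what latency-based routing looks like in practice, here is a hedged boto3 sketch that creates two latency records for the same name. The hosted zone ID, domain and IP addresses are placeholders.

```python
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z0000000EXAMPLE"  # placeholder

# Two records with the same name; Route 53 answers each query with the
# record whose region currently has the lowest latency to the caller.
for region, ip in [("us-east-1", "203.0.113.10"), ("ap-south-1", "203.0.113.20")]:
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": region,  # distinguishes the sibling records
                "Region": region,         # marks this as latency-based routing
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }]},
    )
```

A geo-based setup would look almost identical, except the records carry a GeoLocation block instead of a Region.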
If you are hired for an architect role that involves working a lot with DNS, you could be asked: what is the difference between a domain and a hosted zone? A domain is a collection of data describing a self-contained administrative and technical unit on the internet; for example, simplylearn.com is a domain. A hosted zone, on the other hand, is a container that holds information about how you want to route traffic on the internet for a specific domain. For example, lms.simplylearn.com is a hosted zone, whereas simplylearn.com is a domain. In other words, in a hosted zone you would see the domain name plus a prefix: lms is a prefix, ftp is a prefix, and in mail.simplylearn.com, mail is the prefix. That is how you recognize hosted zones.

Another classic question from Route 53: how does Amazon Route 53 provide high availability and low latency? It does so through globally distributed DNS servers. Amazon is a global service and has DNS servers around the world, so any customer making a query from any part of the world reaches a DNS server local to them; that is how Route 53 provides low latency. This is not true of all DNS providers. There are providers local to one country or one continent, and they generally cannot provide a low-latency service globally: it is low latency for local users, but anybody browsing from a different country or continent experiences high latency. Amazon, being a globally distributed DNS provider with servers in optimal locations around the globe, provides low latency; and because the service runs on many servers rather than one, it also provides high availability.

If the environment you are going to work on takes a lot of configuration and environment backups, you can expect questions on AWS Config. A classic one: how does AWS Config work along with AWS CloudTrail? AWS CloudTrail records user API activity on the account; any access made to the cloud environment is recorded in CloudTrail. In other words, for any API call, the time is recorded, the type of call is recorded, and the response (failure or success) is recorded too. It is essentially a log of the activity in your cloud environment. Config, on the other hand, is a point-in-time record of the configuration of your resources: at a given moment, what resources were present in my environment, and what was the configuration of those resources? With that information you can always answer the question "what did my AWS resource look like at a given point in time?"; that is what AWS Config answers. With CloudTrail you can easily answer the question "who made an API call to modify this resource?". With CloudTrail we can detect whether a security group was incorrectly configured and who made that configuration change. Say there was a downtime and you want to identify who made a change in the environment: you simply look at CloudTrail and find out who made the change, and if you want to see how the environment looked before the change, you can always look at AWS Config.
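Here is a minimal boto3 sketch of that "who changed this resource?" CloudTrail lookup. The security group ID is a placeholder.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Who touched this security group? ResourceName is one of the
# supported lookup attributes.
events = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "ResourceName",
        "AttributeValue": "sg-0123456789abcdef0",  # placeholder
    }],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```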
Can AWS Config aggregate data across different AWS accounts? Yes, it can. This question is really to test whether you have actually used AWS Config. Some AWS services are availability-zone specific, some are region specific, and some are global; and even for regional services you can often add configuration and permissions to collect data across regions. For example, S3 is a regional service, but you can still collect logs from all other regions into an S3 bucket in one particular region. CloudWatch is a regional service, but with some changes and added permissions you can monitor CloudWatch Logs belonging to other regions; it is not global by default, but you can make it behave globally. Similarly, AWS Config is a region-based service, but you can make it act globally: you can aggregate data across different regions and different accounts in AWS Config, deliver the updates from different accounts to one S3 bucket, and access the data from there. AWS Config also integrates seamlessly with SNS topics, so any time there is a change or new data gets collected, you can notify yourself or a group of people about the new configuration or edit that happened in the environment.

Let's look at some of the database questions. Databases are commonly run on reserved instances, and the interviewer may want to check whether you know why by asking: how are reserved instances different from on-demand DB instances? Reserved and on-demand instances are exactly the same in function; they differ only in how they are billed. Reserved instances are purchased with a one-year or three-year reservation, and in return we get a much lower per-hour price. It is generally said that a reserved instance is up to 75 percent cheaper than on-demand, and Amazon gives that benefit because you are committing for a year and sometimes paying in advance for the whole year. On-demand instances, on the other hand, are billed at an hourly price.

Talking about scaling, to test whether you understand the different types the interviewer might ask: which type of scaling would you recommend for RDS, and why? The two types of scaling, as you know by now, are vertical and horizontal. With vertical scaling we can scale up the master database with a couple of clicks. Vertical scaling keeps the same node and makes it bigger and bigger: if the database previously ran on a t2.micro, we now run it on an m3.2xlarge; previously it had one virtual CPU and 1 GiB of memory, now it has eight virtual CPUs and 30 GiB of RAM. The database itself can only be scaled vertically, and there are 18 different instance types we can resize RDS to. This is true for RDS MySQL, PostgreSQL, MariaDB, Oracle and Microsoft SQL Server.
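Vertical scaling of an RDS instance is a single API call. Here is a hedged boto3 sketch; the identifier and target class are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Vertical scaling: same instance, bigger class.
rds.modify_db_instance(
    DBInstanceIdentifier="my-database",   # placeholder
    DBInstanceClass="db.m5.2xlarge",      # the new, larger size
    ApplyImmediately=False,               # defer to the next maintenance window
)
```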
Horizontal scaling, on the other hand, means adding more nodes: previously the workload ran on one VM, now it runs on two, three or ten VMs. For databases, horizontal scaling is done with read replicas, read-only copies where we do not touch the master or primary database. With Amazon Aurora I can add up to 15 read replicas, and with RDS MySQL, PostgreSQL and MariaDB instances I can add up to five read replicas. When we add replicas we are horizontally scaling, adding more read-only nodes. So how do you decide between vertical and horizontal scaling? If you are looking to increase storage and processing capacity, you do vertical scaling; if you are looking to increase the performance of a read-heavy database, you implement horizontal scaling.

Still on databases, another good question you can expect in the interview: what is the maintenance window in Amazon RDS, and will your DB instance be available during a maintenance event? This is really to test how well you understand the SLA and the failover mechanism of Amazon RDS. The RDS maintenance window lets you decide when DB instance modifications, database engine upgrades, or software patching should occur: you decide whether it happens at midnight, in the afternoon, early in the morning or in the evening. Automatic scheduling by Amazon is done only for patches related to security and durability. By default the maintenance window is 30 minutes, and the important point is that the DB instance remains available during the event: because you have a primary and a secondary, Amazon shifts connections to the secondary, performs the upgrade, and then switches back to the primary.

Another classic question: what are the consistency models in DynamoDB? DynamoDB offers eventual consistency and strong consistency. The eventually consistent read model maximizes your read throughput, and the best part is that all copies of the data usually reach consistency within a second; but if you write and then immediately read, there is a chance you will still be reading the old data. That is eventual consistency. With the strongly consistent read model, there is some extra delay while the data is written everywhere, but it guarantees one thing: once a write has completed, a subsequent read is guaranteed to return the updated data, not the old data. That is a strongly consistent read.
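The choice between the two models is a per-read flag. Here is a hedged boto3 sketch; the table name and key are invented for the example.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # hypothetical table

# Default read: eventually consistent (may briefly return stale data).
item = table.get_item(Key={"order_id": "1001"})

# Strongly consistent read: guaranteed to reflect all prior successful
# writes, at the cost of higher latency and more read capacity.
item = table.get_item(Key={"order_id": "1001"}, ConsistentRead=True)
```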
Still on DynamoDB, Amazon's NoSQL database, you could be asked: what kind of query functionality does DynamoDB support? DynamoDB supports GET and PUT operations, and it provides flexible querying by letting you query on non-primary-key attributes using global secondary indexes and local secondary indexes. A primary key can be either a single-attribute partition key or a composite partition-sort key. In other words, DynamoDB indexes a composite primary key as a partition key element plus a sort key element, and by holding the partition key element constant when doing a query, we can search across the sort key element to retrieve the other items in the table. A composite partition-sort key could be, for example, a combination of a user ID (the partition key) and a timestamp (the sort key).
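Here is a hedged boto3 sketch of exactly that pattern: hold the partition key (user_id) constant and range over the sort key (timestamp). The table name, key names and values are invented for the example.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("UserEvents")  # hypothetical table

# Partition key fixed, sort key ranged: the composite-key query pattern.
response = table.query(
    KeyConditionExpression=Key("user_id").eq("u-42")
    & Key("timestamp").between("2021-01-01", "2021-01-31")
)
for item in response["Items"]:
    print(item)
```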
Let's look at some multiple-choice questions. Some companies hold a written test or an online MCQ test before they call you for the first or second interview round, and these are some classic questions that come up. First: "As a developer, using this pay-per-use service you can send, store and receive messages between software components. Which of the following is being referred to here?" The options are AWS Step Functions, Amazon MQ, Amazon Simple Queue Service and Amazon Simple Notification Service. Reading the question again: a pay-per-use service used to send, store and retrieve messages between two software components sounds like a queue. The right answer is Amazon Simple Queue Service. SQS is the service used to decouple an environment: it breaks tight coupling and introduces decoupling by inserting a queue between two software components.

Next question: "You would like to host a real-time audio and video conferencing application on AWS; this service provides you with a secure and easy-to-use application. What is this service?" The options are Amazon Chime, Amazon WorkSpaces, Amazon MQ and Amazon AppStream. You might be tempted by Amazon AppStream because the scenario mentions real-time video, but AppStream serves a different purpose. The answer is Amazon Chime, which lets you chat and collaborate with the security of AWS services: it supports audio and video conferencing, all backed by AWS security features.

Next: "As your company's AWS solutions architect, you are in charge of designing thousands of similar individual jobs. Which of the following services best serves your requirement?" The options are AWS EC2 Auto Scaling, AWS Snowball, AWS Fargate and AWS Batch. Thousands of similar individual jobs looks like a batch workload. Checking the other options: AWS Snowball is a storage transport service, EC2 Auto Scaling introduces scalability and elasticity into an environment, and AWS Fargate is a container service. AWS Batch is the service being referred to here; it runs thousands of similar individual jobs, so AWS Batch is the right answer.

Next one: "You are a machine learning engineer looking for a service that helps you build and train machine learning models in AWS. Which of the following are we referring to?" The options are Amazon SageMaker, AWS DeepLens, Amazon Comprehend and AWS Device Farm. The answer is SageMaker: it provides every developer and data scientist with the ability to build, train and deploy machine learning models quickly. To become familiar with the products, I recommend simply going through the product descriptions; there is a page on Amazon's site that explains all the products quickly and neatly, which really helps you learn what each product is about and what it is capable of. Is it a database service, a machine learning service, a monitoring service, a developer service? Get those details before you attend an interview, and you will be able to face such questions with great confidence. So the answer is Amazon SageMaker, the service that gives developers and data scientists the ability to build, train and deploy machine learning models as quickly as possible.

All right, next: "You are working in your company's IT team and are designated to adjust the capacity of AWS resources based on incoming application and network traffic. Which service do you use?" The options are Amazon VPC, AWS IAM, Amazon Inspector and AWS Elastic Load Balancing. Amazon VPC is a networking service; IAM handles identity, usernames and passwords; Amazon Inspector is a service that performs security assessments of our environment; and Elastic Load Balancing distributes traffic, which indirectly helps increase the availability of the application, and by monitoring how many requests come in through the load balancer we can adjust the environment running behind it. So among these options the answer is AWS Elastic Load Balancing.

Next: "This cross-platform video game development engine, which supports PC, Xbox, PlayStation, iOS and Android platforms, allows developers to build and host their games on Amazon's servers." The options are Amazon GameLift, AWS Greengrass, Amazon Lumberyard and Amazon Sumerian. The answer is Amazon Lumberyard. Lumberyard is a free AAA game engine, deeply integrated with AWS and Twitch, with full source; it provides a growing set of tools that help you create the highest-quality games and connect to vast compute and storage in the cloud.

Next question: "You are the project manager of your company's cloud architect team, and you are required to visualize, understand and manage your AWS costs and usage over time. Which of the following services is the best fit?" The options are AWS Budgets, AWS Cost Explorer, Amazon WorkMail and Amazon Connect. The answer is Cost Explorer.
Cost Explorer is an option in the Amazon console that helps you visualize, understand and even manage your AWS costs over time: who is spending more, who is spending less, what the trend is, and what the projected cost for the coming month looks like. All of this can be visualized in AWS Cost Explorer.

Next question: "You are the chief cloud architect at your company. How can you automatically monitor and adjust compute resources to ensure maximum performance and efficiency of all scalable resources?" The options are AWS CloudFormation, Amazon Aurora, AWS Auto Scaling and Amazon API Gateway. This is an easy one to answer: it is AWS Auto Scaling, a basic service covered in any solutions architect course. Auto Scaling is the service that monitors and automatically adjusts compute resources to ensure the maximum performance and efficiency of all scalable resources, by automatically scaling the environment to handle the incoming load.

Next: "As a database administrator, you will use a service to set up and manage databases such as MySQL, MariaDB and PostgreSQL. Which service are we referring to?" The options are Amazon Aurora, Amazon ElastiCache, AWS RDS and AWS Database Migration Service. Amazon Aurora is Amazon's own database engine offered through RDS, and ElastiCache is the caching service provided by Amazon; neither is the umbrella database-management service being described. Database Migration Service, as the name says, helps migrate databases from on-premises to the cloud, or from one database flavor to another. Amazon RDS is the umbrella service that helps us set up and manage databases like MySQL, MariaDB and PostgreSQL, so the answer is AWS RDS.

And the last question: "Part of your marketing work requires you to push messages to Google, Facebook, Windows and Apple through APIs or the AWS Management Console. Which service will you use?" The options are AWS CloudTrail, AWS Config, Amazon Chime and AWS Simple Notification Service. It is about pushing messages: SQS is a pull system, whereas SNS is a push system, and here we need a push system that delivers messages to Google, Facebook, Windows and Apple through APIs. So the answer is Simple Notification Service.

I would now like to welcome you to this Azure interview preparation session. Knowing Azure is one thing, having worked on Azure is another, and being able to answer interview questions on Azure is a totally different thing; although each helps the others, they are still different skills, and our aim in this part is to prepare you with common product-based and scenario-based interview questions. So why wait? Let's get started. A common cloud interview question is: what's the difference between SaaS, PaaS and IaaS? Software as a Service is a thin-client model of software provisioning, where the client, usually simply a web browser, provides the point of access to software running on remote servers.
SaaS is the most familiar form of cloud service for customers. SaaS moves the task of managing software and its deployment to a third party, meaning the vendor manages all of that. Software as a Service involves applications being consumed and used by an organization, which generally pays for its use of the application. Examples of SaaS include Office 365; Salesforce is another very good example; and many Google apps, along with storage solutions like Box and Dropbox, are good examples of Software as a Service.

Platform as a Service, or PaaS, functions at a lower level than SaaS. Typically it provides a platform on which software can be developed and deployed: here we develop the software and we deploy the software, while PaaS abstracts away much of the work of dealing with servers, giving the client an environment in which the operating system, the server software, the hardware and the networking are managed and taken care of by the provider. In other words, with Platform as a Service everything I have mentioned (the servers, the server software, the hardware) is managed by the provider, and we can focus on the business side of scalability and on developing our product or service. In short, PaaS enables developers to build and work with applications without having to worry about the infrastructure or the management of the underlying hosting environment. Examples of PaaS in Azure are Azure SQL and Azure Storage.

Infrastructure as a Service, IaaS, moves down the stack even further, to the fundamental building block of cloud services. IaaS offers highly automated, scalable compute resources, along with storage and networking capability. IaaS clients have direct access to servers and storage, just as they would with traditional servers, but in the cloud and far more scalable. IaaS is very similar to what you would do in your on-premises physical data center: you get to do everything yourself, but it is hosted in the cloud. If we need a definition around IaaS: Infrastructure as a Service provides users with components, such as operating systems and networking capabilities, rather than a pre-built environment. It is paid for based on usage and can be used to host applications; in other words, this is the pay-as-you-go type: the more you use, the more you pay, the less you use, the less you pay. Examples of IaaS in Azure are virtual machines, a great example, and VNets for networking.

Another common question in an Azure interview is: what are the instance types offered by Azure? The main intention of this question is to check how well you have understood the different offerings in Azure and how well you are trained to pick the right offering for the right service. One size does not fit all, and therefore there are many VM families in Azure that, under the carpet, do the same thing; depending on how different your requirement is, you have to pick the appropriate one. So this question tests how well you have used the products and services available in Azure and how well you can apply them to a given requirement.
You should not be over-provisioning, and you should not be under-provisioning either; it is about matching the right service to the right requirement. So what are the instance types offered by Azure? As you see in the list, we have general purpose, compute optimized, memory optimized, storage optimized, GPU virtual machines, and high-performance compute virtual machines. Just naming them will not be enough in an interview: you will have to go further and explain why and in which scenario you would use each type, what the use cases are, and which workloads are a good fit for general purpose versus compute optimized, and so on. That is exactly what we will do now.

General purpose VMs provide a balanced CPU-to-memory ratio and are very good for testing, for development environments, for small and medium databases, and for low- to medium-traffic web servers. Use cases include test servers, low-traffic web servers, small to medium databases, some enterprise-grade applications, relational databases, servers used for in-memory caching, small analytics databases and microservices. If you are trying to build a proof of concept for an idea you have just parked, this is also a good choice, because you are not going to send actual traffic to it; you just want to show that the idea works. The largest instance size in general purpose is Standard_D64_v3, which comes with 256 GiB of memory and 1,600 GiB of SSD temporary storage.

Compute optimized VMs have a high CPU-to-memory ratio and are very good for medium-traffic web servers, batch processing servers and application servers. Because compute means CPU, they are an excellent choice for workloads that demand faster CPUs but do not need as much memory or temporary storage per virtual CPU. Analytics workloads, gaming servers (which require more CPU) and batch processing are some of the applications that can be placed on compute optimized instances, and by doing that we get the real benefit of the family. The largest instance size is Standard_F72s_v2, which gives us 144 GiB of memory and 576 GiB of SSD temporary storage.

Along the same lines, memory optimized VMs offer a high memory-to-CPU ratio and are great for databases (databases want more memory, so it is a great fit), for medium to large caches, and for applications that require in-memory analytics. The largest instance size here is Standard_M128m, and look at the memory: 3,892 GiB, along with 14,336 GiB of temporary storage.
On the same lines, storage optimized. I probably do not have to explain what storage optimized is used for; you might have guessed from the flow. Yes: storage optimized VMs offer high disk throughput and IO and are ideal for big data, SQL and NoSQL databases, data warehousing servers, large transactional databases and more. Examples of applications that benefit from running on storage optimized instances are Cassandra, MongoDB, Cloudera and Redis. One difference between storage optimized and the other families is that they are generally configured to use the local disk on the node directly attached to the VM, rather than a durable disk in remote storage. What does this allow? Greater input/output operations per second and greater throughput for the workload. The largest instance size available in storage optimized is Standard_L32s, with 256 GiB of memory and, look at the temporary storage, 5,630 GiB.

GPU-type VMs are easy to guess: GPU optimized VMs are specialized virtual machines with multiple GPUs attached. These sizes are designed for compute-intensive, graphics-intensive visualization workloads that require a lot of graphical processing power. In short, these are virtual machines that specialize in heavy graphics rendering and video editing; they also help with model training and inferencing. The Standard_ND24rs size comes with 448 GiB of memory and 2,948 GiB of temporary storage.

And last but not least, in fact last but the best: high-performance compute, the Azure H-series virtual machines. These are the latest in high-performance computing VMs and are aimed at workloads like batch processing, analytics, molecular modeling and fluid dynamics; a lot of complicated applications run on these VMs. They are the fastest, most powerful CPU virtual machines, with optional high-throughput network interfaces, and the largest instance size, Standard_H16mr, comes with 224 GiB of memory and 2,000 GiB of SSD temporary storage.

A third common question is: what are the deployment environments offered by Azure? There are two main deployment environments: the staging environment and the production environment. Let's talk about staging first. When deploying your web app, or web app on Linux, you can deploy it to a separate deployment slot instead of the default production slot when running in the Standard, Premium or Isolated App Service plan tiers. Deployment slots are live apps with their own host names, and at a later point the staging environment can be swapped with the production environment. So why do we need a staging environment, and what are its benefits? Deploying our application to a non-production (staging) slot provides a platform to validate changes to the application before they go live in the production environment. In the staging environment the app can be identified by Azure's globally unique identifier, also called the GUID, in its URL, which is very similar to the production URL except that it has an extra name in front identifying it as the staging environment.
The production environment is the live environment serving customer requests and customer content. It differs slightly from the staging environment in that the URL used to identify production is more often a DNS-friendly name, like the actual service name followed by .cloudapp.net, whereas in the staging environment you have a custom name in front of the .cloudapp.net address. So production is the live environment that receives, handles and serves customer traffic.

Another commonly asked Azure question: what are the advantages of scaling in Azure? The thought behind this question is to see how much you have understood scaling, how much you have seen and applied it in a production environment, and what benefits you received in return. So let's talk about some of the advantages of scaling in Azure. First, we get maximum application performance. Auto scaling is a built-in feature of cloud services, whether AWS, Azure, Google or other cloud providers; a cloud service should be auto-scalable, and that includes offerings like mobile services and virtual machines. When we run our applications on them, the application gets the best performance through changes in demand. Different applications have different performance needs; for some apps performance is measured by memory, for example. Another good example is fluctuating demand: you could have a web app that handles millions of requests during the day and literally nothing at night. Auto scaling any of these environments will automatically scale out, or fatten, your environment to receive all the incoming traffic, and during lean periods it gets slimmer and slimmer to help with cost. So it maximizes performance, and, as we said, it scales both up and down based on demand, the scale-down being what saves you money. And if you know the particular pattern in which the application receives traffic, you can very well go ahead and schedule the scaling of that infrastructure based on time. If I know that Monday to Friday the traffic is constant, because it is an internal rather than public-facing application and I know all 500, or 1,000, or 5,000 users who will be using it, so at any given point it will not go beyond that; and if I know that on Saturday and Sunday literally nobody is in the office, so there is no load at all; then I pretty much know the pattern and can go for scheduled scaling. And as I said, auto scaling not only keeps the application highly available, it also keeps the infrastructure cost-effective: any time a VM or a group of VMs is running at low CPU, auto scaling shrinks the environment, so we are not unnecessarily running resources and paying for them.

If you are being interviewed for the infrastructure side of Azure, this is another common question: how are Windows Active Directory and Azure Active Directory different? Let's talk about Windows Active Directory first. The non-cloud Windows Active Directory service was released with the Windows 2000 Server edition, and it is essentially a database that helps organizations organize their users, their computers and more.
It provides authentication and authorization to applications, and not only to applications but also to file servers, printers and a lot of other on-premises resources. That is what the basic, non-cloud Active Directory does. Azure Active Directory, on the other hand, was not designed to manage on-premises infrastructure; it was designed to support web-based services that use REST API interfaces, such as Office 365, Salesforce.com and so on. Unlike the plain Active Directory, it uses completely different protocols, so protocol-wise it is different, and the services it supports are quite different. Besides that, it has a couple of other differences as well, so let's look at them. Windows Active Directory is a directory service that facilitates working with interconnected, complex, heterogeneous network resources in a unified manner; Azure Active Directory is Microsoft's multi-tenant, cloud-based directory and identity management service. Windows Active Directory has five layers to store data, store user details and issue management certifications; Azure Active Directory compresses those five layers into just two. Windows Active Directory works with on-premises resources like applications, file servers and printers; Azure Active Directory works with web-based services that use RESTful interfaces.

If you are being hired for a development environment, a cloud DevOps support environment, or even a production support environment, you might find yourself asked this question: what are the types of queues offered by Azure? Azure supports two types of queue mechanisms: the Storage queue and the Service Bus queue. Let's talk about Storage queues first. Storage queues, which are part of the Azure Storage infrastructure, provide a simple REST-based interface, with simple REST-based get, put and peek operations. They provide reliable, persistent messaging within and between services, and they are best suited for use cases that need to store more than 80 GB of messages in a queue. They can also provide logs of all the transactions executed against the user's queues. That is what we get with Storage queues. Service Bus queues, on the other hand, are built on top of a broader messaging infrastructure and are designed to integrate applications and application components that may span multiple communication protocols and even totally different network environments. So in short, Service Bus queues are part of Azure's messaging infrastructure and integrate applications or application components that can span multiple protocols and multiple network environments; they also provide first-in, first-out (FIFO) delivery, and the user's queue size has to remain under 80 GB.
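To illustrate the Storage queue's simple put/get model, here is a hedged Python sketch using the azure-storage-queue SDK. The connection string and queue name are placeholders, and note this shows the Storage queue, not Service Bus.

```python
# Requires the azure-storage-queue package.
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    conn_str="<storage-account-connection-string>",  # placeholder
    queue_name="orders",                             # placeholder
)

queue.send_message("process-order-1001")

for message in queue.receive_messages():
    print(message.content)
    queue.delete_message(message)  # remove it once handled
```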
Another familiar question is: what are the advantages of Azure Resource Manager? Resource Manager helps us manage the use of application resources, and this question tests how well you have used Resource Manager and benefited from it, and how much easier deployments and provisioning have become with Resource Manager compared to without it. So let's get into the answer. In short, Azure Resource Manager is called ARM. ARM helps deploy, manage and monitor all the resources for an application, a solution or a group: all the interconnected applications and services can be monitored as a group using Resource Manager. Users can be granted access to exactly the resources they require within a resource group: in an account I can have, say, ten different resources in resource groups created through Resource Manager, and I can allow or deny access to particular services based on whether a given user should be accessing one and not another, so it becomes easy to manage access for a group of applications. It also helps in getting billing details for a group of resources: which group is using more, which group is using less, and which group contributed most to this month's bill. Those details can be obtained using Azure Resource Manager, and provisioning resources is made much easier with its help.

Another question: how has integrating hybrid cloud been useful for Azure? With hybrid cloud we get the best of both worlds. And what is hybrid cloud? It is nothing but combining the public cloud and the private cloud, allowing data and applications to be shared between them. Whenever compute or processing demand fluctuates, hybrid cloud computing gives businesses the ability to seamlessly scale their on-premises infrastructure out into the public cloud and handle any overflow in demand, which really helps boost the productivity of on-premises applications. With hybrid cloud we get greater efficiency from the combination of Azure services, DevOps processes and tools for the applications running on-premises, and users can take advantage of constantly updated Azure services and Azure Marketplace applications for their on-premises environment. The other benefit is that with a hybrid cloud environment we can deploy applications regardless of location: on-premises we have to worry about the location, but when we extend our on-premises environment into the cloud we can pick any of the locations and simply deploy there, and this enables applications to be created at greater speed.

What is federation in Azure SQL? This question is very specifically about SQL: how can we scale the SQL database? It is a very good and important interview question, because many customers or companies have not been able to meet user demand when they could not scale out their databases. The theory of scaling out, adding servers to accommodate increased workload and traffic, is not hard to understand, but the implications can be very complicated and very expensive. We are well aware of scaling web servers, which is very common; but how do we scale the database? Microsoft provides the tools and technologies to scale out the database in the cloud, and that is what is called federation in Azure SQL. The way we scale out the SQL database is by sharding it.
Sharding enables users to take advantage of resources in the cloud, and it allows users to have their own database or to share databases amongst each other. Because we are creating a highly available database, because we have shards of a database, it reduces the possibility of a single point of failure for our database. And more importantly, because we are sharding, because we are using federation in Azure SQL, we get cost-effective scaling of our databases: we are billed only for the cloud resources we actually use. No pre-provisioning, no over-provisioning; it provisions the right amount, and we pay the right amount.

Let's talk about this one: what are the different types of storage offered by Azure? As you can see, the different types of storage offered by Azure are Blob storage, Table storage, File storage and Queue storage; let's expand on them one after the other. Blob storage is massively scalable object storage that is very good for storing text and binary data, and Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data, whether in the form of text or binary data. In short, Blob storage enables users to store unstructured data (pictures, music, video files and lots more) along with its metadata. Another feature benefit we get from Blob storage is that when an object is changed, it is verified to ensure the latest version is served; it also provides maximum flexibility to optimize the user's storage needs; and the unstructured data is made available to customers through URLs over a REST-based object storage interface.

Table storage, on the other hand, is a NoSQL store for schema-less storage of structured data. Azure Table storage is a service that stores structured NoSQL data in the cloud, and because the table is schema-less, it is very easy to save your data and very easy to adapt your data as the needs of your application grow. Table storage is very fast and cost-effective for many types of applications. Some of the data that Table storage is good for: flexible datasets like user data for web applications, address books, device information, and metadata in general; if you want to store metadata, Azure Table storage is a very good fit.

Azure Files is another storage type here: a managed file share for cloud or on-premises deployment. File storage provides file-sharing capability accessible over the Server Message Block (SMB) protocol, and the shares can be accessed from the cloud as well as from on-premises. In File storage the data is protected by the SMB 3.0 and HTTPS protocols, and, more importantly, Azure takes care of managing the hardware and operating system deployments for Azure File storage. This additional file storage can be used when we want to burst storage capacity beyond on-premises: on-premises is the primary and the cloud is the secondary, extended storage, so it actually improves the performance and capabilities of our on-premises data center.
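Before moving on to queues, here is a hedged Python sketch of the Blob storage workflow described above, using the azure-storage-blob SDK. The connection string, container, blob and file names are invented for the example.

```python
# Requires the azure-storage-blob package.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    "<storage-account-connection-string>"  # placeholder
)
blob = service.get_blob_client(container="media", blob="pictures/cat.jpg")

# Upload unstructured binary data (a picture, in this example).
with open("cat.jpg", "rb") as data:
    blob.upload_blob(data, overwrite=True)

# The blob is now addressable over HTTPS at its REST URL:
print(blob.url)
```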
Now let's look at this one: what is the Text Analytics API in Azure Machine Learning? Text Analytics is a cloud-based analytics API that provides advanced natural language processing over raw text, and it has a few main functions: sentiment analysis, key phrase extraction, and language detection. What do we mean by sentiment analysis? Sentiment analysis takes text, say logs or customer comments, analyzes it, and tells us whether it is a positive or a negative statement. The API returns a sentiment score between 0 and 1, where 1 is positive and 0 is negative. Then we have key phrase extraction, which automatically extracts the key phrases so we can quickly identify the main points. For example, if you are analyzing a text that says "the food was delicious and there were wonderful staff", the API returns the main talking points of that phrase, like "food" and "wonderful staff". And then we have language detection: irrespective of what you paste in, it tries to match the text against the 120 or so languages it supports, so I can simply take text from the internet, paste it in, and the service will identify the language and then run key phrase and sentiment analysis on it. So in short, Text Analytics is a set of web services for analyzing unstructured text: sentiment analysis, key phrase extraction, and a lot more, with sentiment results between 0 and 1, 1 being positive and 0 being negative. There is not much training involved; in other words, it is not as complicated as a couple of the other text analysis products available in the market. We can simply paste or upload the text, call the service, and it runs the analysis all by itself.
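As a concrete illustration, here is a minimal sketch of the three functions using the azure-ai-textanalytics Python package, the SDK this service now ships under; the endpoint and key are placeholders for your own Cognitive Services resource.

```python
# Minimal sketch of sentiment, key phrase, and language analysis
# (`pip install azure-ai-textanalytics`). Endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

docs = ["The food was delicious and there were wonderful staff."]

# Sentiment: confidence scores are values between 0 and 1 per label.
sentiment = client.analyze_sentiment(docs)[0]
print(sentiment.sentiment, sentiment.confidence_scores)

# Key phrases: the "main talking points" discussed above.
print(client.extract_key_phrases(docs)[0].key_phrases)

# Language detection.
print(client.detect_language(docs)[0].primary_language.name)
```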
Let's look at this question: what are the advantages of Azure Queue storage? If you are going to work in a development environment, or in an environment that embraces DevOps, this could be a question. Azure Queue storage is built to let applications that run large workloads operate flexibly and keep their functions separate. When we design applications for scale, the components can be decoupled so they scale independently: what happens in one section of the application will not affect the others, because they are now decoupled and connected through the queue. Queue storage gives us asynchronous message queuing for communication between application components, irrespective of whether they run in the cloud, on a desktop, on-premises, or on mobile devices. In short, it enables message queuing for large workloads in a simple, cost-effective, and durable manner. Talking about the advantages: it provides rich client libraries for Java, Android, C++, PHP, Ruby, and more, with other languages added with every new release from Azure. The main advantage, again, is that it enables users to build flexible applications and separate functions for greater durability; introducing queues into our application ensures it is scalable and less prone to individual component failures, meaning one component failing is not going to take the whole application down. If one component fails, it's just that component that fails; the rest stay healthy and keep functioning. It also helps us monitor the queues and ensure the servers aren't overwhelmed by sudden traffic bursts: how much is sitting in the queue is a good indication of the traffic hitting my application, so if the queue is deep I can scale out my environment, and if the queue is shallow I can shrink the environment and save cost. Any time there is more data in the queue, I can watch that metric and auto scale based on it, so the environment knows more data is coming in and expands itself to handle it.
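Here is what that decoupling looks like in a minimal sketch with the azure-storage-queue Python package; the queue name and the message payload are hypothetical, and the handler on the consumer side is just a print standing in for your own code.

```python
# Minimal sketch: two decoupled components communicating through Azure Queue storage
# (`pip install azure-storage-queue`). Queue name and payload are hypothetical.
import os
from azure.core.exceptions import ResourceExistsError
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"], queue_name="orders"
)
try:
    queue.create_queue()
except ResourceExistsError:
    pass  # queue already exists, which is fine

# Producer side: the web front end drops work items (up to 64 KB each) on the queue.
queue.send_message('{"order_id": 42, "action": "resize-images"}')

# Consumer side: a background worker pulls and processes messages independently,
# so a failure here never takes the front end down.
for msg in queue.receive_messages(messages_per_page=16):
    print("processing", msg.content)  # hand off to your own handler here
    queue.delete_message(msg)         # remove only after successful processing
```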
This is a very common question: what are the two kinds of Azure web service roles? A service role is a set of managed, load-balanced virtual machines that work together to perform a task, and what runs on top of them defines the kind of role attached to those virtual machines. We have two types: web roles and worker roles. A web role is a cloud service role configured to run web applications developed on programming languages and technologies supported by IIS (Internet Information Services), such as ASP.NET, PHP, and Windows Communication Foundation. Web roles automatically deploy and host applications through IIS. Worker roles, on the other hand, run applications and service-level tasks that generally do not require IIS; IIS is really the differentiating factor, and in worker roles IIS is not installed by default. Worker roles are mainly used to perform supporting background processes alongside web roles: automatically compressing or uploading images, running scripts, making changes in a database, picking new messages off the queue and processing them, and a lot more; work that does not require IIS is what the worker role does. Again, the main difference is that the web role automatically deploys and hosts your application through IIS, whereas the worker role does not use IIS and runs your application standalone. This is another classic question: what is Azure Service Fabric? Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. Service Fabric addresses some of the significant challenges in developing and managing cloud-native applications: developers and administrators can avoid complex infrastructure problems and focus on implementing mission-critical, demanding workloads that can be scaled and managed from a single place. In short, Service Fabric provides a platform that makes developing microservices and managing the application lifecycle a lot easier. Its advantages: we can ship applications with a faster time to market, because all the worry about the infrastructure is taken away from us; all we need to think about is the application and its lifecycle. It supports Windows and Linux, and it supports servers on-premises as well as in the cloud, and with Service Fabric we can scale our environment up to even a thousand machines with a single command, so if there is an immediate need for a thousand machines, I can scale up to them right away.
Now let's look at this question. You can expect it when the customer runs a hybrid environment, meaning some applications run on-premises and some run from the cloud, and when classifying what goes to the cloud and what stays on-premises they have decided to keep the database in-house; a lot of customers do that, so in that environment this is a classic scenario-based question. A client wants the front end of their application hosted on Azure, in the cloud, and wants the database hosted on-premises, for security reasons or to retain full control of their databases. How do we go about suggesting a solution? The ideal solution in this scenario is a VNet-based point-to-site VPN. All the front-end applications will be in the cloud, hosted in a VNet, and from the VNet they connect to the database through a point-to-site VPN, so the reads and the writes are not going over the open internet but through the point-to-site VPN link connecting the Azure VNet and the on-premises environment. This approach is best suited for scenarios where only a limited number of resources need to be connected between on-premises and the cloud. This is a very common question: what is Azure Traffic Manager? Of course, we no longer run applications on a single server or from a single environment; the same application is run from multiple environments within Azure, and it can be running from Azure and on-premises as well. A lot of customers have such environments, and if you are facing an interview with such a customer, this could be an ideal question. Azure Traffic Manager is a DNS-based traffic load balancer that enables us to distribute traffic to services across Azure's global regions, and by doing so it provides high availability and good responsiveness for the application. Traffic Manager uses DNS to direct client requests to the most appropriate service endpoint, based on the traffic-routing method and on the health of the endpoints, which it monitors. In short, Traffic Manager is a load balancer that gives users high availability and responsiveness by distributing traffic in an optimal manner across Azure when we run the same application in different regions. Some of the advantages and use cases: it provides multiple automatic failover options, it helps reduce downtime, it distributes user traffic across multiple locations so no single location is overloaded, and it lets us know where our customers are connecting from, which is another big use case for Azure Traffic Manager.
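Because Traffic Manager works purely at the DNS level and never proxies the traffic itself, you can see its effect from any client with an ordinary name lookup. A minimal sketch, where the Traffic Manager profile domain is hypothetical:

```python
# Minimal sketch: Traffic Manager only answers DNS queries, so a plain lookup
# shows which regional endpoint it picked for this client. Domain is hypothetical.
import socket

endpoint_ip = socket.gethostbyname("myapp.trafficmanager.net")
print(f"Traffic Manager routed this client to {endpoint_ip}")
# A client in another geography, or the same client after a failover,
# would resolve the same name to a different, healthy endpoint.
```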
Let's look at this question; it's an ideal one. There is a group of servers connected together within a virtual network, and now we need to move them, or create a separation between them. How do you go about achieving it? The question goes like this: you need to isolate network traffic among VMs in a subnet, which is part of a virtual network, with little downtime and little impact on users. The best way to do it is to create a new virtual network and move the VMs in that subnet to the new virtual network. This feature is not possible with a lot of other cloud service providers, such as AWS; in those environments we might need to stop the VM, create a new VM based on an image, and go through a hefty process, but in Azure I can simply move the VMs from one subnet to another virtual network. Without needing any additional controls like a network security group, I can isolate them simply by creating a new virtual network and moving the servers into it. Look at this one, another common question with respect to Azure: what are public, private, and hybrid clouds? This really tests how well you have understood the different cloud offerings in the market; public, private, and hybrid are at least the three basic offerings. The public cloud is the most common way of deploying cloud computing applications: resources like servers and storage are owned and operated by a third-party cloud service provider, and Microsoft Azure is a very good example of a public cloud. Here, every component the user is using runs on Azure. Some of the advantages of public cloud: low cost, because there is no need to purchase hardware or software and we pay only for the services we use; practically no maintenance, because the service provider maintains the environment for us; nearly unlimited scalability, meaning we can get resources on demand to meet our business requirements; and high reliability, because providers have a vast network of servers that helps ensure our application does not fail. Those are some advantages of public cloud. Let's talk about private cloud. A private cloud consists of compute resources used exclusively by one business or organization. It can be physically located in our organization's on-site data center, or it can be hosted by a third-party service provider; whichever the case, the private cloud's services and infrastructure are always maintained on a private network, on hardware and software dedicated solely to one organization. In short, a private cloud in the Azure world is Azure services being run within an on-premises data center, or an on-premises data center used by the user to host systems or applications. Some of its advantages: more security, because resources are not shared with others, so a higher level of control over our resources and applications is possible. And then we have hybrid cloud. Hybrid cloud is the best of both worlds: it combines the features of both public and private cloud, where some components run on Azure and others within the on-premises data center; they share the application, with part of it running on-premises and part in the cloud, working in harmony to support the application and the business need. That's hybrid cloud. This is another good example of a question that tests how well you pick services, how well you have understood Azure products and can select the right one for the need. The question goes like this: what kind of storage is best suited to handle unstructured data? There are a lot of storage options available, and the requirement is to pick the one for unstructured data. The answer is Blob storage, because Blob storage is designed to support unstructured data. It works by placing data into different tiers based on how often it is accessed; different tiers mean different performance, and different performance means different cost, so we get a lot of add-on advantages when we use Blob storage for unstructured data. In addition, any type of unstructured data can be stored in Blob storage, which is not true of a couple of the other storage options in Azure, and data integrity is verified every time an object is changed. And the best part: Blob storage helps increase application performance and reduces bandwidth consumption for the application. Those are the benefits we get from Blob storage, Blob storage is the one best suited for unstructured data, and that's what your answer should be.
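To make the tiering point concrete, here is a minimal sketch using azure-storage-blob: demoting a rarely read object to a cooler tier trades access speed and per-read cost for cheaper storage. The container and blob names are placeholders.

```python
# Minimal sketch: Blob storage access tiers. Hot = frequent access; Cool and
# Archive = cheaper storage but costlier, slower reads. Names are placeholders.
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
blob = service.get_blob_client(container="media", blob="archive/old-report.pdf")

# Demote a rarely accessed object to the Cool tier to cut storage cost.
blob.set_standard_blob_tier("Cool")
print(blob.get_blob_properties().blob_tier)  # -> 'Cool'
```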
Another question you can expect is how you create a virtual machine in Azure, and it's really a five-step process; if you have done some basic labs with Azure, you can easily answer it. The first step is to log in to Azure. The second is to create a resource through the Resource Manager: within it you select the resource, then pick the operating system, Windows or Linux, and within Windows or Linux, which flavor you want. Decide on it, and then enter the relevant information: the name of the instance or VM you're going to launch, the password, the URL, and a couple of other details that get attached to the VM. Then select the size of the virtual machine; different sizes and types are available for the kind of application and the intensity of the workload that will run on top of it. Finally, review everything; if any changes are required, go back and edit them, then come back and launch. Your VM is there for you to start working on within three or four minutes, not even five. So it's a quick five-step process, and you should be able to answer it easily if you have done a few labs in Azure. Let's now look at some scenario-based questions. We picked some common scenario-based questions asked in interviews and present them here with answers and explanations, so you can benefit from them. Let's look at this one: you're asked to make sure your virtual machines are able to communicate with each other securely; what would you do? The correct and best answer is to use a virtual network in Azure, which enables us to communicate with the internet, and with the on-premises data center, in a secure fashion. The advantage of using a virtual network is that users can create their own private network: pick their own private IP ranges, create their own subnets, set up their own routing between those subnets, and a lot more, so it's very customizable, and users are provided with an isolated and highly secure environment for applications. It's completely isolated from other customers, and from applications running in other virtual networks that we own; within our account we can have multiple virtual networks, and an application running on a virtual machine in one is completely isolated from applications running on virtual machines in another. All traffic stays within the Azure network, depending on how you set up the routing: if you have set up a route to reach the internet, traffic will go there, otherwise it stays within Azure; if you have set up routing to reach on-premises, it will reach on-premises, otherwise it stays within Azure. It also allows users to design their own network, as we already discussed: picking IPs, routing, subnets, how many servers a particular subnet should accommodate, the subnet sizes, the IP ranges, NAT, the masking of IPs, creating VPNs; all of that is possible with a virtual network. It really allows users to design their own network, and using virtual networks is how we secure applications in the cloud.
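For reference, here is a minimal sketch of creating such a virtual network programmatically with the azure-mgmt-network management SDK; the resource group, region, and address ranges are placeholders, and this assumes a recent SDK version where the long-running operations use the begin_ prefix.

```python
# Minimal sketch: creating a VNet with one subnet via the Azure management SDK
# (`pip install azure-identity azure-mgmt-network`). All names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.virtual_networks.begin_create_or_update(
    "my-resource-group",
    "app-vnet",
    {
        "location": "eastus",
        # Our own private IP range, as discussed above.
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [{"name": "frontend", "address_prefix": "10.0.1.0/24"}],
    },
)
vnet = poller.result()  # waits for the long-running create to finish
print(vnet.name, vnet.address_space.address_prefixes)
```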
Let's look at this other scenario: how do you ensure that every time a user logs in, they are not asked to re-enter their password as part of authentication? You really don't want your users re-entering the password every time they log in to a different application. All the applications have their authentication mechanisms in place; all of them want to authenticate the user before login. Ensuring the user does not log in every time does not mean wiping away all the authentication and authorization present in the application; you still need that in place, but how do you make the experience hassle free, so users are not asked to re-enter the same password again and again? Let's look at the options. The first is to enable Microsoft account authentication; that's not going to fix it, because the user will still need to re-enter the username and password. Deploy ExpressRoute: not going to fix it either, because ExpressRoute is a network-level service that connects on-premises to the cloud; it has nothing to do with prompting or not prompting for a password. Then we have: set up a VPN between the on-premises data center and Azure, set up an AD domain controller in a VM, and implement integrated Windows authentication. With that you can use the same username and password on-premises and in the cloud, but the VPN and AD controller setup is not going to stop repeated password prompts; this is all about using the same password on-premises and in the cloud, which is different from not prompting the user to re-enter the password; they are two different scenarios, so that option is also out of the equation. And the last one: configure AD sync to use single sign-on. That's the right one. When we configure AD sync with single sign-on, it's not going to ask for the username and password every time we access an application, because we have logged in once and that login stays active, for something like 24 hours depending on how you configure it; within that time you can access a lot of other applications, and it's not going to ask for credentials again because you already signed in with the right credentials.
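The pattern behind that answer is silent token reuse: after one interactive sign-in, subsequent applications obtain tokens from a cached session without prompting. A minimal sketch of that flow with the MSAL library for Python; the client ID, tenant, and scope are placeholders.

```python
# Minimal sketch of the single sign-on pattern with MSAL for Python
# (`pip install msal`). Client ID, tenant, and scopes are placeholders.
import msal

app = msal.PublicClientApplication(
    client_id="<app-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)
scopes = ["User.Read"]

# First try silently: if a cached sign-in exists, there is no password prompt.
accounts = app.get_accounts()
result = app.acquire_token_silent(scopes, account=accounts[0]) if accounts else None

# Only fall back to an interactive prompt when no cached session exists.
if not result:
    result = app.acquire_token_interactive(scopes)

print("token acquired" if "access_token" in result else result.get("error"))
```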
Let's look at this one: you need to ensure that virtual machines remain available while migrating to Azure; what would be the appropriate service to use? Let's look at the options. Traffic Manager: Traffic Manager is literally a DNS service. Update domains: updating the URL so Traffic Manager starts sending requests to the new location is going to incur downtime, because when we update a URL it has to propagate to many different places, which takes time, and within that window any user trying to access the application is going to fail. Then we have ExpressRoute and cloud services. ExpressRoute is in fact the right answer, because ExpressRoute is an extension between our on-premises and cloud environments, and this question really comes from a customer running a hybrid environment: they have applications running on-premises and in the cloud, and they want a way to migrate applications from on-premises to the cloud, in other words to do a cutover between the two. ExpressRoute is a service that connects on-premises and the cloud, so when you do the cutover, the traffic is sent to the cloud instead of being handled on-premises; as the services and applications get shut down on-premises, requests keep arriving in the same pattern, but instead of being handled on-premises they are now routed to the cloud through ExpressRoute, and the API calls and queries get answered in the cloud. Look at this question: you are an administrator for a website called WebGame, and you are required to validate and deploy changes made to your website by your development team with minimum downtime. The real question is: how do you validate the deployment changes made by the development team? Let's look at the options: create a new linked resource, create a staging environment for the site, enable remote debugging on the website, or create a new website. Why would you create a new website just to validate changes? And remote debugging is not going to help, because debugging only captures logs of what is happening; it does nothing to validate the changes. Create a staging environment is the right answer, because with a staging environment, anything that would run in production can be run in staging first, and any failures that would have occurred in production, had we deployed straight there, can be caught when we run the application in the staging environment. That way staging is a very helpful and useful service: I can catch errors, in other words validate the changes done by my development team, before I move them to production, and that reduces downtime in the production environment.
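One simple way to "validate in staging" in practice is a scripted smoke test run against the staging slot before it is promoted to production. A minimal sketch with the requests library; the staging hostname and the checked paths are hypothetical.

```python
# Minimal sketch: smoke-testing a staging slot before promoting it to production.
# The staging hostname and paths are hypothetical (`pip install requests`).
import requests

STAGING = "https://webgame-staging.azurewebsites.net"
CHECKS = ["/", "/healthz", "/api/leaderboard"]

def smoke_test() -> bool:
    """Return True only if every critical path responds successfully."""
    for path in CHECKS:
        resp = requests.get(STAGING + path, timeout=10)
        if resp.status_code != 200:
            print(f"FAIL {path}: HTTP {resp.status_code}")
            return False
        print(f"OK   {path}")
    return True

if __name__ == "__main__":
    # Only swap staging into production when the smoke test passes.
    raise SystemExit(0 if smoke_test() else 1)
```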
Look at this last question we have for you. A standard-tier application is used across the world on the Azure Websites standard tier, and it uses a large number of image files; you can guess this could be an e-commerce website with a lot of pictures, and this is causing the application to load slowly. How can we handle the situation? Let's look at the options given. Configure Blob storage with a custom domain: well, this application has pictures, but pictures are not all the application has; this could be a very interactive website, and that can't be run from Blob storage alone. Configure Azure Websites auto scaling to increase instances at high loads: it's the pictures causing issues for this website, not the CPU and not insufficient memory, and we need to address what's actually making the application slow, so auto scaling is not going to help. Configure Azure CDN to cache all responses from the application's web endpoint: a CDN could look like the right answer, but notice the wording, caching all responses from the web endpoint; that is not what a CDN is designed for. Though it can do it, the proper design is to cache the frequently used, static content: photos, videos, logos, and other content that rarely changes. Let's look at the last option: configure Azure CDN to cache site images and content stored in Azure Blob storage. Absolutely correct. Here we would redesign the application so the large, high-quality, slow-loading pictures are served through the CDN, with the content stored in Azure Blob storage; that's the right way to design the application, and if we do it, the application is going to respond faster to the users. Thanks guys, with that we've reached the end of this complete cloud computing course. I hope you enjoyed this video; do like and share it. Thank you for watching, and stay tuned for more from Simplilearn.